DEVELOPMENT OF A WATERSHED-BASED MERCURY POLLUTION CHARACTERIZATION SYSTEM
To investigate total mercury loadings to streams in a watershed, we have developed a watershed-based source quantification model, the Watershed Mercury Characterization System. The system uses grid-based GIS modeling technology to calculate total soil mercury concentrations and ...
Using Virtual Testing for Characterization of Composite Materials
NASA Astrophysics Data System (ADS)
Harrington, Joseph
Composite materials are now serving in structural applications hitherto reserved for metals -- airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. They have high strength-to-weight ratios, are durable and resistant to environmental effects, have high impact strength, and can be manufactured in a variety of shapes. Generalized constitutive models are being developed to accurately model composite systems so they can be used in implicit and explicit finite element analysis. These models require extensive characterization of the composite material as input. The particular constitutive model of interest for this research is a three-dimensional orthotropic elasto-plastic composite material model that requires as input a total of 12 experimental stress-strain curves, yield stresses, and Young's modulus and Poisson's ratio in the material directions. Sometimes it is not possible to carry out the reliable experimental tests needed to characterize the composite material. One solution is to use virtual testing to fill the gaps in available experimental data. A Virtual Testing Software System (VTSS) has been developed to address the need for a less restrictive method of characterizing a three-dimensional orthotropic composite material. The system takes in the material properties of the constituents and completes all 12 of the necessary characterization tests using finite element (FE) models. Verification and validation test cases demonstrate the capabilities of the VTSS.
SCaLeM: A Framework for Characterizing and Analyzing Execution Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Manzano Franco, Joseph B.; Krishnamoorthy, Sriram
2014-10-13
As scalable parallel systems evolve towards more complex nodes with many-core architectures and larger trans-petascale and upcoming exascale deployments, there is a need to understand, characterize and quantify the underlying execution models being used on such systems. Execution models are a conceptual layer between applications and algorithms on one side and, on the other, the underlying parallel hardware and systems software on which those applications run. This paper presents the SCaLeM (Synchronization, Concurrency, Locality, Memory) framework for characterizing and analyzing execution models. SCaLeM consists of three basic elements: attributes, compositions and mappings of these compositions to abstract parallel systems. The fundamental Synchronization, Concurrency, Locality and Memory attributes are used to characterize each execution model, while combinations of those attributes in the form of compositions are used to describe the primitive operations of the execution model. The mapping of the execution model's primitive operations, described by compositions, to an underlying abstract parallel system can be evaluated quantitatively to determine its effectiveness. Finally, SCaLeM also enables the representation and analysis of applications in terms of execution models, for the purpose of evaluating the effectiveness of such mappings.
Workload Characterization of CFD Applications Using Partial Differential Equation Solvers
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Workload characterization is used for modeling and evaluation of computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.
Siggers, Keri A; Lesser, Cammie F
2008-07-17
Microbial pathogens utilize complex secretion systems to deliver proteins into host cells. These effector proteins target and usurp host cell processes to promote infection and cause disease. While secretion systems are conserved, each pathogen delivers its own unique set of effectors. The identification and characterization of these effector proteins has been difficult, often limited by the lack of detectable signal sequences and functional redundancy. Model systems including yeast, worms, flies, and fish are being used to circumvent these issues. This technical review details the versatility and utility of yeast Saccharomyces cerevisiae as a system to identify and characterize bacterial effectors.
Modeling and characterization of supercapacitors for wireless sensor network applications
NASA Astrophysics Data System (ADS)
Zhang, Ying; Yang, Hengzhao
A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining similar performance as other models during charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
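The two-branch structure described above lends itself to a few lines of simulation. The following Python sketch integrates a fast charging branch, a slow redistribution branch, and a leakage resistance with forward Euler; all parameter values are illustrative placeholders, not the experimentally identified values from the paper.

```python
# Minimal two-branch RC supercapacitor model: a fast branch (r1, c1) that
# takes charge from the source, a slow branch (r2, c2) that models charge
# redistribution, and a leakage resistance for self-discharge.
# Parameter values below are hypothetical, for illustration only.
def simulate(v_charge=2.5, t_charge=60.0, t_total=300.0, dt=0.01,
             r1=0.05, c1=22.0, r2=50.0, c2=5.0, r_leak=5e3):
    """Charge the fast branch for t_charge seconds, then open-circuit
    the terminals and let charge redistribute and leak."""
    v1 = v2 = 0.0                     # branch capacitor voltages
    t = 0.0
    while t < t_total:
        if t < t_charge:              # charging phase: source drives branch 1
            i1 = (v_charge - v1) / r1
        else:                         # open circuit: no external current
            i1 = 0.0
        i12 = (v1 - v2) / r2          # redistribution current into branch 2
        i_leak = v1 / r_leak          # self-discharge current
        v1 += dt * (i1 - i12 - i_leak) / c1
        v2 += dt * i12 / c2
        t += dt
    return v1, v2

v1, v2 = simulate()
```

After the charging window, the terminal voltage sags as charge migrates into the slow branch, which is the redistribution behavior the model is built to capture.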
Health Monitoring for Airframe Structural Characterization
NASA Technical Reports Server (NTRS)
Munns, Thomas E.; Kent, Renee M.; Bartolini, Antony; Gause, Charles B.; Borinski, Jason W.; Dietz, Jason; Elster, Jennifer L.; Boyd, Clark; Vicari, Larry; Ray, Asok;
2002-01-01
This study established requirements for structural health monitoring systems, identified and characterized a prototype structural sensor system, developed sensor interpretation algorithms, and demonstrated the sensor systems on operationally realistic test articles. Fiber-optic corrosion sensors (i.e., moisture and metal ion sensors) and low-cycle fatigue sensors (i.e., strain and acoustic emission sensors) were evaluated to validate their suitability for monitoring aging degradation, characterize sensor performance in aircraft environments, and demonstrate placement processes and multiplexing schemes. In addition, a unique micromachined multi-measurand sensor concept was developed and demonstrated. The results show that structural degradation of aircraft materials can be effectively detected and characterized using available and emerging sensors. A key component of the structural health monitoring capability is the ability to interpret the information provided by the sensor system in order to characterize the structural condition. Novel deterministic and stochastic fatigue damage development and growth models were developed for this program. These models enable real-time characterization and assessment of structural fatigue damage.
Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming
2015-01-01
High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
Material characterization and modeling with shearography
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Callahan, Virginia
1993-01-01
Shearography has emerged as a useful technique for nondestructive evaluation and materials characterization of aerospace materials. A suitable application of the technique is determining the response of debonds at foam-metal interfaces, such as the TPS system on the External Tank. The main thrust is to develop a model that allows valid interpretation of shearographic information on TPS-type systems. Confirmation of the model with shearographic data will be performed.
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Viles, Charles L.; Park, Stephen K.; Reichenbach, Stephen E.; Sieracki, Michael E.
1992-01-01
Consideration is given to a model-based method for estimating the spatial frequency response of a digital-imaging system (e.g., a CCD camera) that is modeled as a linear, shift-invariant image acquisition subsystem that is cascaded with a linear, shift-variant sampling subsystem. The method characterizes the 2D frequency response of the image acquisition subsystem to beyond the Nyquist frequency by accounting explicitly for insufficient sampling and the sample-scene phase. Results for simulated systems and a real CCD-based epifluorescence microscopy system are presented to demonstrate the accuracy of the method.
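The role of sample-scene phase in an undersampled system, central to the abstract above, can be seen in a toy calculation. The Python sketch below is a minimal illustration (not the authors' estimation method): it samples a sinusoid at exactly the Nyquist rate and shows that the apparent amplitude then depends entirely on where the samples fall relative to the scene.

```python
import math

def sampled_amplitude(freq, phase, n=64):
    """Peak amplitude recovered from n unit-spaced samples of
    cos(2*pi*freq*x + phase), with freq in cycles per sample."""
    samples = [math.cos(2 * math.pi * freq * k + phase) for k in range(n)]
    return max(abs(s) for s in samples)

# At the Nyquist frequency (0.5 cycles/sample) every sample equals
# +/-cos(phase), so the measured amplitude is set by sample-scene phase:
a_inphase = sampled_amplitude(0.5, 0.0)             # samples hit the peaks
a_quadrature = sampled_amplitude(0.5, math.pi / 2)  # samples hit the zeros
```

The in-phase case recovers the full amplitude while the quadrature case recovers essentially nothing, which is why a characterization method working beyond Nyquist must account for sample-scene phase explicitly.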
Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A
2017-08-04
This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution that can be understood as a virtual pilot cell with characteristics identical to those of the specific small solar cell installed on the sensor node, and that moreover adapts easily to changes in the actual environmental conditions, panel aging, etc. Experimental comparison with a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.
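The two pieces the abstract names can be sketched in a few lines of Python: a small multilayer perceptron that estimates the panel's open-circuit voltage from ambient measurements, and the fractional open-circuit-voltage rule that turns the estimate into an operating setpoint. The weights and the factor k = 0.76 below are illustrative placeholders, not the trained values from the paper.

```python
import math

def mlp_voc(irradiance, temperature, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer perceptron with tanh activations, mapping
    (irradiance, temperature) to an open-circuit voltage estimate."""
    x = (irradiance, temperature)
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Toy weights: V_oc rises with irradiance and falls with temperature.
w_hidden = [(0.002, -0.01), (0.001, -0.02)]
b_hidden = [0.0, 0.0]
w_out = [10.0, 5.0]
b_out = 12.0

v_oc = mlp_voc(800.0, 25.0, w_hidden, b_hidden, w_out, b_out)
v_mpp = 0.76 * v_oc   # classical fractional-V_oc setpoint (k assumed)
```

In the deployed node the weights would come from training on the locally acquired characterization data rather than being hand-set as here.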
Telerobotic system performance measurement - Motivation and methods
NASA Technical Reports Server (NTRS)
Kondraske, George V.; Khoury, George J.
1992-01-01
A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy are presented. Consideration is given to general systems performance theory (GSPT), originally developed to tackle human performance problems, as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems comprising both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols that elicit application-independent data is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, Manuel
The objective of this work was to characterize the neutron irradiation system consisting of americium-241 beryllium (241AmBe) neutron sources placed in a polyethylene shielding for use at the Sandia National Laboratories (SNL) Low Dose Rate Irradiation Facility (LDRIF). With a total activity of 0.3 TBq (9 Ci), the source consisted of three recycled 241AmBe sources of different activities that had been combined into a single source. The source in its polyethylene shielding will be used in neutron irradiation testing of components. The characterization of the source-shielding system was necessary to evaluate the radiation environment for future experiments. Characterization of the source was also necessary because the documentation for the three component sources and their relative alignment within the Special Form Capsule (SFC) was inadequate. The system consisting of the source and shielding was modeled using the Monte Carlo N-Particle transport code (MCNP). The model was validated by benchmarking it against measurements using multiple techniques. To characterize the radiation fields over the full spatial geometry of the irradiation system, it was necessary to use a number of instruments of varying sensitivities. First, computed photon radiography assisted in determining the orientation of the component sources. With the capsule properly oriented inside the shielding, the neutron spectra were measured using a variety of techniques. An N-probe Microspec and a neutron Bubble Dosimeter Spectrometer (BDS) set were used to characterize the neutron spectra/field in several locations. In the third technique, neutron foil activation was used to ascertain the neutron spectra. A high-purity germanium (HPGe) detector was used to characterize the photon spectrum. The experimentally measured spectra and the MCNP results compared well.
Once the MCNP model was validated to an adequate level of confidence, parametric analyses were performed on the model to optimize potential experimental configurations and neutron spectra for component irradiation. The final products of this work are an MCNP model validated by measurements, an overall understanding of the neutron irradiation system including photon/neutron transport and effective dose rates throughout the system, and possible experimental configurations for future irradiation of components.
Hydrogeological Characterization of the Middle Magdalena Valley - Colombia
NASA Astrophysics Data System (ADS)
Arenas, Maria Cristina; Riva, Monica; Donado, Leonardo David; Guadagnini, Alberto
2017-04-01
We provide a detailed hydrogeological characterization of the complex aquifer system of the Middle Magdalena Valley, Colombia. The system comprises 3 sub-basins within which 7 blocks have been identified for active exploration and potential production of oil and gas. As such, there is a critical need to establish modern water resources management practices in the area to accommodate the variety of social, environmental and industrial needs. We do so by starting from a detailed hydrogeological characterization of the system and focus on: (a) a detailed hydrogeological reconnaissance of the area leading to the definition of the main hydrogeological units; (b) the collection, organization and analysis of daily climatic data from 39 stations available in the region; and (c) the assessment of the groundwater flow circulation through the formulation of a conceptual and a mathematical model of the subsurface system. Groundwater flow is simulated in the SAM 1.1 aquifer located in the Middle Magdalena Valley with the objective of presenting and evaluating alternative conceptual hydrogeological models. We focus here on modeling results at system equilibrium (i.e., under steady-state conditions) and assess the value of available information in the context of the candidate modeling strategies we consider. Results of our modeling effort are conducive to the characterization of the distributed hydrogeological budget and the assessment of critical areas as a function of the conceptualization of the system functioning and data availability.
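The steady-state groundwater-flow step mentioned above can be illustrated with a toy numerical model. The Python sketch below is only a minimal illustration of the idea, not the SAM 1.1 model: it solves Laplace's equation for hydraulic head on a small homogeneous grid by Jacobi iteration, with fixed heads on the left and right boundaries and no-flow (mirror) conditions on the top and bottom.

```python
# Hedged illustration: steady-state head distribution from a Jacobi
# finite-difference solve of Laplace's equation on a homogeneous grid.
# Grid size, head values, and iteration count are arbitrary examples.
def steady_state_heads(nx=20, ny=20, h_left=100.0, h_right=90.0, iters=500):
    # initial guess: linear head profile between the fixed boundaries
    h = [[h_left + (h_right - h_left) * j / (nx - 1) for j in range(nx)]
         for _ in range(ny)]
    for _ in range(iters):
        new = [row[:] for row in h]
        for i in range(ny):
            for j in range(1, nx - 1):          # j = 0, nx-1 stay fixed
                up = h[i - 1][j] if i > 0 else h[i + 1][j]       # mirror
                down = h[i + 1][j] if i < ny - 1 else h[i - 1][j]
                new[i][j] = 0.25 * (h[i][j - 1] + h[i][j + 1] + up + down)
        h = new
    return h

heads = steady_state_heads()
```

A real model would add heterogeneous conductivity, recharge, and calibration against observed heads, but the relaxation idea is the same.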
Development of a detector model for generation of synthetic radiographs of cargo containers
NASA Astrophysics Data System (ADS)
White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.
2008-05-01
Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
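The mapping from an uncollided-photon transport calculation to a realistic radiograph can be sketched in a few lines. The Python below is a hedged, minimal stand-in for such a detector-response model (gain, offset, and PSF values are illustrative, not calibrated to any real system): it scales an ideal transmission map by system gain, blurs rows with a 1D PSF to mimic finite spatial resolution, and optionally adds counting noise via a Gaussian approximation to Poisson statistics.

```python
import random

def detector_response(transport_image, psf, gain=100.0, offset=5.0,
                      noise=True, seed=0):
    """Convert an ideal transmission map (rows of values in [0, 1])
    into synthetic detector counts. All parameters are illustrative."""
    rng = random.Random(seed)
    half = len(psf) // 2
    out = []
    for row in transport_image:
        blurred = []
        for i in range(len(row)):
            acc = 0.0
            for j, w in enumerate(psf):
                k = min(max(i + j - half, 0), len(row) - 1)  # clamp at edges
                acc += w * row[k]
            blurred.append(acc)
        pixels = []
        for p in blurred:
            mean = gain * p + offset        # expected detector counts
            if noise:                       # Gaussian approx. to Poisson
                mean = max(0.0, rng.gauss(mean, mean ** 0.5))
            pixels.append(mean)
        out.append(pixels)
    return out

ideal = [[1.0, 1.0, 0.2, 1.0, 1.0]]         # dense object in the middle
radiograph = detector_response(ideal, psf=[0.25, 0.5, 0.25], noise=False)
```

Calibrating gain, PSF width, and noise level against radiographs of well-characterized objects is what ties such a model to a fielded system.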
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
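One concrete ingredient of such a probabilistic load representation is an empirical delay distribution from which percentiles can be read off during calibration. The Python sketch below is a generic illustration (the delay values are invented, and this is not the patented method itself): it computes interpolated percentiles of measured background delays.

```python
def percentile(samples, q):
    """Empirical q-th percentile (0..100) with linear interpolation
    between order statistics."""
    s = sorted(samples)
    if len(s) == 1:
        return s[0]
    pos = (q / 100.0) * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

delays_ms = [12, 15, 11, 40, 13, 14, 90, 12, 16, 13]   # example measured RTTs
median = percentile(delays_ms, 50)   # typical background delay
tail = percentile(delays_ms, 90)    # persistence of delay variability
```

Feeding median and tail delay statistics into a prediction model captures the statistical persistence of performance variability that point averages miss.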
On looking into the Black Box: Prospects and Limits in the Search for Mental Models
1985-05-01
particularly in terms of the ways in which humans understand systems. Norman [1983] characterizes this understanding as messy, sloppy, incomplete, and ... Kleinman, et al., 1971]). However, for tasks involving only monitoring [Smallwood, 1967; Sheridan, 1970], especially when apparent discontinuities ... Norman [1983] uses the word "conceptualization" to characterize researchers' models of humans' mental models. This characterization serves to
Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida
By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...
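The calibration loop described above can be illustrated with a toy genetic algorithm. In the sketch below (Python, all names and data invented for illustration), the expensive MODFLOW run is replaced by a one-line surrogate "aquifer response" h(x) = a*x + b, and the GA fits its two parameters to synthetic observed heads; the real tool would evaluate fitness with a MODFLOW2005 simulation instead.

```python
import random

def fit_ga(xs, observed, pop_size=40, generations=60, seed=1):
    """Truncation-selection GA fitting (a, b) of h(x) = a*x + b
    to observed heads by minimizing squared misfit."""
    rng = random.Random(seed)

    def misfit(params):
        a, b = params
        return sum((a * x + b - h) ** 2 for x, h in zip(xs, observed))

    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            (a1, b1), (a2, b2) = rng.sample(elite, 2)
            a = rng.choice((a1, a2)) + rng.gauss(0, 0.1)  # crossover + mutation
            b = rng.choice((b1, b2)) + rng.gauss(0, 0.1)
            children.append((a, b))
        pop = elite + children
    return min(pop, key=misfit)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
observed = [2.0 + 0.5 * x for x in xs]   # synthetic "heads": a = 0.5, b = 2
a_fit, b_fit = fit_ga(xs, observed)
```

The history-matching step in the abstract is the same loop with model parameters encoding aquifer properties and observations coming from monitoring wells.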
Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan
2010-02-01
The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optical tunable filters. The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF (Acousto-Optic Tunable Filter) by matching the acquired and modeled spectra of the HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning curve characterization and corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicated that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems and thereby a highly competitive alternative to the existing calibration methods.
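Once drive frequencies have been matched to known lamp lines, fitting the tuning curve is a small regression problem. The Python sketch below is a hedged illustration (the power-law form and the synthetic data are assumptions, not the paper's calibration): it fits wavelength = c * f**(-b) by least squares in log-log space.

```python
import math

def fit_tuning_curve(freqs_mhz, wavelengths_nm):
    """Least-squares fit of log(wavelength) = log(c) - b*log(f),
    returning (c, b) of the power-law tuning curve."""
    lx = [math.log(f) for f in freqs_mhz]
    ly = [math.log(w) for w in wavelengths_nm]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = -(sum((x - mx) * (y - my) for x, y in zip(lx, ly))
          / sum((x - mx) ** 2 for x in lx))
    c = math.exp(my + b * mx)
    return c, b

freqs = [60.0, 80.0, 100.0, 120.0]            # drive frequencies, MHz
waves = [5.0e4 * f ** -1.1 for f in freqs]    # synthetic matched lines, nm
c, b = fit_tuning_curve(freqs, waves)
```

With the fitted curve in hand, any target wavelength can be converted back to the RF drive frequency that selects it.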
Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.
Haimes, Yacov Y
2018-01-01
The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.
Color reproduction system based on color appearance model and gamut mapping
NASA Astrophysics Data System (ADS)
Cheng, Fang-Hsuan; Yang, Chih-Yuan
2000-06-01
With the progress of computing, peripherals such as color monitors and printers are often used to generate color images. However, the color reproduced across media is usually perceived differently. Basically, the influencing factors are device calibration and characterization, viewing condition, device gamut and human psychology. In this thesis, a color reproduction system based on a color appearance model and gamut mapping is proposed. It consists of four parts: device characterization, color management technique, color appearance model and gamut mapping.
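The gamut-mapping part can be illustrated with the simplest possible strategy. The Python sketch below is a generic illustration, not the thesis's method: it desaturates an out-of-gamut color toward its gray axis just enough that every channel lands in [0, 1], assuming the gray point itself is in gamut.

```python
def clip_to_gamut(rgb):
    """Naive gamut mapping: move an out-of-gamut color toward its gray
    axis until all channels lie in [0, 1]. Assumes the gray point
    (channel mean) is itself within [0, 1]."""
    gray = sum(rgb) / 3.0
    # largest t in [0, 1] such that gray + t*(c - gray) is in range
    t = 1.0
    for c in rgb:
        if c > 1.0:
            t = min(t, (1.0 - gray) / (c - gray))
        elif c < 0.0:
            t = min(t, (0.0 - gray) / (c - gray))
    return tuple(gray + t * (c - gray) for c in rgb)

mapped = clip_to_gamut((1.2, 0.4, -0.1))   # out-of-gamut example color
```

Chroma-preserving or perceptual mappings are more sophisticated, but they follow the same pattern of projecting colors onto the device gamut boundary.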
AVIRIS data and neural networks applied to an urban ecosystem
NASA Technical Reports Server (NTRS)
Ridd, Merrill K.; Ritter, Niles D.; Bryant, Nevin A.; Green, Robert O.
1992-01-01
Urbanization is expanding on every continent. Although urban/industrial areas occupy a small percentage of the total landscape of the earth, their influence extends far beyond their borders, affecting terrestrial, aquatic, and atmospheric systems globally. Yet little has been done to characterize urban ecosystems or their linkages to other systems horizontally or vertically. With remote sensing we now have the tools to characterize, monitor, and model urban landscapes world-wide. However, the remote sensing performed on cities so far has concentrated on land-use patterns as distinct from land-cover or composition. The popular Anderson system is entirely land-use oriented in urban areas. This paper begins with the premise that characterizing the biophysical composition of urban environments is fundamental to understanding urban/industrial ecosystems, and, in turn, supports the modeling of other systems interfacing with urban systems. Further, it is contended that remote sensing is a tool poised to provide the biophysical composition data to characterize urban landscapes.
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Characterizing and modeling the dynamics of online popularity.
Ratkiewicz, Jacob; Fortunato, Santo; Flammini, Alessandro; Menczer, Filippo; Vespignani, Alessandro
2010-10-08
Online popularity has an enormous impact on opinions, culture, policy, and profits. We provide a quantitative, large scale, temporal analysis of the dynamics of online content popularity in two massive model systems: the Wikipedia and an entire country's Web space. We find that the dynamics of popularity are characterized by bursts, displaying characteristic features of critical systems such as fat-tailed distributions of magnitude and interevent time. We propose a minimal model combining the classic preferential popularity increase mechanism with the occurrence of random popularity shifts due to exogenous factors. The model recovers the critical features observed in the empirical analysis of the systems analyzed here, highlighting the key factors needed in the description of popularity dynamics.
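The minimal model the abstract proposes can be written down directly. The Python sketch below is an illustrative simulation with invented parameter values, not the paper's calibrated model: popularity grows by preferential (rich-get-richer) increase, punctuated by random exogenous bursts of attention.

```python
import random

def simulate_popularity(n_items=500, steps=5000, shift_prob=0.01,
                        shift_boost=50, seed=42):
    """Preferential popularity increase plus random exogenous shifts.
    All parameters are illustrative."""
    rng = random.Random(seed)
    pop = [1] * n_items
    for _ in range(steps):
        total = sum(pop)
        # preferential increase: pick an item with probability
        # proportional to its current popularity
        r = rng.uniform(0, total)
        acc = 0
        for i, p in enumerate(pop):
            acc += p
            if r <= acc:
                pop[i] += 1
                break
        # exogenous event: a uniformly random item gets a burst
        if rng.random() < shift_prob:
            pop[rng.randrange(n_items)] += shift_boost
    return pop

pop = simulate_popularity()
```

Plotting the resulting popularity histogram on log-log axes shows the broad, burst-dominated distribution that the combination of the two mechanisms is meant to reproduce.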
Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.
Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N
2016-01-01
A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the ideality factor of the diode, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with higher accuracy than the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions.
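The three-parameter single-diode relation named in the abstract, I = IL - Io*(exp(V/Vt) - 1), is easy to evaluate directly. The Python sketch below uses illustrative parameter values (a hypothetical 60-cell module), not the paper's fitted values.

```python
import math

def pv_current(v, i_l=8.0, i_o=1e-9, n=1.3, cells=60, t_kelvin=298.15):
    """Terminal current of a PV module at voltage v (volts) from the
    three-parameter single-diode model. Parameter values are illustrative."""
    k_over_q = 8.617333262e-5             # Boltzmann constant / charge, V/K
    vt = cells * n * k_over_q * t_kelvin  # module-level modified thermal voltage
    return i_l - i_o * (math.exp(v / vt) - 1.0)

i_sc = pv_current(0.0)   # short-circuit current equals the photocurrent I_L
```

Sweeping v from zero upward traces the I-V characteristic curve: flat near the short-circuit current, then dropping steeply through the knee toward the open-circuit voltage.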
NASA Astrophysics Data System (ADS)
Anaya, A. A.; Padilla, I. Y.; Macchiavelli, R. E.
2011-12-01
Karst groundwater systems are highly productive and provide an important fresh water resource for human development and ecological integrity. Their high productivity is often associated with conduit flow and high matrix permeability. The same characteristics that make these aquifers productive also make them highly vulnerable to contamination and a likely route for contaminant exposure. Of particular interest are chlorinated organic contaminants and phthalates derived from industrial solvents and plastic by-products. These chemicals have been identified as potential precursors of pre-term birth, a leading cause of neonatal complications with a significant health and societal cost. The general objectives of this work are to: (1) develop fundamental knowledge and determine the processes controlling the release, mobility, persistence, and possible pathways of contaminants in karst groundwater systems, and (2) characterize transport processes in conduit- and diffusion-dominated flow under base-flow and storm-flow conditions. The work presented herein focuses on the development of geo-hydro-statistical tools to characterize flow and transport processes under different flow regimes. Multidimensional, laboratory-scale Geo-Hydrobed models were developed and tested for this purpose. The models consist of stainless-steel tanks containing karstified limestone blocks collected from the karst aquifer formation of northern Puerto Rico. The models include a network of sampling wells to monitor flow, pressure, and solute concentrations temporally and spatially. Experimental work entailed making a series of point injections in wells while monitoring the hydraulic response in other wells. Statistical mixed models were applied to spatial probabilities of hydraulic response and weighted injected-volume data, and were used to determine the best spatial correlation structure to represent paths of preferential flow in the limestone units under different groundwater flow regimes.
Preliminary testing of the karstified models shows that the system can be used to represent the variable transport regime characterized by conduit and diffuse flow in karst systems. Initial hydraulic characterization indicates a highly heterogeneous system resulting in large preferential flow components. Future work involves characterization of the dual-porosity system using conservative tracers, fate and transport experiments using phthalates and chlorinated solvents, geo-temporal statistical modeling, and the testing of "green" remediation technologies in karst groundwater. This work is supported by the U.S. Department of Energy, Savannah River (Grant Award No. DE-FG09-07SR22571), and the National Institute of Environmental Health Sciences (NIEHS, Grant Award No. P42ES017198).
Nishikawa, Tracy
1997-01-01
Two alternative conceptual models of the physical processes controlling seawater intrusion in a coastal basin in California, USA, were tested to identify a likely principal pathway for seawater intrusion. The conceptual models were tested using a two-dimensional, finite-element groundwater flow and transport model; the pathway was identified by the conceptual model that best replicated the historical data. The numerical model was applied in cross section to a submarine canyon that is a main avenue for seawater to enter the aquifer system underlying the study area. Both models are characterized by a heterogeneous, layered, water-bearing aquifer. However, the first model is characterized by flat-lying aquifer layers and by a high value of hydraulic conductivity in the basal aquifer layer, which is thought to be a principal conduit for seawater intrusion. The second model is characterized by offshore folding, which was modeled as a very nearshore outcrop, thereby providing a shorter path for seawater to intrude. General conclusions are that: 1) the aquifer system is best modeled as a flat, heterogeneous, layered system; 2) relatively thin basal layers with relatively high values of hydraulic conductivity are the principal pathways for seawater intrusion; and 3) continuous clay layers of low hydraulic conductivity play an important role in controlling the movement of seawater.
A forward model-based validation of cardiovascular system identification
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.
2001-01-01
We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.
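The inverse-modeling step can be illustrated with a toy version of the identification problem: simulate a known "forward" coupling as a short FIR filter, then recover its dynamics from input/output fluctuations by least squares. The two-tap dynamics, noise level, and record length below are illustrative assumptions, not the paper's forward model.

```python
import numpy as np

def identify_fir(x, y, order):
    """Estimate an FIR coupling h (length `order`) from input x and output y
    by ordinary least squares on the model y[n] = sum_k h[k] * x[n-k]."""
    N = len(x)
    # Regression matrix whose k-th column is the input delayed by k samples
    X = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]]) for k in range(order)])
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

rng = np.random.default_rng(0)
h_true = np.array([0.5, 0.3, 0.1])          # "forward model" dynamics (illustrative)
x = rng.standard_normal(2000)               # beat-to-beat input fluctuations
y = np.convolve(x, h_true)[:len(x)]         # forward simulation of the coupling
y += 0.01 * rng.standard_normal(len(x))     # small measurement noise
h_est = identify_fir(x, y, order=3)
print(np.round(h_est, 2))                   # recovers approximately [0.5, 0.3, 0.1]
```

The estimated dynamics can then be compared term-by-term with the known forward dynamics, which is the essence of the validation strategy described above.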
Analyses of ACPL thermal/fluid conditioning system
NASA Technical Reports Server (NTRS)
Stephen, L. A.; Usher, L. H.
1976-01-01
Results of engineering analyses are reported. Initial computations were made using a modified control transfer function in which the system's performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and system performance predictions were made. Parameters that were independently varied in these computations are listed. The system predictions used to characterize performance are primarily transient computer plots comparing the deviation between the average chamber temperature and the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.
Miyamoto, Tadayoshi; Manabe, Kou; Ueda, Shinya; Nakahara, Hidehiro
2018-05-01
What is the central question of this study? The lack of useful small-animal models for studying exercise hyperpnoea makes it difficult to investigate the underlying mechanisms of exercise-induced ventilatory abnormalities in various disease states. What is the main finding and its importance? We developed an anaesthetized-rat model for studying exercise hyperpnoea, using a respiratory equilibrium diagram for quantitative characterization of the respiratory chemoreflex feedback system. This experimental model will provide an opportunity to clarify the major determinant mechanisms of exercise hyperpnoea, and will be useful for understanding the mechanisms responsible for abnormal ventilatory responses to exercise in disease models. Exercise-induced ventilatory abnormalities in various disease states seem to arise from pathological changes of respiratory regulation. Although experimental studies in small animals are essential to investigate the pathophysiological basis of various disease models, the lack of an integrated framework for quantitatively characterizing respiratory regulation during exercise prevents us from resolving these problems. The purpose of this study was to develop an anaesthetized-rat model for studying exercise hyperpnoea for quantitative characterization of the respiratory chemoreflex feedback system. In 24 anaesthetized rats, we induced muscle contraction by stimulating bilateral distal sciatic nerves at low and high voltage to mimic exercise. We recorded breath-by-breath respiratory gas analysis data and cardiorespiratory responses while running two protocols to characterize the controller and plant of the respiratory chemoreflex. The controller was characterized by determining the linear relationship between end-tidal CO2 pressure (PETCO2) and minute ventilation (V̇E), and the plant by the hyperbolic relationship between V̇E and PETCO2. 
During exercise, the controller curve shifted upward without change in controller gain, accompanying increased oxygen uptake. The hyperbolic plant curve shifted rightward and downward depending on exercise intensity, as predicted by increased metabolism. Exercise intensity-dependent changes in operating points (V̇E and PETCO2) were estimated by integrating the controller and plant curves in a respiratory equilibrium diagram. In conclusion, we developed an anaesthetized-rat model for studying exercise hyperpnoea, using systems analysis for quantitative characterization of the respiratory system. This novel experimental model will be useful for understanding the mechanisms responsible for abnormal ventilatory responses to exercise in disease models. © 2018 Morinomiya University of Medical Sciences. Experimental Physiology © 2018 The Physiological Society.
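The equilibrium-diagram idea admits a closed-form sketch: with a linear controller V̇E = S·(P − B) and a hyperbolic plant P = K/V̇E, the operating point is the positive root of a quadratic. The gain, intercept, and metabolic constant below are illustrative values, not the fitted rat parameters.

```python
import math

def operating_point(S, B, K):
    """Intersection of the controller line VE = S*(P - B) with the plant
    hyperbola P = K/VE. Substitution gives VE**2 + S*B*VE - S*K = 0;
    the positive root is the physiological operating point."""
    VE = (-S * B + math.sqrt((S * B) ** 2 + 4.0 * S * K)) / 2.0
    P = K / VE
    return VE, P

# Illustrative values: controller gain S (L/min per mmHg), apnoeic intercept
# B (mmHg), plant constant K proportional to metabolic CO2 production
VE_rest, P_rest = operating_point(S=2.0, B=36.0, K=40.0 * 6.0)
VE_ex, P_ex = operating_point(S=2.0, B=36.0, K=40.0 * 12.0)  # doubled metabolism
print(round(VE_rest, 1), round(P_rest, 1), round(VE_ex, 1), round(P_ex, 1))
```

Doubling the plant constant shifts the plant curve rightward, and the computed operating point moves to a higher V̇E, mirroring the intensity-dependent shifts described in the abstract.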
System level modeling and component level control of fuel cells
NASA Astrophysics Data System (ADS)
Xue, Xingjian
This dissertation investigates fuel cell systems and related technologies in three aspects: (1) system-level dynamic modeling of both the PEM fuel cell (PEMFC) and the solid oxide fuel cell (SOFC); (2) development of a condition monitoring scheme for PEM fuel cell systems using a model-based statistical method; and (3) development of strategies and algorithms for precision control with potential application in energy systems. The dissertation first presents a system-level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operation: it makes the membrane function appropriately and improves durability. The low-temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two-phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon and builds a comprehensive model for the PEM fuel cell at the system level. The model features a complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated against Ballard stack experimental results from the open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. The anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard, incorporating a modified Nernst potential expression and heat/mass transfer into the analysis. 
The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the optimal design of tubular SOFC. With the system-level dynamic model as a basis, a framework for the robust, online monitoring of PEM fuel cell is developed in the dissertation. The monitoring scheme employs the Hotelling T2 based statistical scheme to handle the measurement noise and system uncertainties and identifies the fault conditions through a series of self-checking and conformal testing. A statistical sampling strategy is also utilized to improve the computation efficiency. Fuel/gas flow control is the fundamental operation for fuel cell energy systems. In the final part of the dissertation, a high-precision and robust tracking control scheme using piezoelectric actuator circuit with direct hysteresis compensation is developed. The key characteristic of the developed control algorithm includes the nonlinear continuous control action with the adaptive boundary layer strategy.
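The Hotelling T² check at the core of the monitoring scheme can be sketched as follows, assuming a multivariate-normal reference set of healthy-operation measurements; the data, dimensions, and significance level are illustrative.

```python
import numpy as np
from scipy.stats import f

def t2_limit(n, p, alpha=0.01):
    """Hotelling T^2 control limit for a new observation tested against an
    n-sample, p-variate reference set (standard F-distribution formula)."""
    return p * (n + 1) * (n - 1) / (n * (n - p)) * f.ppf(1.0 - alpha, p, n - p)

def t2_stat(x, ref):
    """T^2 distance of observation x from the reference data's mean,
    scaled by the inverse sample covariance."""
    mu = ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(ref, rowvar=False))
    d = x - mu
    return d @ S_inv @ d

rng = np.random.default_rng(1)
ref = rng.standard_normal((200, 3))      # healthy-operation reference measurements
x_ok = np.zeros(3)                       # observation near the reference mean
x_fault = np.array([4.0, -4.0, 4.0])     # observation far outside normal variation
lim = t2_limit(n=200, p=3)
print(t2_stat(x_ok, ref) < lim, t2_stat(x_fault, ref) > lim)
```

The statistic naturally accounts for correlated measurement noise, which is why T²-based schemes are favored for multivariate condition monitoring over per-channel thresholds.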
River Devices to Recover Energy with Advanced Materials (River DREAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMahon, Daniel P.
2013-07-03
The purpose of this project is to develop a generator called a Galloping Hydroelectric Energy Extraction Device (GHEED). It uses a galloping prism to convert water flow into linear motion. This motion is converted into electricity via a dielectric elastomer generator (DEG). The galloping mechanism and the DEG are combined to create a system to effectively generate electricity. This project has three research objectives: 1. Oscillator development and design: characterize galloping behavior, evaluate the effect of control-surface shape change on oscillator performance, and demonstrate shape change with water flow change. 2. Dielectric elastomer generator (DEG) characterization and modeling: characterize and model the performance of the DEG based on the oscillator design. 3. GHEED system modeling and integration: create numerical models for construction of a system performance model and define operating capabilities for this approach. Accomplishing these three objectives will result in the creation of a model that can be used to fully define the operating parameters and performance capabilities of a generator based on the GHEED design. This information will be used in the next phase of product development, the creation of an integrated laboratory-scale generator to confirm model predictions.
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method determines the variability of a manufacturing system having a plurality of subsystems. Each subsystem is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response, whose mean and variance values are related to those factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of the signal and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
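The coefficient-variation step can be sketched as a small Monte Carlo experiment, assuming two chained linear response models; all coefficient means, standard deviations, and factor settings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fitted response models for two chained subsystems (coefficients illustrative).
# Subsystem 1: y1 = a1 + b1*signal + c1*control; subsystem 2 takes y1 as input.
def subsystem1(signal, control, a1, b1, c1):
    return a1 + b1 * signal + c1 * control

def subsystem2(y1, control, a2, b2, c2):
    return a2 + b2 * y1 + c2 * control

n = 100_000
# Randomly vary the fitted coefficients around their estimates (mean, sd)
a1 = rng.normal(1.0, 0.05, n)
b1 = rng.normal(2.0, 0.10, n)
c1 = rng.normal(0.5, 0.02, n)
a2 = rng.normal(0.3, 0.05, n)
b2 = rng.normal(1.5, 0.05, n)
c2 = rng.normal(0.2, 0.02, n)

signal, control = 1.0, 2.0                   # candidate factor settings
y = subsystem2(subsystem1(signal, control, a1, b1, c1), control, a2, b2, c2)
print(round(y.mean(), 2), round(y.std(), 2))  # system-level mean and variability
```

Repeating this for different candidate (signal, control) settings and picking the setting that best meets the criterion is the optimization loop the method describes.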
Characterization of structural connections using free and forced response test data
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Huckelbridge, Arthur A.
1989-01-01
The accurate prediction of system dynamic response has often been limited by deficiencies in existing capabilities to adequately characterize connections. Connections between structural components are often mechanically complex and difficult to model accurately; improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified using a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems and does not require that the test data be measured directly at the connection locations.
NASA Technical Reports Server (NTRS)
Korsmeyer, David J.; Pinon, Elfego, III; Oconnor, Brendan M.; Bilby, Curt R.
1990-01-01
The documentation of the Trajectory Generation and System Characterization Model for the Cislunar Low-Thrust Spacecraft is presented in Technical and User's Manuals. The system characteristics and trajectories of low thrust nuclear electric propulsion spacecraft can be generated through the use of multiple system technology models coupled with a high fidelity trajectory generation routine. The Earth to Moon trajectories utilize near Earth orbital plane alignment, midcourse control dependent upon the spacecraft's Jacobian constant, and capture to target orbit utilizing velocity matching algorithms. The trajectory generation is performed in a perturbed two-body equinoctial formulation and the restricted three-body formulation. A single control is determined by the user for the interactive midcourse portion of the trajectory. The full spacecraft system characteristics and trajectory are provided as output.
A situation-response model for intelligent pilot aiding
NASA Technical Reports Server (NTRS)
Schudy, Robert; Corker, Kevin
1987-01-01
An intelligent pilot aiding system needs models of pilot information processing to provide the computational basis for successful cooperation between the pilot and the aiding system. By combining artificial intelligence concepts with the human information processing model of Rasmussen, an abstraction hierarchy of states of knowledge, processing functions, and shortcuts is developed that is useful for characterizing the information processing of both the pilot and the aiding system. This approach is used in the conceptual design of a real-time intelligent aiding system for flight crews of transport aircraft. One promising result was the tentative identification of a particular class of information processing shortcuts, from situation characterizations to appropriate responses, as the most important reliable pathway for dealing with complex, time-critical situations.
Antolín, Diego; Calvo, Belén; Martínez, Pedro A.
2017-01-01
This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution that can be understood as a virtual pilot cell with characteristics identical to those of the specific small solar cell installed on the sensor node, and that also allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions. PMID:28777330
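The fractional open-circuit-voltage idea behind VOC algorithms can be sketched with a simple stand-in estimator in place of the trained perceptron; the voltage coefficients and the 0.76 fraction below are typical textbook values, not the paper's characterization data.

```python
import math

# Stand-in for the trained neural network: a simple Voc estimator from
# irradiance and cell temperature (coefficients are illustrative assumptions)
def estimate_voc(irradiance_w_m2, temp_c):
    voc_stc = 0.60                      # open-circuit voltage at standard conditions (V)
    dv_dt = -0.0023                     # temperature coefficient (V/degC)
    # Voc rises roughly logarithmically with irradiance
    return voc_stc + 0.025 * math.log(irradiance_w_m2 / 1000.0) + dv_dt * (temp_c - 25.0)

def vmp_setpoint(irradiance_w_m2, temp_c, k=0.76):
    """Fractional-VOC rule: operate the cell at k * Voc, where k is the
    usual empirical 0.7-0.8 fraction of the open-circuit voltage."""
    return k * estimate_voc(irradiance_w_m2, temp_c)

print(round(vmp_setpoint(1000.0, 25.0), 3))   # setpoint at standard test conditions
```

The benefit of an estimator-based scheme, as in the paper, is that the setpoint tracks irradiance and temperature without periodically disconnecting the panel to measure Voc directly.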
NASA Technical Reports Server (NTRS)
Wong, J. T.; Andre, W. L.
1981-01-01
A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure: a partially ordered set. It is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models for which the result is applicable.
Propagation issues for emerging mobile and portable communications: A systems perspective
NASA Technical Reports Server (NTRS)
Golshan, Nasser
1993-01-01
The viewpoint of a system engineer regarding the format of propagation information and models suitable for the design of mobile and portable satellite communications systems for the following services: audio broadcast, two way voice, and packet data is presented. Topics covered include: propagation impairments for portable indoor reception in satellite communications systems; propagation impairments and mitigation techniques for mobile satellite communications systems; characterization of mobile satellite communications channels in the presence of roadside blockage when interleaving and FEC coding are implemented; characterization of short-term mobile satellite signal variations; and characterization of long-term signal variations.
Mathematical model for comparing multi-level economic systems
NASA Astrophysics Data System (ADS)
Brykalov, S. M.; Kryanev, A. V.
2017-12-01
A mathematical model (scheme) for the multi-level comparison of economic systems, each characterized by a system of indices, is worked out. In the model, expert assessments and forecasts of the indicators of the economic systems under consideration can be used, and uncertainty in the estimated parameter values or expert estimations can be taken into account. The model uses a multi-criteria approach based on Pareto solutions.
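The Pareto-based comparison can be sketched directly: a system is retained if no other system is at least as good on every index and strictly better on at least one. The systems and index values below are purely illustrative.

```python
def pareto_front(systems):
    """Return the Pareto-efficient systems: those not dominated by any other.
    Each system is (name, indices), with higher index values preferred."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [name for name, idx in systems
            if not any(dominates(other, idx) for _, other in systems)]

# Illustrative economic systems scored on three indicators (higher is better)
systems = [("A", (3, 5, 2)), ("B", (4, 4, 4)), ("C", (2, 3, 1)), ("D", (4, 4, 4))]
print(pareto_front(systems))   # C is dominated by A and drops out
```

Note that incomparable systems (such as A and B here) are both kept, which is exactly why a multi-criteria comparison yields a set of solutions rather than a single ranking.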
Automated clustering-based workload characterization
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Menasce, Daniel A.; Yesha, Yelena
1996-01-01
The demands placed on the mass storage systems at various federal agencies and national laboratories are continuously increasing in intensity. This forces system managers to constantly monitor the system, evaluate the demand placed on it, and tune it appropriately using either heuristics based on experience or analytic models. Performance models require an accurate workload characterization, which can be a laborious and time-consuming process. It became evident from our experience that a tool is necessary to automate the workload characterization process. This paper presents the design and discusses the implementation of a tool for workload characterization of mass storage systems. The main features of the tool discussed here are: (1) Automatic support for peak-period determination. Histograms of system activity are generated and presented to the user for peak-period determination; (2) Automatic clustering analysis. The data collected from the mass storage system logs are clustered using clustering algorithms and tightness measures to limit the number of generated clusters; (3) Reporting of varied file statistics. The tool computes several statistics on file sizes, such as average, standard deviation, minimum, maximum, and frequency, as well as average transfer time. These statistics are given on a per-cluster basis; (4) Portability. The tool can easily be used to characterize the workload in mass storage systems of different vendors. The user needs to specify, through a simple log description language, how a specific log should be interpreted. The rest of this paper is organized as follows. Section two presents basic concepts in workload characterization as they apply to mass storage systems. Section three describes clustering algorithms and tightness measures. The following section presents the architecture of the tool. Section five presents some results of workload characterization using the tool. Finally, section six presents some concluding remarks.
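The clustering step can be sketched with a plain k-means on a single log-scaled feature; the synthetic "log" of file sizes below is an illustrative stand-in for real mass storage records, and the initialization scheme is one simple choice among many.

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain k-means on a 1-D feature (e.g. log10 file size), returning
    cluster centers and per-point labels."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(k)])
    return centers, labels

# Synthetic storage log: many MB-scale files plus a population of GB-scale ones
rng = np.random.default_rng(3)
sizes = np.concatenate([rng.normal(6.0, 0.5, 300),   # log10(bytes), MB-scale
                        rng.normal(9.0, 0.5, 50)])   # log10(bytes), GB-scale
centers, labels = kmeans_1d(sizes, k=2)
for j in np.argsort(centers):
    print(f"cluster center 10^{centers[j]:.1f} bytes: {(labels == j).sum()} files")
```

Per-cluster statistics such as mean, standard deviation, and extremes then follow directly from the label vector, mirroring the tool's per-cluster reporting.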
Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth
2007-01-01
A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without the recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding for instance the degree of polymerization, the distribution of a given monomer in different copolymers, as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated to each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4+5 could be characterized with respect to the above-mentioned parameters.
Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...
Kaunisto, Erik; Marucci, Mariagrazia; Borgquist, Per; Axelsson, Anders
2011-10-10
The time required for the design of a new delivery device can be sensibly reduced if the release mechanism is understood and an appropriate mathematical model is used to characterize the system. Once all the model parameters are obtained, in silico experiments can be performed, to provide estimates of the release from devices with different geometries and compositions. In this review coated and matrix systems are considered. For coated formulations, models describing the diffusional drug release, the osmotic pumping drug release, and the lag phase of pellets undergoing cracking in the coating due to the build-up of a hydrostatic pressure are reviewed. For matrix systems, models describing pure polymer dissolution, diffusion in the polymer and drug release from swelling and eroding polymer matrix formulations are reviewed. Importantly, the experiments used to characterize the processes occurring during the release and to validate the models are presented and discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
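One widely used release model of the kind reviewed here, the Korsmeyer-Peppas power law, can be fitted in a few lines; the data below are synthetic ideal Fickian release, not experimental values from the review.

```python
import numpy as np

def fit_power_law(t, frac_released):
    """Fit the Korsmeyer-Peppas power law M_t/M_inf = k * t**n by linear
    least squares in log-log space. The exponent n hints at the mechanism:
    n ~ 0.5 suggests Fickian diffusion, n ~ 1.0 swelling/erosion control."""
    logt, logf = np.log(t), np.log(frac_released)
    n, logk = np.polyfit(logt, logf, 1)
    return np.exp(logk), n

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # sampling times (hours)
frac = 0.2 * np.sqrt(t)                   # synthetic diffusional release data
k, n = fit_power_law(t, frac)
print(round(k, 3), round(n, 3))           # recovers k = 0.2, n = 0.5 exactly here
```

Once the parameters are estimated from release data, the same expression supports the in silico "what-if" experiments described above, e.g. predicting release at unmeasured times. The power law is only valid for roughly the first 60% of release, a caveat worth keeping for real data.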
Metabolic Mapping of Breast Cancer with Multiphoton Spectral and Lifetime Imaging
2007-03-01
spectral and lifetime characterization of NADH may be used to reveal metabolic changes in vivo and has potential to be used as an early diagnostic... combined spectral lifetime imaging modality will help for characterization of breast cancer cells from cell culture based models to a relevant in... spectral and lifetime system and integrated into a multiphoton fluorescence excitation microscopy system... Calibrated and characterized this
Characterization of Metal Matrix Composites
NASA Technical Reports Server (NTRS)
Daniel, I. M.; Chun, H. J.; Karalekas, D.
1994-01-01
Experimental methods were developed, adapted, and applied to the characterization of a metal matrix composite system, namely silicon carbide/aluminum (SCS-2/6061 Al), and its constituents. The silicon carbide fiber was characterized by determining its modulus, strength, and coefficient of thermal expansion. The aluminum matrix was characterized thermomechanically up to 399 C (750 F) at two strain rates. The unidirectional SiC/Al composite was characterized mechanically under longitudinal, transverse, and in-plane shear loading up to 399 C (750 F). Isothermal and non-isothermal creep behavior was also measured. The applicability of a proposed set of multifactor thermoviscoplastic nonlinear constitutive relations and a computer code was investigated. Agreement between predictions and experimental results was shown in a few cases. The elastoplastic thermomechanical behavior of the composite was also described by a number of new analytical models developed or adapted for the material system studied. These models include the rule of mixtures, the composite cylinder model with various thermoelastoplastic analyses, and a model based on average field theory. In most cases satisfactory agreement was demonstrated between analytical predictions and experimental results for stress-strain behavior and thermal deformation behavior at different temperatures. In addition, some models yielded detailed three-dimensional stress distributions in the constituents within the composite.
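The rule of mixtures mentioned above gives quick bounds on the unidirectional moduli; the fiber modulus and volume fraction below are nominal illustrative values for a SiC/Al system, not the paper's measurements.

```python
def rule_of_mixtures(Ef, Em, Vf):
    """Longitudinal (Voigt, iso-strain) and transverse (Reuss, iso-stress)
    moduli of a unidirectional composite from fiber/matrix moduli and the
    fiber volume fraction."""
    Vm = 1.0 - Vf
    E_L = Vf * Ef + Vm * Em            # upper (iso-strain) bound
    E_T = 1.0 / (Vf / Ef + Vm / Em)    # lower (iso-stress) bound
    return E_L, E_T

# Illustrative room-temperature inputs (GPa): a stiff SiC fiber in an Al matrix
E_L, E_T = rule_of_mixtures(Ef=400.0, Em=69.0, Vf=0.40)
print(round(E_L, 1), round(E_T, 1))    # longitudinal and transverse estimates
```

The gap between the two bounds is one reason the paper turns to the composite cylinder model and average field theory for the transverse and shear responses, where the simple iso-stress estimate is least accurate.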
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
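The Recursive Least Squares estimator used for the sub-model parameters has a compact standard form; the regressors and "true" parameters below are synthetic stand-ins for the EMG-derived signals, chosen only to demonstrate convergence.

```python
import numpy as np

def rls(phi, y, n_params, lam=0.99, delta=1000.0):
    """Recursive Least Squares: update the parameter estimate theta from
    regressor/observation pairs, with forgetting factor lam."""
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)                    # large initial covariance
    for phi_k, y_k in zip(phi, y):
        K = P @ phi_k / (lam + phi_k @ P @ phi_k)   # gain vector
        theta = theta + K * (y_k - phi_k @ theta)   # prediction-error correction
        P = (P - np.outer(K, phi_k @ P)) / lam      # covariance update
    return theta

rng = np.random.default_rng(4)
theta_true = np.array([1.5, -0.7])       # sub-model parameters (illustrative)
phi = rng.standard_normal((500, 2))      # regressors, e.g. EMG-derived inputs
y = phi @ theta_true + 0.01 * rng.standard_normal(500)
print(np.round(rls(phi, y, n_params=2), 2))
```

The forgetting factor lets the estimate track slow parameter drift between writers or letters, which is the practical reason a recursive estimator is preferred over a single batch fit here.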
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements, such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. 
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
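The geometric-delay argument can be made concrete with the standard flat-reflector formula for a ground bounce below the antenna; the antenna height, elevation angle, and chip length are illustrative, not values from the dissertation.

```python
import math

def ground_bounce_delay_m(antenna_height_m, elevation_deg):
    """Excess path length of a ground-reflected signal relative to the direct
    path, for a flat horizontal reflector below the antenna: 2*h*sin(elev)."""
    return 2.0 * antenna_height_m * math.sin(math.radians(elevation_deg))

# A 2 m antenna at 30 deg satellite elevation: the excess path is small
# compared with the ~293 m GPS C/A code chip, but large for carrier phase
delay = ground_bounce_delay_m(2.0, 30.0)
chip_fraction = delay / 293.0                 # approximate C/A chip length in meters
print(round(delay, 2), round(chip_fraction, 4))
```

The same expression explains the airborne observations above: a reflector on the aircraft body keeps h, and hence the excess delay, small, while changes in attitude and elevation modulate the delay rapidly, pushing the multipath error to high frequencies.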
Computer model for characterizing, screening, and optimizing electrolyte systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Given that the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed because laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Systems pharmacology - Towards the modeling of network interactions.
Danhof, Meindert
2016-10-30
Mechanism-based pharmacokinetic-pharmacodynamic (PKPD) and disease system (DS) models have been introduced in drug discovery and development research to predict, in a quantitative manner, the effect of drug treatment in vivo in health and disease. This requires consideration of several fundamental properties of biological systems behavior, including hysteresis, non-linearity, variability, interdependency, convergence, resilience, and multi-stationarity. Classical physiology-based PKPD models consider linear transduction pathways, connecting processes on the causal path between drug administration and effect, as the basis of drug action. Depending on the drug and its biological target, such models may contain expressions to characterize (i) the disposition and the target site distribution kinetics of the drug under investigation, (ii) the kinetics of target binding and activation, and (iii) the kinetics of transduction. When connected to physiology-based DS models, PKPD models can characterize the effect on disease progression in a mechanistic manner. These models have been found useful to characterize hysteresis and non-linearity, yet they fail to explain the effects of the other fundamental properties of biological systems behavior. Recently, systems pharmacology has been introduced as a novel approach to predict in vivo drug effects, in which biological networks, rather than single transduction pathways, are considered as the basis of drug action and disease progression. These models contain expressions to characterize the functional interactions within a biological network. Such interactions are relevant when drugs act at multiple targets in the network or when homeostatic feedback mechanisms are operative. As a result, systems pharmacology models are particularly useful to describe complex patterns of drug action (i.e., synergy, oscillatory behavior) and disease progression (i.e., episodic disorders).
In this contribution it is shown how physiology-based PKPD and disease models can be extended to account for internal systems interactions. It is demonstrated how systems pharmacology (SP) models can be used to predict the effects of multi-target interactions and of homeostatic feedback on the pharmacological response. In addition, it is shown how DS models may be used to distinguish symptomatic from disease-modifying effects and to predict the long-term effects on disease progression from short-term biomarker responses. It is concluded that incorporating expressions to describe the interactions in biological networks opens new avenues to understanding the effects of drug treatment on the fundamental aspects of biological systems behavior. Copyright © 2016 The Author. Published by Elsevier B.V. All rights reserved.
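A minimal sketch of the kind of mechanism-based PKPD model the abstracts above describe: one-compartment drug kinetics driving an indirect (turnover) response, integrated with a simple Euler scheme. The model structure is a generic textbook form, and every parameter value is hypothetical, not taken from the paper:

```python
# Illustrative sketch of a minimal mechanism-based PKPD model (not the
# authors' implementation): an IV bolus with first-order elimination
# inhibits the production term of a turnover (indirect response) model.
# All parameter values are hypothetical.

def simulate(dose=100.0, V=10.0, ke=0.1, kin=1.0, kout=0.1,
             imax=0.8, ic50=2.0, dt=0.01, t_end=100.0):
    C = dose / V          # drug concentration after an IV bolus
    R = kin / kout        # response starts at baseline (steady state)
    t = 0.0
    r_min = R
    while t < t_end:
        inhibition = imax * C / (ic50 + C)       # Emax-type drug effect
        R += dt * (kin * (1.0 - inhibition) - kout * R)
        C += dt * (-ke * C)                      # first-order elimination
        r_min = min(r_min, R)
        t += dt
    return R, r_min

R_end, R_min = simulate()
# Hysteresis in miniature: the response nadir lags the concentration peak
# (which is at t = 0), and R returns toward baseline as drug is eliminated.
```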
Failure Time Analysis of Office System Use.
ERIC Educational Resources Information Center
Cooper, Michael D.
1991-01-01
Develops mathematical models to characterize the probability of continued use of an integrated office automation system and tests these models on longitudinal data collected from 210 individuals using the IBM Professional Office System (PROFS) at the University of California at Berkeley. Analyses using survival functions and proportional hazard…
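The survival-function analysis mentioned above rests on estimators such as Kaplan-Meier. Here is a plain-Python sketch with invented usage durations (the PROFS data are not reproduced here); an event of 1 marks discontinued use, 0 marks a censored observation:

```python
# Hedged sketch of a Kaplan-Meier survival-function estimator in plain
# Python. The "months of office-system use" data below are invented.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times; events[i]=1 failure, 0 censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # failures at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving at time t
        if d > 0:
            s *= 1.0 - d / n_at_risk              # KM product-limit step
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

# Months until a user stopped using the system (1) or left the study (0):
times  = [2, 3, 3, 5, 8, 8, 12, 12]
events = [1, 1, 0, 1, 1, 0, 0, 0]
curve = kaplan_meier(times, events)
```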
NASA Technical Reports Server (NTRS)
Lai, Steven H.-Y.
1992-01-01
A variational principle and a finite element discretization technique were used to derive the dynamic equations for a high speed rotating flexible beam-mass system embedded with piezo-electric materials. The dynamic equation thus obtained allows the development of finite element models which accommodate both the original structural element and the piezoelectric element. The solutions of finite element models provide system dynamics needed to design a sensing system. The characterization of gyroscopic effect and damping capacity of smart rotating devices are addressed. Several simulation examples are presented to validate the analytical solution.
Behavioral Modeling and Characterization of Nonlinear Operation in RF and Microwave Systems
2005-01-01
the model further reinforces the intuition gained by employing this modeling technique. 84 Chapter 5 Remote Characterization of RF Devices 5.1...was used to extract the power series coefficients, 21 dBm. This further reinforces the conclusion that the nonlinear coefficients should be extracted...are becoming important. The fit of the odd-ordered model reinforces this hypothesis since the phase component of the fit roughly splits the
NASA Astrophysics Data System (ADS)
Wankhede, Mamta
Functional vasculature is vital for tumor growth, proliferation, and metastasis. Many tumor-specific vascular targeting agents (VTAs) aim to destroy this essential tumor vasculature to induce indirect tumor cell death via oxygen and nutrient deprivation. The tumor angiogenesis-inhibiting anti-angiogenics (AIs) and the established tumor vessel targeting vascular disrupting agents (VDAs) are the two major players in the vascular targeting field. Combinations of VTAs with conventional therapies, or with each other, have been shown to have additive or supra-additive effects on tumor control and treatment. Pathophysiological changes post-VTA treatment, in terms of structural and vessel function changes, are important parameters to characterize treatment efficacy. Despite the abundance of information regarding these parameters acquired using various techniques, there remains a need for quantitative, real-time, and direct observation of these phenomena in live animals. Through this research we aspired to develop a spectral imaging based mouse tumor system for real-time in vivo microvessel structure and functional measurements for VTA characterization. A model tumor system for window chamber studies was identified, and then combinatorial effects of VDA and AI were characterized in the model tumor system. (Full text of this dissertation may be available via the University of Florida Libraries web site. Please check http://www.uflib.ufl.edu/etd.html)
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is thus dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a processing time of 60 non-continuous hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
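The scaling reported above can be checked with simple arithmetic, assuming (as the abstract implies) that execution time grows linearly with the number of simulations:

```python
# Back-of-the-envelope check of the scaling reported in the abstract.
# The counts and hours come from the text; the linear-scaling model is ours.

sims_total = 10_000_000        # simulations required by the inversion
sims_done_single = 100_000     # completed on one 8-core machine
hours_single = 12.0            # ...in this many hours

# Linear extrapolation for the single-machine approach:
hours_full_single = hours_single * sims_total / sims_done_single  # 1200 h

# Observed HTCondor wall time and the resulting speedup factor:
hours_condor = 60.0
speedup = hours_full_single / hours_condor                         # 20x
```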
Analysis of acoustic emission signals and monitoring of machining processes
Govekar; Gradisek; Grabec
2000-03-01
Monitoring of a machining process on the basis of sensor signals requires a selection of informative inputs in order to reliably characterize and model the process. In this article, a system for the selection of informative characteristics from signals of multiple sensors is presented. For signal analysis, methods of spectral analysis and methods of nonlinear time series analysis are used. With the aim of modeling relationships between signal characteristics and the corresponding process state, an adaptive empirical modeler is applied. The application of the system is demonstrated by characterization of different parameters defining the states of a turning machining process, such as chip form, tool wear, and onset of chatter vibration. The results show that, in spite of the complexity of the turning process, the state of the process can be well characterized by just a few proper characteristics extracted from a representative sensor signal. The process characterization can be further improved by joining characteristics from multiple sensors and by the application of chaotic characteristics.
Fitness model for the Italian interbank money market.
De Masi, G; Iori, G; Caldarelli, G
2006-12-01
We use the theory of complex networks in order to quantitatively characterize the formation of communities in a particular financial market. The system is composed of different banks exchanging loans and debts of liquidity on a daily basis. Through topological analysis and by means of a model of network growth, we can determine the formation of different groups of banks characterized by different business strategies. The model, based on Pareto's law, makes no use of growth or preferential attachment, and it correctly reproduces all the various statistical properties of the system. We believe that this network modeling of the market could be an efficient way to evaluate the impact of different policies on the market for liquidity.
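A sketch in the spirit of the fitness model described above: each bank receives a Pareto-distributed fitness, and a link forms with a probability that increases with the product of the endpoint fitnesses, with no growth and no preferential attachment. The specific linking rule and all parameter values are our own illustration, not the authors' calibrated model:

```python
# Illustrative fitness-network sketch (assumed linking rule, not the paper's):
# Pareto(alpha) fitnesses x_i, link probability p_ij = z*x_i*x_j/(1+z*x_i*x_j).
import random

def fitness_network(n, alpha=2.0, z=0.1, seed=42):
    rng = random.Random(seed)
    # Pareto(alpha) fitness via inverse-transform sampling (x >= 1)
    x = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = z * x[i] * x[j] / (1.0 + z * x[i] * x[j])  # always in (0, 1)
            if rng.random() < p:
                edges.add((i, j))
    return x, edges

x, edges = fitness_network(200)
degrees = [0] * 200
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1
# High-fitness banks end up as hubs, mimicking heterogeneous interbank markets.
```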
Multiphase, multicomponent phase behavior prediction
NASA Astrophysics Data System (ADS)
Dadmohammadi, Younas
Accurate prediction of the phase behavior of fluid mixtures in the chemical industry is essential for designing and operating a multitude of processes. Reliable generalized predictions of phase equilibrium properties, such as pressure, temperature, and phase compositions, offer an attractive alternative to costly and time-consuming experimental measurements. The main purpose of this work was to assess the efficacy of recently generalized activity coefficient models based on binary experimental data to (a) predict binary and ternary vapor-liquid equilibrium systems, and (b) characterize liquid-liquid equilibrium systems. These studies were completed using a diverse binary VLE database consisting of 916 binary and 86 ternary systems involving 140 compounds belonging to 31 chemical classes. Specifically, the following tasks were undertaken. First, a comprehensive assessment of the two common approaches (gamma-phi (γ-ϕ) and phi-phi (ϕ-ϕ)) used for determining the phase behavior of vapor-liquid equilibrium systems is presented. Both the representation and predictive capabilities of these two approaches were examined, as delineated from internal and external consistency tests of 916 binary systems. For this purpose, the universal quasi-chemical (UNIQUAC) model and the Peng-Robinson (PR) equation of state (EOS) were used. Second, the efficacy of the recently developed generalized UNIQUAC and nonrandom two-liquid (NRTL) models for predicting multicomponent VLE systems was investigated. Third, the abilities of the recently modified NRTL models (mNRTL2 and mNRTL1) to characterize liquid-liquid equilibria (LLE) phase conditions and attributes, including phase stability, miscibility, and consolute point coordinates, were assessed. The results of this work indicate that the ϕ-ϕ approach represents the binary VLE systems considered within three times the error of the γ-ϕ approach.
A similar trend was observed for the generalized model predictions using quantitative structure-property relationship (QSPR) parameter generalizations. For ternary systems where all three constituent binary systems were available, the NRTL-QSPR, UNIQUAC-QSPR, and UNIFAC-6 models produce comparable accuracy. For systems where at least one constituent binary is missing, the UNIFAC-6 model produces larger errors than the QSPR-generalized models. In general, the LLE characterization results indicate the accuracy of the modified models in reproducing the findings of the original NRTL model.
Karra, Udayarka; Huang, Guoxian; Umaz, Ridvan; Tenaglier, Christopher; Wang, Lei; Li, Baikun
2013-09-01
A novel and robust distributed benthic microbial fuel cell (DBMFC) was developed to address energy supply issues for oceanographic sensor network applications, especially under scouring and bioturbation by aquatic life. A multi-anode/cathode configuration was employed in the DBMFC system for enhanced robustness and stability in the harsh ocean environment. The results showed that the DBMFC system achieved peak power and current densities of 190 mW/m² and 125 mA/m², respectively. Stability characterization tests indicated that the DBMFC with multiple anodes achieved higher power generation than systems with a single anode. A computational model that integrated the physical, electrochemical and biological factors of MFCs was developed to validate the overall performance of the DBMFC system. The model simulation corresponded well with the experimental results, and confirmed the hypothesis that using a multi-anode/cathode MFC configuration results in reliable and robust power generation. Published by Elsevier Ltd.
Dynamical Stability and Long-term Evolution of Rotating Stellar Systems
NASA Astrophysics Data System (ADS)
Varri, Anna L.; Vesperini, E.; McMillan, S. L. W.; Bertin, G.
2011-05-01
We present the first results of an extensive survey of N-body simulations designed to investigate the dynamical stability and the long-term evolution of two new families of self-consistent stellar dynamical models, characterized by the presence of internal rotation. The first family extends the well-known King models to the case of axisymmetric systems flattened by solid-body rotation while the second family is characterized by differential rotation. The equilibrium configurations thus obtained can be described in terms of two dimensionless parameters, which measure the concentration and the amount of rotation, respectively. Slowly rotating configurations are found to be dynamically stable and we followed their long-term evolution, in order to evaluate the interplay between collisional relaxation and angular momentum transport. We also studied the stability of rapidly rotating models, which are characterized by the presence of a toroidal core embedded in an otherwise quasi-spherical configuration. In both cases, a description in terms of the radial and global properties, such as the ratio between the ordered kinetic energy and the gravitational energy of the system, is provided. Because the role of angular momentum in the process of cluster formation is only partly understood, we also undertook a preliminary investigation of the violent relaxation of simple systems initially characterized by approximate solid-body rotation. The properties of the final equilibrium configurations thus obtained are compared with those of the above-described family of differentially rotating models.
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
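The core Allan-variance computation is compact enough to sketch. Below is a minimal non-overlapping estimator (production tools for inertial-sensor characterization typically use the overlapping variant and add confidence bounds); the white-noise check illustrates the expected 1/m decay of the variance with cluster size. The function and test signal are our illustration, not the paper's pipeline:

```python
# Minimal non-overlapping Allan variance sketch for unit-interval samples.
import random

def allan_variance(y, m):
    """Allan variance at cluster size m for rate samples y."""
    k = len(y) // m
    # average each cluster of m consecutive samples
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    # half the mean-squared difference of successive cluster averages
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(k - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# For white noise of variance 1, the Allan variance should fall like 1/m.
rng = random.Random(0)
y = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
av1 = allan_variance(y, 1)      # ~1.0
av100 = allan_variance(y, 100)  # ~0.01
```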
System identification methods for aircraft flight control development and validation
NASA Technical Reports Server (NTRS)
Tischler, Mark B.
1995-01-01
System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. The extracted models allow the characterization of the response of the overall aircraft or component subsystem behavior, such as actuators and on-board signal processing algorithms. This paper discusses the use of frequency-domain system-identification methods for the development and integration of aircraft flight-control systems. The extraction and analysis of models of varying complexity, from nonparametric frequency responses to transfer functions and high-order state-space representations, is illustrated using the Comprehensive Identification from FrEquency Responses (CIFER) system-identification facility. Results are presented for test data from numerous flight and simulation programs at the Ames Research Center, including rotorcraft, fixed-wing aircraft, advanced short takeoff and vertical landing (ASTOVL), vertical/short takeoff and landing (V/STOL), tiltrotor aircraft, and rotor experiments in the wind tunnel. Excellent system characterization and dynamic response prediction are achieved for this wide class of systems. Examples illustrate the role of system-identification technology in providing an integrated flow of dynamic response data around the entire life cycle of aircraft development, from initial specifications through simulation and bench testing, and into flight-test optimization.
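The basic frequency-domain identification step, estimating a frequency response as the ratio of output to input spectra, can be sketched on a synthetic first-order system. The test system, excitation, and single-frequency DFT below are our own illustration, not the CIFER implementation:

```python
# Sketch: estimate H(w0) = Y(w0)/U(w0) from measured input/output records.
# The first-order system y[n] = a*y[n-1] + u[n] and its excitation are ours.
import cmath, math

def dft_at(x, w):
    """Discrete-time Fourier transform of x at angular frequency w (rad/sample)."""
    return sum(v * cmath.exp(-1j * w * n) for n, v in enumerate(x))

a, w0, N = 0.5, 0.3, 4096
u = [math.sin(w0 * n) for n in range(N)]     # sinusoidal excitation at w0
y, prev = [], 0.0
for n in range(N):
    prev = a * prev + u[n]                   # simulate the system
    y.append(prev)

H_est = dft_at(y, w0) / dft_at(u, w0)
H_true = 1.0 / (1.0 - a * cmath.exp(-1j * w0))   # analytic frequency response
# H_est approaches H_true up to spectral leakage and the startup transient.
```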
A Data Model Framework for the Characterization of a Satellite Data Handling Software
NASA Astrophysics Data System (ADS)
Camatto, Gianluigi; Tipaldi, Massimo; Bothmer, Wolfgang; Ferraguto, Massimo; Bruenjes, Bernhard
2014-08-01
This paper describes an approach for modelling the characterization and configuration data yielded when developing a Satellite Data Handling Software (DHSW). The model can then be used as an input for the preparation of the logical and physical representation of the Satellite Reference Database (SRDB) contents and related SW suite, an essential product that not only allows transferring information between the different system stakeholders but also serves to produce part of the DHSW documentation and artefacts. Special attention is given to the shaping of the general Parameter concept, which is shared by a number of different entities within a Space System.
NASA Astrophysics Data System (ADS)
Wang, Baijie; Wang, Xin; Chen, Zhangxin
2013-08-01
Reservoir characterization refers to the process of quantitatively assigning reservoir properties using all available field data. Artificial neural networks (ANN) have recently been introduced to solve reservoir characterization problems dealing with the complex underlying relationships inherent in well log data. Despite the utility of ANNs, the current limitation is that most existing applications simply focus on directly implementing existing ANN models instead of improving/customizing them to fit the specific reservoir characterization tasks at hand. In this paper, we propose a novel intelligent framework that integrates fuzzy ranking (FR) and multilayer perceptron (MLP) neural networks for reservoir characterization. FR can automatically identify a minimum subset of well log data as neural inputs, and the MLP is trained to learn the complex correlations from the selected well log data to a target reservoir property. FR guarantees the selection of the optimal subset of representative data from the overall well log data set for the characterization of a specific reservoir property, and this implicitly improves the modeling and prediction accuracy of the MLP. In addition, a growing number of industrial agencies are implementing geographic information systems (GIS) in field data management, and we have designed the GFAR solution (GIS-based FR ANN Reservoir characterization solution) system, which integrates the proposed framework into a GIS system that provides an efficient characterization solution. Three separate petroleum wells from southwestern Alberta, Canada, were used in the presented case study of reservoir porosity characterization. Our experiments demonstrate that our method can generate reliable results.
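To make the feature-selection idea concrete, here is a toy ranking step in the spirit of FR, substituting a simple Pearson-correlation score for the paper's fuzzy-ranking criterion (a deliberate, labeled simplification); the well-log values are invented:

```python
# Toy stand-in for the FR step: rank well-log curves by how strongly each
# tracks the target property, then keep a minimal subset as MLP inputs.
# The correlation score and sample logs are illustrative, not the paper's.

def rank_features(features, target):
    """Return feature names sorted by |Pearson correlation| with target."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)
    return sorted(features, key=lambda k: -abs(corr(features[k], target)))

# Toy well logs: "gr" is engineered to track porosity, "noise" is not.
porosity = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
logs = {
    "gr":    [52.0, 48.0, 44.0, 40.0, 36.0, 32.0],   # linear in porosity
    "noise": [1.0, -2.0, 0.5, 0.3, -1.1, 2.0],
}
ranked = rank_features(logs, porosity)   # "gr" should rank first
```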
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Technical Reports Server (NTRS)
Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather
2015-01-01
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
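The space-carving idea can be sketched with a toy voxel grid and two orthographic views: a voxel survives only if its projection falls inside the object silhouette in every view. The real system uses many calibrated camera poses at varied azimuth and elevation angles; the axis-aligned projections and spherical test object below are illustrative assumptions:

```python
# Toy space-carving sketch: intersect the back-projections of silhouettes.
# Two axis-aligned orthographic "views" of a sphere stand in for the real
# calibrated multi-camera setup described in the abstract.

def carve(n, silhouettes):
    """silhouettes: list of (project, mask) pairs; mask is an n x n grid."""
    kept = []
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if all(mask[u][v] for project, mask in silhouettes
                       for u, v in [project(x, y, z)]):
                    kept.append((x, y, z))
    return kept

n = 16
c, r = n // 2, n // 3
# The orthographic silhouette of a sphere is a disc in any view:
disc = [[(i - c) ** 2 + (j - c) ** 2 <= r * r for j in range(n)]
        for i in range(n)]
views = [
    (lambda x, y, z: (x, y), disc),   # top view (z collapsed)
    (lambda x, y, z: (x, z), disc),   # side view (y collapsed)
]
hull = carve(n, views)   # visual hull: superset of the true sphere
```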
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Distinguishing the Forest from the Trees: Synthesizing IHRMP Research
Gregory B. Greenwood
1991-01-01
A conceptual model of hardwood rangelands as a multi-output resource system is developed and used to achieve a synthesis of Integrated Hardwood Range Management Program (IHRMP) research. The model requires the definition of state variables which characterize the system at any time, processes that move the system to different states, outputs...
This research makes use of in vitro and in vivo approaches to understand and discriminate the compensatory and toxicological responses of the highly regulated HPT system. Development of an initial systems model will be based on the current understanding of the HPT axis and the co...
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was assessed in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.
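The merged machine/program characterization described above reduces, in its simplest form, to a dot product between per-operation machine timings and program operation counts. The abstract-machine operation names and all numbers below are invented for illustration:

```python
# Sketch of abstract-machine runtime estimation:
#   estimated time = sum over operations of (program count) * (machine time).
# Operation names and timings are hypothetical, not from the reports.

machine = {"fp_add": 4e-9, "fp_mul": 5e-9, "load": 2e-9, "branch": 1e-9}
program = {"fp_add": 2_000_000, "fp_mul": 1_000_000,
           "load": 3_000_000, "branch": 500_000}

estimated_seconds = sum(program[op] * machine[op] for op in program)
# A different machine's timing vector plugs into the same program counts,
# which is what makes arbitrary machine/program combinations estimable.
```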
Liverani, Chiara; La Manna, Federico; Groenewoud, Arwin; Mercatali, Laura; Van Der Pluijm, Gabri; Pieri, Federica; Cavaliere, Davide; De Vita, Alessandro; Spadazzi, Chiara; Miserocchi, Giacomo; Bongiovanni, Alberto; Recine, Federica; Riva, Nada; Amadori, Dino; Tasciotti, Ennio; Snaar-Jagalska, Ewa; Ibrahim, Toni
2017-02-15
Patient-derived specimens are an invaluable resource to investigate tumor biology. However, in vivo studies on primary cultures are often limited by the small amount of material available, while conventional in vitro systems might alter the features and behavior that characterize cancer cells. We present our data obtained on primary dedifferentiated liposarcoma cells cultured in a 3D scaffold-based system and injected into a zebrafish model. Primary cells were characterized in vitro for their morphological features, sensitivity to drugs and biomarker expression, and in vivo for their engraftment and invasiveness abilities. The 3D culture showed a higher enrichment in cancer cells than the standard monolayer culture and a better preservation of liposarcoma-associated markers. We also successfully grafted primary cells into zebrafish, showing their local migratory and invasive abilities. Our work provides proof of concept of the ability of 3D cultures to maintain the original phenotype of ex vivo cells, and highlights the potential of the zebrafish model to provide a versatile in vivo system for studies with limited biological material. Such models could be used in translational research studies for biomolecular analyses, drug screenings and tumor aggressiveness assays. © 2016. Published by The Company of Biologists Ltd.
A Statistics-Based Material Property Analysis to Support TPS Characterization
NASA Technical Reports Server (NTRS)
Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.
2012-01-01
Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in material properties as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision to existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank primary sources of uncertainty from material properties in a flight-relevant environment, show the dependence on spatial orientation and in-depth location of those uncertainty contributors, and quantify how sensitive the expected results are.
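A minimal sketch of the Monte Carlo propagation strategy: sample the uncertain material properties, push each sample through the response model, and summarize the output spread. The toy response function and input distributions below stand in for the high-fidelity ablation model and PICA property data:

```python
# Hedged Monte Carlo uncertainty-propagation sketch. The response function
# and all distribution parameters are illustrative, not the paper's model.
import random, math

def in_depth_temperature(conductivity, heat_capacity, q=50.0):
    """Toy steady response: hotter for low conductivity / low capacity."""
    return 300.0 + q / (conductivity * heat_capacity)

rng = random.Random(1)
samples = []
for _ in range(5000):
    k = rng.lognormvariate(math.log(0.5), 0.2)    # uncertain conductivity
    cp = rng.lognormvariate(math.log(1.5), 0.1)   # uncertain heat capacity
    samples.append(in_depth_temperature(k, cp))

mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
# "spread" is the standard deviation of the in-depth temperature response
# induced purely by the input property uncertainty.
```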
Analysis of the packet formation process in packet-switched networks
NASA Astrophysics Data System (ADS)
Meditch, J. S.
Two new queueing system models for the packet formation process in packet-switched telecommunication networks are developed, and their applications in process stability, performance analysis, and optimization studies are illustrated. The first, an M/M/1 queueing system characterization of the process, is a highly aggregated model which is useful for preliminary studies. The second, a marked extension of an earlier M/G/1 model, permits one to investigate stability, performance characteristics, and design of the packet formation process in terms of the details of processor architecture and of hardware and software implementations, with processor structure and as many parameters as desired treated as variables. The two new models together with the earlier M/G/1 characterization span the spectrum of modeling complexity for the packet formation process from basic to advanced.
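The aggregated M/M/1 characterization mentioned above reduces to the standard steady-state formulas; a sketch with assumed packet rates (not values from the paper):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for an M/M/1 queue, treating the packet-formation
    processor as an aggregated single-server model. lam, mu in packets/s; the
    queue is stable only for lam < mu."""
    if lam >= mu:
        raise ValueError("queue unstable: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # mean number of packets in system
    W = 1.0 / (mu - lam)           # mean time in system (Little's law: L = lam * W)
    Wq = rho / (mu - lam)          # mean waiting time before service
    return {"rho": rho, "L": L, "W": W, "Wq": Wq}

# Illustrative load point: 800 packets/s offered to a 1000 packets/s processor
m = mm1_metrics(lam=800.0, mu=1000.0)
```

The steep growth of L as rho approaches 1 is what makes even this highly aggregated model useful for preliminary stability studies.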
A year 2003 conceptual model for the U.S. telecommunications infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Roger Gary; Reinert, Rhonda K.
2003-12-01
To model the telecommunications infrastructure and its role and robustness to shocks, we must characterize the business and engineering of telecommunications systems in the year 2003 and beyond. By analogy to environmental systems modeling, we seek to develop a 'conceptual model' for telecommunications. Here, the conceptual model is a list of high-level assumptions consistent with the economic and engineering architectures of telecommunications suppliers and customers, both today and in the near future. We describe the present engineering architectures of the most popular service offerings, and describe the supplier markets in some detail. We also develop a characterization of the customer base for telecommunications services and project its likely response to disruptions in service, base-lining such conjectures against observed behaviors during 9/11.
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.
Waterhammer Transient Simulation and Model Anchoring for the Robotic Lunar Lander Propulsion System
NASA Technical Reports Server (NTRS)
Stein, William B.; Trinh, Huu P.; Reynolds, Michael E.; Sharp, David J.
2011-01-01
Waterhammer transients have the potential to adversely impact propulsion system design if not properly addressed. Waterhammer can potentially lead to damage of system plumbing and components. Multi-thruster propulsion systems also develop constructive/destructive wave interference which becomes difficult to predict without detailed models. Therefore, it is important to sufficiently characterize propulsion system waterhammer in order to develop a robust design with minimal impact to other systems. A risk reduction activity was performed at Marshall Space Flight Center to develop a tool for estimating waterhammer through the use of anchored simulation for the Robotic Lunar Lander (RLL) propulsion system design. Testing was performed to simulate waterhammer surges due to rapid valve closure and consisted of twenty-two series of waterhammer tests, resulting in more than 300 valve actuations. These tests were performed using different valve actuation schemes and three system pressures. Data from the valve characterization tests were used to anchor the models, which employed MSC Software EASY5 v.2010 to model transient fluid phenomena using transient forms of mass and energy conservation. The anchoring process was performed by comparing initial model results to experimental data and then iterating the model input to match the simulation results with the experimental data. The models provide good correlation with experimental results, supporting the use of EASY5 as a tool to model fluid transients and provide a baseline for future RLL system modeling. This paper addresses tasks performed during the waterhammer risk reduction activity for the RLL propulsion system. The problem of waterhammer simulation anchoring as applied to the RLL system is discussed with results from the corresponding experimental valve tests. Important factors for waterhammer mitigation are discussed along with potential design impacts to the RLL propulsion system.
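For intuition about the surge magnitudes such tests anchor, the classical Joukowsky relation bounds the pressure rise from an instantaneous valve closure; a sketch with assumed propellant and line properties (not RLL design values):

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky estimate of the peak waterhammer pressure rise (Pa) for an
    instantaneous valve closure: dP = rho * a * dV. A first-cut bound only;
    an anchored transient model like the one in the paper is needed to
    resolve multi-thruster wave interference, which this estimate cannot."""
    return rho * wave_speed * delta_v

def is_rapid_closure(closure_time, pipe_length, wave_speed):
    """Closure counts as 'rapid' (full Joukowsky surge develops) when it
    completes within one round-trip acoustic period 2L/a."""
    return closure_time <= 2.0 * pipe_length / wave_speed

# Assumed: hydrazine-like density, stiff line, 3 m/s flow arrested
dp = joukowsky_surge(rho=870.0, wave_speed=1350.0, delta_v=3.0)
rapid = is_rapid_closure(closure_time=0.01, pipe_length=10.0, wave_speed=1350.0)
```

Even this crude estimate (a few MPa here) shows why rapid valve closures drive plumbing and component design margins.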
Solar Field Optical Characterization at Stillwater Geothermal/Solar Hybrid Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Guangdong; Turchi, Craig
2017-01-27
Concentrating solar power (CSP) can provide additional thermal energy to boost geothermal plant power generation. For a newly constructed solar field at a geothermal power plant site, it is critical to properly characterize its performance so that the prediction of thermal power generation can be derived to develop an optimum operating strategy for a hybrid system. In the past, laboratory characterization of a solar collector has often extended into the solar field performance model and has been used to predict the actual solar field performance, disregarding realistic impacting factors. In this work, an extensive measurement on mirror slope error and receiver position error has been performed in the field by using the optical characterization tool called Distant Observer (DO). Combining a solar reflectance sampling procedure, a newly developed solar characterization program called FirstOPTIC and public software for annual performance modeling called System Advisor Model (SAM), a comprehensive solar field optical characterization has been conducted, thus allowing for an informed prediction of solar field annual performance. The paper illustrates this detailed solar field optical characterization procedure and demonstrates how the results help to quantify an appropriate tracking-correction strategy to improve solar field performance. In particular, it is found that an appropriate tracking-offset algorithm can improve the solar field performance by about 15%. The work here provides a valuable reference for the growing CSP industry.
Flexible Material Systems Testing
NASA Technical Reports Server (NTRS)
Lin, John K.; Shook, Lauren S.; Ware, Joanne S.; Welch, Joseph V.
2010-01-01
An experimental program has been undertaken to better characterize the stress-strain characteristics of flexible material systems to support a NASA ground test program for inflatable decelerator material technology. A goal of the current study is to investigate experimental methods for the characterization of coated woven material stiffness. This type of experimental mechanics data would eventually be used to define the material inputs of fluid-structure interaction simulation models. The test methodologies chosen for this stress-strain characterization are presented along with the experimental results.
Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J
2016-01-01
Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and are inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships and conceptualizing dynamic system behaviors. It employs a collaborative, stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.
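A toy stock-and-flow sketch of the SDM approach the abstract describes (one stock, one feedback loop, Euler integration); the structure and rates are invented for illustration, not a validated WD model:

```python
def simulate_wd(weeks, rtw_base=0.05, support=0.5, dt=1.0):
    """Minimal system-dynamics sketch of a work-disability system: one stock
    (workers currently off work), a constant inflow of new WD cases, and a
    return-to-work outflow shaped by a feedback loop in which larger
    caseloads dilute per-case support and slow returns. Purely illustrative."""
    stock = 100.0                      # workers currently off work (assumed start)
    history = []
    for _ in range(int(weeks / dt)):
        inflow = 5.0                   # new WD cases per week (assumed)
        # feedback loop: effective support per case falls as the caseload grows
        rtw_rate = rtw_base * (1.0 + support / (1.0 + stock / 100.0))
        outflow = rtw_rate * stock     # returns to work per week
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

traj = simulate_wd(weeks=200)
```

Running the loop shows the stock settling toward an equilibrium where inflow balances the feedback-limited outflow, the kind of emergent behavior SDM is meant to surface.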
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are firstly classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
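The AIC/BIC model-selection step described above can be sketched as follows; the log-likelihood values are illustrative stand-ins, not numbers from the paper:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2.0 * k - 2.0 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2.0 * log_lik

def select_components(candidates, n):
    """Pick the number of Weibull mixture components by minimum BIC.
    `candidates` maps component count m to its fitted maximum log-likelihood.
    A mixture of m two-parameter Weibulls has k = 3m - 1 free parameters
    (scale and shape per component, plus m weights summing to one)."""
    scored = {m: bic(ll, 3 * m - 1, n) for m, ll in candidates.items()}
    return min(scored, key=scored.get), scored

# Assumed log-likelihoods for 1-, 2-, and 3-component fits over one year of hourly data
best, scores = select_components({1: -5210.0, 2: -5105.0, 3: -5098.0}, n=8760)
```

BIC's ln(n) penalty rejects the third component here even though it raises the likelihood slightly, which is exactly the trade-off the paper's selection step navigates.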
Measurement methods to build up the digital optical twin
NASA Astrophysics Data System (ADS)
Prochnau, Marcel; Holzbrink, Michael; Wang, Wenxin; Holters, Martin; Stollenwerk, Jochen; Loosen, Peter
2018-02-01
The realization of the Digital Optical Twin (DOT), which is in short the digital representation of the physical state of an optical system, is particularly useful in the context of an automated assembly process of optical systems. During the assembly process, the physical system status of the optical system is continuously measured and compared with the digital model. In case of deviations between the physical state and the digital model, the latter is adapted to match the physical state. To reach the goal described above, in a first step measurement/characterization technologies have to be identified and evaluated concerning their suitability to generate a precise digital twin of an existing optical system. This paper gives an overview of possible characterization methods and, finally, shows first results of evaluated and compared methods (e.g. spot radius, MTF, Zernike polynomials) for creating a DOT. The focus initially lies on the uniqueness of the optimization results as well as on the computational time required for the optimization to reach the characterized system state. Possible sources of error are the measurement accuracy (to characterize the system), execution time of the measurement, time needed to map the digital to the physical world (optimization step), as well as interface possibilities to integrate the measurement tool into an assembly cell. Moreover, it is to be discussed whether the used measurement methods are suitable for a 'seamless' integration into an assembly cell.
2016-09-01
[Garbled front-matter fragment: section heading "II. MODEL DESIGN"; list-of-figures entries "Figure 10. Experimental Optical Layout for the Boston DM Characterization" and "Figure 11. Side View Showing the Curved Surface on a DM". The surviving abstract text notes that the device is produced by iterating deposition, patterning, and etching methods until the desired design is achieved.]
The Outer Solar System Origin Survey full data release orbit catalog and characterization.
NASA Astrophysics Data System (ADS)
Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Gwyn, Stephen; Alexandersen, Mike; Chen, Ying-Tung; Volk, Kathryn; OSSOS Collaboration.
2017-10-01
The Outer Solar System Origin Survey (OSSOS) completed main data acquisition in February 2017. Here we report the release of our full orbit sample, which includes 836 TNOs with high-precision orbit determination and classification. We combine the OSSOS orbit sample with the previously released Canada-France Ecliptic Plane Survey (CFEPS) and a precursor survey to OSSOS by Alexandersen et al. to provide a sample of over 1100 TNOs with high-precision, classified orbits and precisely determined discovery and tracking circumstances (characterization). We are releasing the full sample and characterization to the world community, along with software for conducting 'Survey Simulations', so that this sample of orbits can be used to test models of the formation of our outer solar system against the observed sample. Here I will present the characteristics of the data set and present a parametric model for the structure of the classical Kuiper belt.
MEMS based Doppler velocity measurement system
NASA Astrophysics Data System (ADS)
Shin, Minchul
The design, fabrication, modeling and characterization of a capacitive micromachined ultrasonic transducer (cMUT) based in-air Doppler velocity measurement system using a 1 cm2 planar array are described. Continuous wave operation in a narrowband was chosen in order to maximize range, as it allows for better rejection of broadband noise. The sensor array has a 160-185 kHz resonant frequency to achieve a 10 degree beamwidth. A model for the cMUT and the acoustic system which includes electrical, mechanical, and acoustic components is provided. Furthermore, characterization of the cMUT sensor with a variety of testing procedures is provided. Laser Doppler vibrometry (LDV), beampattern, reflection, and velocity testing characterize the performance of the sensors. The sensor is capable of measuring the velocity of a moving specular reflector with a resolution of 5 cm/s, an update rate of 0.016 second, and a range of 1.5 m.
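The continuous-wave Doppler relation underlying the velocity measurement, with the carrier frequency and in-air sound speed taken from the system description above (the exact operating values are assumptions here):

```python
def doppler_velocity(f_doppler, f_carrier=175_000.0, c_sound=343.0):
    """Radial velocity of a reflector from the measured Doppler shift of a
    continuous-wave ultrasonic signal: v = f_d * c / (2 * f0). The factor of
    two accounts for the two-way (transmit-and-echo) path. The 175 kHz
    carrier sits in the cMUT array's 160-185 kHz resonant band."""
    return f_doppler * c_sound / (2.0 * f_carrier)

def doppler_shift(velocity, f_carrier=175_000.0, c_sound=343.0):
    """Inverse relation: expected shift (Hz) for a given closing velocity."""
    return 2.0 * velocity * f_carrier / c_sound

# A ~1020 Hz shift corresponds to roughly 1 m/s at this carrier
v = doppler_velocity(f_doppler=1020.4)
```

By this relation, the reported 5 cm/s velocity resolution corresponds to resolving a Doppler shift of roughly 51 Hz.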
A method for detecting and characterizing outbreaks of infectious disease from clinical reports.
Cooper, Gregory F; Villamarin, Ricardo; Rich Tsui, Fu-Chiang; Millett, Nicholas; Espino, Jeremy U; Wagner, Michael M
2015-02-01
Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising.
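A drastically simplified sketch of the Bayesian linkage the paper describes, reduced to two epidemiological hypotheses (baseline vs. outbreak) scored against daily influenza-positive counts; the rates and prior are illustrative assumptions, not the paper's model space:

```python
import math

def log_binom_lik(k, n, p):
    """Log-likelihood of k positives out of n ED visits under rate p
    (the binomial coefficient is omitted: it cancels in the posterior ratio)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def outbreak_posterior(counts, p_base=0.02, p_outbreak=0.08, prior=0.01):
    """Posterior probability of the 'outbreak' hypothesis given daily
    (positives, visits) counts, via Bayes' rule over two hypotheses. The
    paper instead searches a space of epidemiological models driven by
    probabilistic NLP-based diagnoses; this keeps only the Bayesian core."""
    ll_out = sum(log_binom_lik(k, n, p_outbreak) for k, n in counts)
    ll_base = sum(log_binom_lik(k, n, p_base) for k, n in counts)
    log_odds = math.log(prior / (1.0 - prior)) + ll_out - ll_base
    return 1.0 / (1.0 + math.exp(-log_odds))

quiet = outbreak_posterior([(4, 200), (3, 180), (5, 210)])    # ~2% positive
surge = outbreak_posterior([(18, 200), (21, 180), (24, 210)]) # ~10% positive
```

Three days of elevated counts are enough to overwhelm the low prior, which is the sense in which such systems can detect outbreaks early.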
Spatio-temporal correlations in models of collective motion ruled by different dynamical laws.
Cavagna, Andrea; Conti, Daniele; Giardina, Irene; Grigera, Tomas S; Melillo, Stefania; Viale, Massimiliano
2016-11-15
Information transfer is an essential factor in determining the robustness of biological systems with distributed control. The most direct way to study the mechanisms ruling information transfer is to experimentally observe the propagation across the system of a signal triggered by some perturbation. However, this method may be inefficient for experiments in the field, as the possibilities to perturb the system are limited and empirical observations must rely on natural events. An alternative approach is to use spatio-temporal correlations to probe the information transfer mechanism directly from the spontaneous fluctuations of the system, without the need to have an actual propagating signal on record. Here we test this method on models of collective behaviour in their deeply ordered phase by using ground truth data provided by numerical simulations in three dimensions. We compare two models characterized by very different dynamical equations and information transfer mechanisms: the classic Vicsek model, describing an overdamped noninertial dynamics, and the inertial spin model, characterized by an underdamped inertial dynamics. By using dynamic finite-size scaling, we show that spatio-temporal correlations are able to distinguish unambiguously the diffusive information transfer mechanism of the Vicsek model from the linear mechanism of the inertial spin model.
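A minimal implementation of the classic Vicsek update referenced above (metric neighborhood, heading average plus angular noise); the parameters are illustrative, and for simplicity neighbor distances ignore the periodic wrap that the positions use:

```python
import math
import random

def vicsek_step(angles, positions, radius, noise, rng, box=1.0, speed=0.03):
    """One update of the 2-D Vicsek model: each particle adopts the mean
    heading of all neighbors within `radius` plus uniform angular noise,
    then streams forward. This is the overdamped, noninertial dynamics the
    abstract contrasts with the inertial spin model."""
    new_angles = []
    for xi, yi in positions:
        sx = sy = 0.0
        for (xj, yj), aj in zip(positions, angles):
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                sx += math.cos(aj)
                sy += math.sin(aj)
        new_angles.append(math.atan2(sy, sx) + rng.uniform(-noise / 2, noise / 2))
    new_positions = [((x + speed * math.cos(a)) % box, (y + speed * math.sin(a)) % box)
                     for (x, y), a in zip(positions, new_angles)]
    return new_angles, new_positions

def polarization(angles):
    """Order parameter: 1 when fully aligned, near 0 when disordered."""
    return math.hypot(sum(math.cos(a) for a in angles),
                      sum(math.sin(a) for a in angles)) / len(angles)

rng = random.Random(1)
angles = [rng.uniform(-math.pi, math.pi) for _ in range(50)]
positions = [(rng.random(), rng.random()) for _ in range(50)]
for _ in range(30):
    angles, positions = vicsek_step(angles, positions, radius=0.5, noise=0.1, rng=rng)
order = polarization(angles)
```

At low noise and a large interaction radius the flock reaches the deeply ordered phase (polarization near 1), which is the regime in which the paper measures spatio-temporal correlations.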
An Update on the Conceptual-Production Systems Model of Apraxia: Evidence from Stroke
ERIC Educational Resources Information Center
Stamenova, Vessela; Black, Sandra E.; Roy, Eric A.
2012-01-01
Limb apraxia is a neurological disorder characterized by an inability to pantomime and/or imitate gestures. It is more commonly observed after left hemisphere damage (LHD), but has also been reported after right hemisphere damage (RHD). The Conceptual-Production Systems model (Roy, 1996) suggests that three systems are involved in the control of…
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
Simulator of Space Communication Networks
NASA Technical Reports Server (NTRS)
Clare, Loren; Jennings, Esther; Gao, Jay; Segui, John; Kwong, Winston
2005-01-01
Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) is a suite of software tools that simulates the behaviors of communication networks to be used in space exploration and predicts the performance of established and emerging space communication protocols and services. MACHETE consists of four general software systems: (1) a system for kinematic modeling of planetary and spacecraft motions; (2) a system for characterizing the engineering impact on the bandwidth and reliability of deep-space and in-situ communication links; (3) a system for generating traffic loads and modeling of protocol behaviors and state machines; and (4) a user-interface system for performance-metric visualization. The kinematic-modeling system makes it possible to characterize space link connectivity effects, including occultations and signal losses arising from dynamic slant-range changes and antenna radiation patterns. The link-engineering system also accounts for antenna radiation patterns and other phenomena, including modulations, data rates, coding, noise, and multipath fading. The protocol system utilizes information from the kinematic-modeling and link-engineering systems to simulate operational scenarios of space missions and evaluate overall network performance. In addition, a Communications Effect Server (CES) interface for MACHETE has been developed to facilitate hybrid simulation of space communication networks with actual flight/ground software/hardware embedded in the overall system.
Development, characterization, and modeling of a tunable filter camera
NASA Astrophysics Data System (ADS)
Sartor, Mark Alan
1999-10-01
This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation for the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance. It includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, which included quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response, and the noise of the camera system, can now be predicted. The innovation that sets this development apart is the fact that this instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection/identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission.
In order to provide background for the design requirements for the TFC development, the mission and principles of operation behind the multi-channel system will be reviewed. Given the combination of the flexibility, simplicity, and sensitivity, the TFC and its multiple-channel extension can play a significant role in the next generation of remote-sensing instruments.
The Impact of the Assimilation of AIRS Radiance Measurements on Short-term Weather Forecasts
NASA Technical Reports Server (NTRS)
McCarty, Will; Jedlovec, Gary; Miller, Timothy L.
2009-01-01
Advanced spaceborne instruments have the ability to improve the horizontal and vertical characterization of temperature and water vapor in the atmosphere through the explicit use of hyperspectral thermal infrared radiance measurements. The incorporation of these measurements into a data assimilation system provides a means to continuously characterize a three-dimensional, instantaneous atmospheric state necessary for the time integration of numerical weather forecasts. Measurements from the National Aeronautics and Space Administration (NASA) Atmospheric Infrared Sounder (AIRS) are incorporated into the gridpoint statistical interpolation (GSI) three-dimensional variational (3D-Var) assimilation system to provide improved initial conditions for use in a mesoscale modeling framework mimicking that of the operational North American Mesoscale (NAM) model. The methodologies for the incorporation of the measurements into the system are presented. Though the measurements have been shown to have a positive impact in global modeling systems, the measurements are further constrained in this system as the model top is physically lower than the global systems and there is no ozone characterization in the background state. For a study period, the measurements are shown to have positive impact on both the analysis state as well as subsequently spawned short-term (0-48 hr) forecasts, particularly in forecasted geopotential height and precipitation fields. At 48 hr, height anomaly correlations showed an improvement in forecast skill of 2.3 hours relative to a system without the AIRS measurements. Similarly, the equitable threat and bias scores of precipitation forecasts of 25 mm (6 hr)-1 were shown to be improved by 8% and 7%, respectively.
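The analysis step of variational assimilation can be illustrated in one variable; this is a cartoon of the 3D-Var update with an identity observation operator, not the GSI system or a radiance operator:

```python
def scalar_3dvar(x_b, sigma_b, y, sigma_o):
    """Minimizer of the scalar 3D-Var cost
        J(x) = (x - x_b)^2 / (2 sigma_b^2) + (y - x)^2 / (2 sigma_o^2),
    where x_b is the background (forecast) state and y a direct observation.
    The analysis is the background plus the gain-weighted innovation, the
    one-variable analogue of what GSI solves for full model states."""
    gain = sigma_b ** 2 / (sigma_b ** 2 + sigma_o ** 2)   # scalar Kalman gain
    x_a = x_b + gain * (y - x_b)                          # analysis state
    var_a = (1.0 - gain) * sigma_b ** 2                   # analysis error variance
    return x_a, var_a

# Assumed example: 285 K background (+/- 2 K) updated by a 288 K observation (+/- 1 K)
x_a, var_a = scalar_3dvar(x_b=285.0, sigma_b=2.0, y=288.0, sigma_o=1.0)
```

The analysis lands between background and observation, weighted toward the more certain of the two, and its error variance is smaller than either input's, which is the mechanism by which adding AIRS radiances can improve the initial conditions.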
Characterizing the Physics of Plant Root Gravitropism: A Systems Modeling Approach
1999-01-01
with its root directly downward, the root and stem undergo a gravitropic response. Statoliths (gravity-sensing organelles) within the root cap respond... The aim of this study is to model the plant root gravitropic response using classical controls and system identification principles. Specific objectives of this
An embryonic chick (Gallus domesticus) whole-organ pancreas culture system was developed for use as an in vitro model to study cholinergic regulation of exocrine pancreatic function. The culture system was examined for characteristic exocrine function and viability by measuring e...
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
Thermal Remote Anemometer Device
NASA Technical Reports Server (NTRS)
Heyman, Joseph S.; Heath, D. Michele; Winfree, William P.; Miller, William E.; Welch, Christopher S.
1988-01-01
Thermal Remote Anemometer Device developed for remote, noncontacting, passive measurement of thermal properties of sample. Model heated locally by scanning laser beam and cooled by wind in tunnel. Thermal image of model analyzed to deduce pattern of airflow around model. For materials applications, system used for evaluation of thin films and determination of thermal diffusivity and adhesive-layer contact. For medical applications, measures perfusion through skin to characterize blood flow and used to determine viabilities of grafts and to characterize tissues.
Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann
Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
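A hybrid rule of the kind described, comparing measured equipment efficiency against a physics-based part-load curve, can be sketched as follows; the curve coefficients and tolerance are illustrative assumptions, not a real plant model:

```python
def expected_cop(load_fraction):
    """Illustrative part-load efficiency curve (quadratic fit); a real
    tool would use the plant's own engineering model."""
    return 3.0 + 4.0 * load_fraction - 3.5 * load_fraction ** 2

def fdd_flag(measured_cop, load_fraction, tolerance=0.15):
    """Physics-based FDD rule: flag a fault when measured efficiency
    falls more than `tolerance` (fractional) below the model value."""
    expected = expected_cop(load_fraction)
    residual = (expected - measured_cop) / expected
    return residual > tolerance

# A healthy reading near the curve, and a degraded one well below it.
ok = fdd_flag(measured_cop=4.3, load_fraction=0.6)
bad = fdd_flag(measured_cop=3.0, load_fraction=0.6)
```

The design choice here mirrors the paper's hybrid framing: the expected value comes from a physics model, while the threshold on the residual is a rule that can be tuned from operational data.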
System Dynamics Modeling for Public Health: Background and Opportunities
Homer, Jack B.; Hirsch, Gary B.
2006-01-01
The systems modeling methodology of system dynamics is well suited to address the dynamic complexity that characterizes many public health issues. The system dynamics approach involves the development of computer simulation models that portray processes of accumulation and feedback and that may be tested systematically to find effective policies for overcoming policy resistance. System dynamics modeling of chronic disease prevention should seek to incorporate all the basic elements of a modern ecological approach, including disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. System dynamics shows promise as a means of modeling multiple interacting diseases and risks, the interaction of delivery systems and diseased populations, and matters of national and state policy. PMID:16449591
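The accumulation-and-feedback structure of a system dynamics model can be sketched with a single stock; the rates below are illustrative, not estimates for any disease:

```python
def simulate_prevalence(incidence_rate, recovery_rate, years, dt=0.25):
    """Minimal stock-and-flow sketch in the spirit of system dynamics:
    one stock (diseased fraction) with an inflow driven by the healthy
    fraction and an outflow driven by recovery, a balancing feedback loop."""
    diseased = 0.0
    for _ in range(int(years / dt)):
        inflow = incidence_rate * (1.0 - diseased)   # healthy -> diseased
        outflow = recovery_rate * diseased           # diseased -> healthy
        diseased += (inflow - outflow) * dt          # Euler accumulation
    return diseased

# The stock settles at incidence / (incidence + recovery) = 0.2 here.
prevalence = simulate_prevalence(incidence_rate=0.05, recovery_rate=0.20, years=80)
```

Policy experiments in this framework amount to changing the rate parameters and observing how the stocks respond, which is how policy resistance is diagnosed.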
NASA Technical Reports Server (NTRS)
1997-01-01
This CP contains the extended abstracts and presentation figures of 36 papers presented at the PPM and Other Propulsion R&T Conference. The focus of the research described in these presentations is on materials and structures technologies that are parts of the various projects within the NASA Aeronautics Propulsion Systems Research and Technology Base Program. These projects include Physics and Process Modeling; Smart, Green Engine; Fast, Quiet Engine; High Temperature Engine Materials Program; and Hybrid Hyperspeed Propulsion. Also presented were research results from the Rotorcraft Systems Program and work supported by the NASA Lewis Director's Discretionary Fund. Authors from NASA Lewis Research Center, industry, and universities conducted research in the following areas: material processing, material characterization, modeling, life, applied life models, design techniques, vibration control, mechanical components, and tribology. Key issues, research accomplishments, and future directions are summarized in this publication.
Open quantum systems, effective Hamiltonians, and device characterization
NASA Astrophysics Data System (ADS)
Duffus, S. N. A.; Dwyer, V. M.; Everitt, M. J.
2017-10-01
High fidelity models, which are able both to support accurate device characterization and to correctly account for environmental effects, are crucial to the engineering of scalable quantum technologies. As it ensures positivity of the density matrix, one preferred model of open systems describes the dynamics with a master equation in Lindblad form. In practice, Lindblad operators are rarely derived from first principles, and often a particular form of annihilator is assumed. This results in dynamical models that miss those additional terms which must generally be added for the master equation to assume the Lindblad form, together with the other concomitant terms that must be assimilated into an effective Hamiltonian to produce the correct free evolution. In first-principles derivations, such additional terms are often canceled (or countered), frequently in a somewhat ad hoc manner, leading to a number of competing models. Whilst the implications of this paper are quite general, to illustrate the point we focus here on an example anharmonic system; specifically, that of a superconducting quantum interference device (SQUID) coupled to an Ohmic bath. The resulting master equation implies that the environment has a significant impact on the system's energy; we discuss the prospect of keeping or canceling this impact and note that, for the SQUID, monitoring the magnetic susceptibility under control of the capacitive coupling strength and the externally applied flux results in experimentally measurable differences between a number of these models. In particular, one should be able to determine whether a squeezing term of the form X̂P̂ + P̂X̂ should be present in the effective Hamiltonian or not. If model generation is not performed correctly, device characterization will be prone to systematic errors.
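The structure of a Lindblad-form master equation, the jump term plus its anticommutator counterpart, can be sketched for a single qubit with an assumed annihilator; this illustrates the general form discussed above, not the SQUID model itself:

```python
import math

# Euler integration of a Lindblad-form master equation for one qubit:
# drho/dt = -i[H, rho] + L rho L+ - (1/2){L+L, rho}, with an assumed
# annihilator L = sqrt(gamma)|0><1| (amplitude damping).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def add(a, b, s=1.0):
    """Elementwise a + s*b for 2x2 matrices."""
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def lindblad_step(rho, ham, l_op, dt):
    l_dag = dagger(l_op)
    comm = add(matmul(ham, rho), matmul(rho, ham), -1.0)   # [H, rho]
    jump = matmul(matmul(l_op, rho), l_dag)                # L rho L+
    anti = add(matmul(matmul(l_dag, l_op), rho),
               matmul(rho, matmul(l_dag, l_op)))           # {L+L, rho}
    drho = add(add(jump, anti, -0.5), comm, -1j)
    return add(rho, drho, dt)

gamma, dt = 0.5, 0.001
ham = [[0.0, 0.0], [0.0, 1.0]]                # qubit energy splitting
l_op = [[0.0, math.sqrt(gamma)], [0.0, 0.0]]  # assumed annihilator
rho = [[0.0, 0.0], [0.0, 1.0]]                # start in the excited state
for _ in range(2000):                         # evolve to t = 2
    rho = lindblad_step(rho, ham, l_op, dt)
```

The excited-state population decays as exp(-gamma t) while the trace of rho stays 1, the normalization- and positivity-preserving behavior that motivates the Lindblad form.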
Primary human hepatocyte cultures are useful in vitro model systems of human liver because when cultured under appropriate conditions the hepatocytes retain liver-like functionality such as metabolism, transport, and cell signaling. This model system was used to characterize the ...
Predicting coexistence of plants subject to a tolerance-competition trade-off.
Haegeman, Bart; Sari, Tewfik; Etienne, Rampal S
2014-06-01
Ecological trade-offs between species are often invoked to explain species coexistence in ecological communities. However, few mathematical models have been proposed for which coexistence conditions can be characterized explicitly in terms of a trade-off. Here we present a model of a plant community which allows such a characterization. In the model, plant species compete for sites, where each site has a fixed stress condition. Species differ both in stress tolerance and competitive ability. Stress tolerance is quantified as the fraction of sites with stress conditions low enough to allow establishment. Competitive ability is quantified as the propensity to win the competition for empty sites. We derive the deterministic, discrete-time dynamical system for the species abundances. We prove the conditions under which plant species can coexist in a stable equilibrium. We show that the coexistence conditions can be characterized graphically, clearly illustrating the trade-off between stress tolerance and competitive ability. We compare our model with a recently proposed, continuous-time dynamical system for a tolerance-fecundity trade-off in plant communities, and we show that that model is a special case of the continuous-time version of our model.
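The flavor of such a tolerance-competition trade-off can be sketched with a toy discrete-time system; this is an illustration in the spirit of the model, with assumed parameters, not the paper's exact equations:

```python
def simulate(generations=3000, mortality=0.1):
    """Toy two-species tolerance-competition trade-off. Species A
    tolerates all sites; species B establishes only on the low-stress
    fraction but wins any contested empty site."""
    low_stress = 0.6                 # fraction of sites habitable by both
    a, b = 0.2, 0.2                  # occupied fractions of all sites
    for _ in range(generations):
        empty = 1.0 - a - b
        contested = empty * low_stress
        a_only = empty * (1.0 - low_stress)
        b_gain = contested * min(1.0, 2.0 * b)          # B recruits first
        a_gain = (contested - b_gain) * min(1.0, 2.0 * a) * 0.3 \
                 + a_only * min(1.0, 2.0 * a)
        a = a * (1.0 - mortality) + a_gain
        b = b * (1.0 - mortality) + b_gain
    return a, b

a, b = simulate()
```

The tolerant species persists on the high-stress sites only it can colonize while the better competitor dominates the rest, so both abundances settle at a positive equilibrium: coexistence through the trade-off.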
Wood, Graham
2014-10-01
The inherence heuristic is characterized as part of an instantiation of a more general model that describes the interaction between undeveloped intuitions, produced by System 1 heuristics, and developed beliefs, constructed by System 2 reasoning. The general model is described and illustrated by examining another instantiation of the process that constructs belief in objective moral value.
Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
DePaolo, D. J.; Orr, F. M.; Benson, S. M.
2007-06-01
This report identifies research areas in the geosciences needed for improved energy systems, such as the behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems.
2015-09-30
changes in near-shore water columns and support companion laser imaging system tests. The physical, biological and optical oceanographic data...developed under this project will be used as input to optical and environmental models to assess the performance characteristics of laser imaging systems...OBJECTIVES We proposed to characterize the physical, biological and optical fields present during deployments of the Streak Tube Imaging Lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernest A. Mancini
The University of Alabama in cooperation with Texas A&M University, McGill University, Longleaf Energy Group, Strago Petroleum Corporation, and Paramount Petroleum Company are undertaking an integrated, interdisciplinary geoscientific and engineering research project. The project is designed to characterize and model reservoir architecture, pore systems and rock-fluid interactions at the pore to field scale in Upper Jurassic Smackover reef and carbonate shoal reservoirs associated with varying degrees of relief on pre-Mesozoic basement paleohighs in the northeastern Gulf of Mexico. The project effort includes the prediction of fluid flow in carbonate reservoirs through reservoir simulation modeling that utilizes geologic reservoir characterization and modeling and the prediction of carbonate reservoir architecture, heterogeneity and quality through seismic imaging. The primary objective of the project is to increase the profitability, producibility and efficiency of recovery of oil from existing and undiscovered Upper Jurassic fields characterized by reef and carbonate shoals associated with pre-Mesozoic basement paleohighs. The principal research effort for Year 3 of the project has been reservoir characterization, 3-D modeling, testing of the geologic-engineering model, and technology transfer. This effort has included six tasks: (1) the study of seismic attributes, (2) petrophysical characterization, (3) data integration, (4) the building of the geologic-engineering model, (5) the testing of the geologic-engineering model and (6) technology transfer. This work was scheduled for completion in Year 3. Progress on the project is as follows: geoscientific reservoir characterization is completed. The architecture, porosity types and heterogeneity of the reef and shoal reservoirs at Appleton and Vocation Fields have been characterized using geological and geophysical data. The study of rock-fluid interactions has been completed.
Observations regarding the diagenetic processes influencing pore system development and heterogeneity in these reef and shoal reservoirs have been made. Petrophysical and engineering property characterization has been completed. Porosity and permeability data at Appleton and Vocation Fields have been analyzed, and well performance analysis has been conducted. Data integration is up to date, in that the geological, geophysical, petrophysical and engineering data collected to date for Appleton and Vocation Fields have been compiled into a fieldwide digital database. 3-D geologic modeling of the structures and reservoirs at Appleton and Vocation Fields has been completed. The models represent an integration of geological, petrophysical and seismic data. 3-D reservoir simulation of the reservoirs at Appleton and Vocation Fields has been completed. The 3-D geologic models served as the framework for the simulations. The geologic-engineering models of the Appleton and Vocation Field reservoirs have been developed. These models are being tested. The geophysical interpretation for the paleotopographic feature being tested has been made, and the study of the data resulting from drilling of a well on this paleohigh is in progress. Numerous presentations on reservoir characterization and modeling at Appleton and Vocation Fields have been made at professional meetings and conferences, and a short course on microbial reservoir characterization and modeling based on these fields has been prepared.
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
NASA Technical Reports Server (NTRS)
Nance, Donald K.; Liever, Peter A.
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing
NASA Technical Reports Server (NTRS)
Nance, Donald; Liever, Peter; Nielsen, Tanner
2015-01-01
The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.
Urbain, Jay
2015-12-01
We present the design, and analyze the performance, of a multi-stage natural language processing system employing named entity recognition, Bayesian statistics, and rule logic to identify and characterize heart disease risk factor events in diabetic patients over time. The system was originally developed for the 2014 i2b2 Challenges in Natural Language in Clinical Data. The system's strengths included a high level of accuracy for identifying named entities associated with heart disease risk factor events. The system's primary weakness was due to inaccuracies when characterizing the attributes of some events, for example, determining the relative time of an event with respect to the record date, whether an event is attributable to the patient's history or the patient's family history, and differentiating between current and prior smoking status. We believe these inaccuracies were due in large part to the lack of an effective approach for integrating context into our event detection model. To address these inaccuracies, we explore the addition of a distributional semantic model for characterizing contextual evidence of heart disease risk factor events. Using this semantic model, we raise our initial 2014 i2b2 Challenges in Natural Language in Clinical Data F1 score of 0.838 to 0.890 and increase precision by 10.3% without use of any lexicons that might bias our results. Copyright © 2015 Elsevier Inc. All rights reserved.
Li, Shan; Lin, Ruokuang; Bian, Chunhua; Ma, Qianli D. Y.; Ivanov, Plamen Ch.
2016-01-01
Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. The human language is another example of a complex system of words organization. Studies on written texts have shown that scaling laws characterize the occurrence frequency of words, words rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on the western linguistic systems, and the laws that govern the lexical organization, structure and dynamics of the Chinese language remain not well understood. Here we study a database of Chinese and English language books. We report that three distinct scaling laws characterize words organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different words organization and dynamics of words in the process of text growth. We propose a stochastic feedback model of words organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English language. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into the words organization and growth dynamics in the Chinese and English language. PMID:28006026
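The growth of distinct words with text length (a Heaps-type scaling law) can be estimated from a synthetic word stream; a sketch under the assumption of a Zipf rank-frequency law, with illustrative parameters rather than fitted corpus values:

```python
import math
import random

def heaps_exponent(text_length=20000, vocab=5000, zipf_s=1.2, seed=7):
    """Draw a word stream from a Zipf rank-frequency law and estimate
    the Heaps exponent beta in N_distinct ~ L^beta by a log-log slope
    fit. Parameters are illustrative, not fitted to any corpus."""
    random.seed(seed)
    ranks = list(range(1, vocab + 1))
    weights = [r ** -zipf_s for r in ranks]
    stream = random.choices(ranks, weights=weights, k=text_length)
    seen, xs, ys = set(), [], []
    for i, word in enumerate(stream, 1):
        seen.add(word)
        if i % 100 == 0:                      # sample the growth curve
            xs.append(math.log(i))
            ys.append(math.log(len(seen)))
    # Ordinary least-squares slope of log N_distinct against log L.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

beta = heaps_exponent()
```

A sublinear exponent (beta < 1) is the signature of vocabulary growth slowing as text length increases; the cross-language differences reported above correspond to different exponents and crossover regimes.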
An overview of the model integration process: From pre ...
Integration of models requires linking models that may have been developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process and presenting better strategies for building integrated modeling systems. We identified five phases that characterize the integration process: pre-integration assessment, preparation of models for integration, orchestration of models during simulation, data interoperability, and testing. Commonly, there is little reuse of existing frameworks beyond the development teams and not much sharing of science components across frameworks. We believe this must change to enable researchers and assessors to form complex workflows that leverage the current environmental science available. In this paper, we characterize the model integration process and compare the integration practices of different groups. We highlight key strategies, features, standards, and practices that can be employed by developers to increase reuse and interoperability of science software components and systems. The paper provides a review of the literature regarding techniques and methods employed by various modeling system developers to facilitate science software interoperability. The intent of the paper is to illustrate the wide variation in methods and the limiting effect the variation has on inter-framework reuse and interoperability. A series of recommendation
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
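The locally well-defined notion of sensitivity mentioned above, the partial derivative at a nominal point, can be sketched with central finite differences; the response surface is an illustrative example, not an Earth-system model:

```python
def local_sensitivities(model, params, rel_step=1e-6):
    """Local sensitivity of a scalar response to each parameter,
    estimated by central finite differences around a nominal point,
    the only setting where "sensitivity" has a unique definition."""
    base = list(params)
    sens = []
    for i, p in enumerate(base):
        h = rel_step * (abs(p) or 1.0)
        hi = base[:]; hi[i] = p + h
        lo = base[:]; lo[i] = p - h
        sens.append((model(hi) - model(lo)) / (2.0 * h))
    return sens

# Illustrative response surface with an interaction term.
f = lambda x: x[0] ** 2 + 3.0 * x[1] + x[0] * x[1]
s = local_sensitivities(f, [2.0, 1.0])   # analytic: [2x + y, 3 + x] = [5, 5]
```

The interaction term x[0]*x[1] is exactly what makes the local picture incomplete: the derivatives change across the problem space, which is why the "global" generalizations discussed above can disagree.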
Methods and systems for detecting abnormal digital traffic
Goranson, Craig A [Kennewick, WA; Burnette, John R [Kennewick, WA
2011-03-22
Aspects of the present invention encompass methods and systems for detecting abnormal digital traffic by assigning characterizations of network behaviors according to knowledge nodes and calculating a confidence value based on the characterizations from at least one knowledge node and on weighting factors associated with the knowledge nodes. The knowledge nodes include a characterization model based on prior network information. At least one of the knowledge nodes should not be based on fixed thresholds or signatures. The confidence value includes a quantification of the degree of confidence that the network behaviors constitute abnormal network traffic.
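The weighted aggregation the patent describes can be sketched as follows; the node names, scores, and weighting factors are illustrative assumptions, not values from the invention:

```python
def confidence(characterizations, weights):
    """Combine per-node characterizations of network behavior (each a
    score in [0, 1]) into a single confidence value, weighting each
    knowledge node by its assigned factor."""
    total = sum(weights.values())
    return sum(characterizations[node] * w for node, w in weights.items()) / total

# Illustrative knowledge-node outputs and weights.
scores = {"flow_volume": 0.9, "port_profile": 0.4, "timing": 0.7}
node_weights = {"flow_volume": 2.0, "port_profile": 1.0, "timing": 1.0}
c = confidence(scores, node_weights)   # 0 = normal ... 1 = abnormal
```

Because at least one node is a learned characterization model rather than a fixed threshold or signature, the combined score can flag traffic that no single rule would catch.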
Hydrogeologic characterization of an arid zone Radioactive Waste Management Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginanni, J.M.; O`Neill, L.J.; Hammermeister, D.P.
1994-06-01
An in-depth subsurface site characterization and monitoring program for the soil water migration pathway has been planned, implemented, and completed to satisfy data requirements for a waiver from groundwater monitoring, for an exemption from liner leachate collection systems, and for different regulatory driven performance assessments. A traditional scientific approach has been taken to focus characterization and monitoring efforts. This involved developing a conceptual model of the hydrogeologic system and defining and testing hypotheses about this model. Specific hypotheses tested included: that the system was hydrologically heterogeneous and anisotropic, and that recharge was very low or negligible. Mineralogical, physical, and hydrologic data collected to test hypotheses have shown the hydrologic system to be remarkably homogeneous and isotropic rather than heterogeneous and anisotropic. Both hydrodynamic and environmental tracer approaches for estimating recharge have led to the conclusion that recharge from the Area 5 RWMS is not occurring in the upper region of the vadose zone, and that recharge at depth is extremely small or negligible. This demonstration of "no migration of hazardous constituents to the water table" satisfies a key requirement for both the groundwater monitoring waiver and the exemption from liner leachate collection systems. Data obtained from testing hypotheses concerning the soil water migration pathway have been used to refine the conceptual model of the hydrogeologic system of the site. These data suggest that the soil gas and atmospheric air pathways may be more important for transporting contaminants to the accessible environment than the soil water pathway. New hypotheses have been developed about these pathways, and characterization and monitoring activities have been designed to collect data to test them.
Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam
2015-01-01
The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernest A. Mancini
The University of Alabama in cooperation with Texas A&M University, McGill University, Longleaf Energy Group, Strago Petroleum Corporation, and Paramount Petroleum Company are undertaking an integrated, interdisciplinary geoscientific and engineering research project. The project is designed to characterize and model reservoir architecture, pore systems and rock-fluid interactions at the pore to field scale in Upper Jurassic Smackover reef and carbonate shoal reservoirs associated with varying degrees of relief on pre-Mesozoic basement paleohighs in the northeastern Gulf of Mexico. The project effort includes the prediction of fluid flow in carbonate reservoirs through reservoir simulation modeling which utilizes geologic reservoir characterization and modeling and the prediction of carbonate reservoir architecture, heterogeneity and quality through seismic imaging. The primary objective of the project is to increase the profitability, producibility and efficiency of recovery of oil from existing and undiscovered Upper Jurassic fields characterized by reef and carbonate shoals associated with pre-Mesozoic basement paleohighs. The principal research effort for Year 2 of the project has been reservoir characterization, 3-D modeling and technology transfer. This effort has included six tasks: (1) the study of rock-fluid interactions, (2) petrophysical and engineering characterization, (3) data integration, (4) 3-D geologic modeling, (5) 3-D reservoir simulation and (6) technology transfer. This work was scheduled for completion in Year 2. Overall, the project work is on schedule. Geoscientific reservoir characterization is essentially completed. The architecture, porosity types and heterogeneity of the reef and shoal reservoirs at Appleton and Vocation Fields have been characterized using geological and geophysical data. The study of rock-fluid interactions is near completion.
Observations regarding the diagenetic processes influencing pore system development and heterogeneity in these reef and shoal reservoirs have been made. Petrophysical and engineering property characterization has been essentially completed. Porosity and permeability data at Appleton and Vocation Fields have been analyzed, and well performance analysis has been conducted. Data integration is up to date, in that the geological, geophysical, petrophysical and engineering data collected to date for Appleton and Vocation Fields have been compiled into a fieldwide digital database. 3-D geologic modeling of the structures and reservoirs at Appleton and Vocation Fields has been completed. The model represents an integration of geological, petrophysical and seismic data. 3-D reservoir simulation of the reservoirs at Appleton and Vocation Fields has been completed. The 3-D geologic model served as the framework for the simulations. A technology workshop on reservoir characterization and modeling at Appleton and Vocation Fields was conducted to transfer the results of the project to the petroleum industry.
Measurement-based reliability prediction methodology. M.S. Thesis
NASA Technical Reports Server (NTRS)
Linn, Linda Shen
1991-01-01
In the past, analytical and measurement-based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. That issue is addressed here. A combined statistical/analytical approach that uses measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.
Logic Modeling in Quantitative Systems Pharmacology
Traynard, Pauline; Tobalina, Luis; Eduati, Federica; Calzone, Laurence
2017-01-01
Here we present logic modeling as an approach to understand deregulation of signal transduction in disease and to characterize a drug's mode of action. We discuss how to build a logic model from the literature and experimental data and how to analyze the resulting model to obtain insights of relevance for systems pharmacology. Our workflow uses the free tools OmniPath (network reconstruction from the literature), CellNOpt (model fit to experimental data), MaBoSS (model analysis), and Cytoscape (visualization). PMID:28681552
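The logic-modeling idea above can be illustrated with a minimal synchronous Boolean-network update (a generic Python sketch; the node names and rules are hypothetical, and the paper's actual workflow uses OmniPath/CellNOpt rather than hand-written rules):

```python
# Minimal sketch of a logic (Boolean) model of signal transduction.
# Node names and rules are illustrative assumptions, not from the paper.
def step(state, rules):
    """Synchronously update every node from its logic rule."""
    return {node: rule(state) for node, rule in rules.items()}

rules = {
    "receptor":  lambda s: s["receptor"],                    # input, held fixed
    "kinase":    lambda s: s["receptor"],                    # activated by receptor
    "tf":        lambda s: s["kinase"] and not s["inhibitor"],
    "inhibitor": lambda s: s["inhibitor"],                   # e.g. a drug target
}
state = {"receptor": True, "kinase": False, "tf": False, "inhibitor": False}
for _ in range(3):                    # iterate to a fixed point
    state = step(state, rules)
print(state["tf"])                    # -> True: signal propagates to the TF
```

Setting "inhibitor" to True in the initial state would keep "tf" off, which is the kind of mode-of-action question such models are analyzed for.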
Modeling and characterization of partially inserted electrical connector faults
NASA Astrophysics Data System (ADS)
Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.
2016-03-01
Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.
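As a toy illustration of recovering a length parameter from resonance shifts, the sketch below fits a single effective length in an idealized transmission-line model f_n = n·v/(2L) by least squares (this simplified one-parameter model and all values are assumptions for illustration; the paper's two-level nonlinear least-squares model is richer):

```python
# Hedged sketch: estimate an effective transmission-line length from
# measured resonance frequencies, echoing the idea that insertion depth
# shifts resonances. Model f_n = n*v/(2L) is an illustrative assumption.
def fit_length(freqs_hz, v=2.0e8):
    """Closed-form least-squares fit of L in f_n = n*v/(2L)."""
    ns = range(1, len(freqs_hz) + 1)
    # minimizing sum (f_n - n*x)^2 over x = v/(2L) gives x = sum(n f_n)/sum(n^2)
    x = sum(n * f for n, f in zip(ns, freqs_hz)) / sum(n * n for n in ns)
    return v / (2.0 * x)

true_L = 0.25                                   # 25 cm, illustrative
f = [n * 2.0e8 / (2 * true_L) for n in range(1, 4)]
print(fit_length(f))                            # recovers 0.25 from clean data
```

A partially inserted contact would shift the f_n downward or upward, and refitting would track the change in L.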
2011-10-25
range, neither the D-B nor the IPL model could be used to characterize the size and shape of all PANI-0.5-CSA (polyaniline camphor sulfonic acid doped polymer)/polyimide blend systems. At 1 and 2
Characterization of natural ventilation in wastewater collection systems.
Ward, Matthew; Corsi, Richard; Morton, Robert; Knapp, Tom; Apgar, Dirk; Quigley, Chris; Easter, Chris; Witherspoon, Jay; Pramanik, Amit; Parker, Wayne
2011-03-01
The purpose of the study was to characterize natural ventilation in full-scale gravity collection system components while measuring other parameters related to ventilation. Experiments were completed at four different locations in the wastewater collection systems of Los Angeles County Sanitation Districts, Los Angeles, California, and the King County Wastewater Treatment District, Seattle, Washington. The subject components were concrete gravity pipes ranging in diameter from 0.8 to 2.4 m (33 to 96 in.). Air velocity was measured in each pipe using a carbon-monoxide pulse tracer method. Air velocity was measured entering or exiting the components at vents using a standpipe and hotwire anemometer arrangement. Ambient wind speed, temperature, and relative humidity; headspace temperature and relative humidity; and wastewater flow and temperature were measured. The field experiments resulted in a large database of measured ventilation and related parameters characterizing ventilation in full-scale gravity sewers. Measured ventilation rates ranged from 23 to 840 L/s. The experimental data was used to evaluate existing ventilation models. Three models that were based upon empirical extrapolation, computational fluid dynamics, and thermodynamics, respectively, were evaluated based on predictive accuracy compared to the measured data. Strengths and weaknesses in each model were found and these observations were used to propose a concept for an improved ventilation model.
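The model-evaluation step described above, scoring each candidate ventilation model by predictive accuracy against the measured data, can be sketched as a simple RMSE comparison (all numbers below are made up for illustration; the study's actual measurements span 23 to 840 L/s):

```python
# Sketch of ranking candidate ventilation models by predictive accuracy.
# Measured and predicted airflow values (L/s) are illustrative only.
import math

def rmse(predicted, measured):
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))

measured = [23.0, 110.0, 450.0, 840.0]
models = {
    "empirical":     [30.0, 100.0, 500.0, 800.0],
    "cfd":           [20.0, 130.0, 400.0, 900.0],
    "thermodynamic": [25.0, 105.0, 460.0, 830.0],
}
scores = {name: rmse(pred, measured) for name, pred in models.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))   # lowest-RMSE model on this toy data
```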
Volume-based characterization of postocclusion surge.
Zacharias, Jaime; Zacharias, Sergio
2005-10-01
To propose an alternative method to characterize postocclusion surge using a collapsible artificial anterior chamber to replace the currently used rigid anterior chamber model. Fundación Oftalmológica Los Andes, Santiago, Chile. The distal end of a phacoemulsification handpiece was placed inside a compliant artificial anterior chamber. Digital recordings of chamber pressure, chamber volume, inflow, and outflow were performed during occlusion break of the phacoemulsification tip. The occlusion break profiles of 2 different consoles were compared. Occlusion break while using a rigid anterior chamber model produced a simultaneous increase of chamber inflow and outflow. In the rigid chamber model, pressure decreased sharply, reaching negative values. Alternatively, with the collapsible chamber model, a delay was observed in the inflow that occurs to compensate for the outflow surge. Also, the chamber pressure drop was smaller in magnitude, never undershooting below atmospheric pressure into negative values. Using 500 mm Hg as the vacuum limit, the Infiniti System (Alcon) performed better than the Legacy (Alcon), showing an 18% reduction in peak volume variation. The collapsible anterior chamber model provides a more realistic representation of the postocclusion surge events that occur in the real eye during cataract surgery. Peak volume fluctuation (mL), half volume recovery time (s), and volume fluctuation integral value (mL x s) are proposed as realistic indicators to characterize postocclusion surge performance. These indicators show that the Infiniti System has better postocclusion surge behavior than the Legacy System.
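The three volume-based indicators named in the abstract can be computed from a sampled chamber-volume trace roughly as below (a sketch under assumptions: a uniformly sampled synthetic trace with made-up values, and plain rectangle-rule integration):

```python
# Sketch of the three proposed surge indicators from a volume trace:
# peak volume fluctuation (mL), half volume recovery time (s), and
# volume fluctuation integral (mL*s). Trace values are illustrative.
def surge_indicators(t, v, v0):
    dev = [v0 - x for x in v]              # volume deficit vs. baseline v0
    peak = max(dev)
    i_peak = dev.index(peak)
    # time after the peak until the deficit falls back to half its peak
    i_half = next(i for i in range(i_peak, len(dev)) if dev[i] <= peak / 2)
    half_rec = t[i_half] - t[i_peak]
    dt = t[1] - t[0]                       # uniform sampling assumed
    integral = sum(d * dt for d in dev)    # rectangle rule
    return peak, half_rec, integral

t = [i * 0.1 for i in range(7)]            # s
v = [1.0, 0.8, 0.6, 0.7, 0.8, 0.9, 1.0]    # mL: dip at occlusion break, then recovery
peak, half_rec, integral = surge_indicators(t, v, 1.0)
print(round(peak, 1), round(half_rec, 1))  # 0.4 mL peak deficit, 0.2 s half recovery
```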
Dietary Exposure Potential Model
Existing food consumption and contaminant residue databases, typically products of nutrition and regulatory monitoring, contain useful information to characterize dietary intake of environmental chemicals. A PC-based model with resident database system, termed the Die...
NELasso: Group-Sparse Modeling for Characterizing Relations Among Named Entities in News Articles.
Tariq, Amara; Karim, Asim; Foroosh, Hassan
2017-10-01
Named entities such as people, locations, and organizations play a vital role in characterizing online content. They often reflect information of interest and are frequently used in search queries. Although named entities can be detected reliably from textual content, extracting relations among them is more challenging, yet useful in various applications (e.g., news recommender systems). In this paper, we present a novel model and system for learning semantic relations among named entities from collections of news articles. We model each named entity occurrence with sparse structured logistic regression, and consider the words (predictors) to be grouped based on background semantics. This sparse group LASSO approach forces the weights of word groups that do not influence the prediction towards zero. The resulting sparse structure is utilized for defining the type and strength of relations. Our unsupervised system yields a network of named entities where each relation is typed, quantified, and characterized in context. These relations are the key to understanding news material over time and customizing newsfeeds for readers. Extensive evaluation of our system on articles from TIME magazine and BBC News shows that the learned relations correlate with static semantic relatedness measures like WLM, and capture the evolving relationships among named entities over time.
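The group-sparsity mechanism the abstract relies on can be sketched via the block soft-threshold (proximal) step of group lasso, which zeroes whole word groups at once; surviving groups then define relation type and strength (group names and weights below are hypothetical toy values, not from the paper):

```python
# Sketch of the group-lasso block soft-threshold: a word group whose
# weight-vector norm is below lambda is eliminated entirely; survivors
# are shrunk and their norms read off as relation strengths. Toy data.
import math

def group_soft_threshold(w, lam):
    """Shrink a weight group toward zero; kill it if its norm <= lam."""
    norm = math.sqrt(sum(x * x for x in w))
    if norm <= lam:
        return [0.0] * len(w)
    return [(1.0 - lam / norm) * x for x in w]

groups = {"politics": [0.9, 0.4], "sports": [0.05, 0.02], "finance": [0.6, 0.1]}
lam = 0.1
active = {g: group_soft_threshold(w, lam) for g, w in groups.items()}
strength = {g: math.sqrt(sum(x * x for x in w)) for g, w in active.items()}
print({g: round(s, 3) for g, s in strength.items()})
# the weak "sports" group is zeroed out; the others define typed relations
```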
The Department of the Navy Systems Engineering Career Competency Model (SECCM)
2015-05-13
Respond 71%, Value 18%, Organize 3%, Characterize 4% (Affective Domain). Total KSAs: 869. ENG Career Field Competency Model ... 1.0 Mission Level ... The Department of the Navy Systems Engineering Career Competency Model (SECCM), 2015 Acquisition Symposium, Naval Postgraduate School, Monterey ...
NASA Technical Reports Server (NTRS)
Matolak, David W.
2007-01-01
In this project final report, entitled "Wireless Channel Characterization in the 5 GHz Microwave Landing System Extension Band for Airport Surface Areas," we provide a detailed description and model representation for the wireless channel in the airport surface environment in this band. In this executive summary, we review report contents, describe the achieved objectives and major findings, and highlight significant conclusions and recommendations.
Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Patterson-Hine, Ann
2003-01-01
Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.
2011-11-01
assessment to quality of localization/characterization estimates. This protocol includes four critical components: (1) a procedure to identify the ... critical factors impacting SHM system performance; (2) a multistage or hierarchical approach to SHM system validation; (3) a model-assisted evaluation ... Lindgren, E. A., Buynak, C. F., Steffes, G., Derriso, M., "Model-assisted Probabilistic Reliability Assessment for Structural Health Monitoring"
NASA Astrophysics Data System (ADS)
Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António
2017-05-01
In this paper, we present experimental results on channel characterization of a single-input single-output (SISO) free-space optical (FSO) communication link, based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining the ergodic channel capacity of multiple-input multiple-output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on the SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link have good fits with log-normal distributions, and that the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).
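The measured scintillation index mentioned above is conventionally SI = ⟨I²⟩/⟨I⟩² − 1 over received-irradiance samples; a minimal sketch (the sample values are invented for illustration, not measured data from the paper):

```python
# Sketch: scintillation index SI = <I^2>/<I>^2 - 1 from normalized
# received-irradiance samples (illustrative values, not measured data).
def scintillation_index(samples):
    mean = sum(samples) / len(samples)
    mean_sq = sum(x * x for x in samples) / len(samples)
    return mean_sq / mean ** 2 - 1.0

irradiance = [1.0, 1.2, 0.8, 1.1, 0.9]
si = scintillation_index(irradiance)
print(round(si, 3))   # -> 0.02; a small SI indicates weak turbulence
```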
Small Fish Species as Powerful Model Systems to Study Vertebrate Physiology in Space
NASA Astrophysics Data System (ADS)
Muller, M.; Aceto, J.; Dalcq, J.; Alestrom, P.; Nourizadeh-Lillabadi, R.; Goerlich, R.; Schiller, V.; Winkler, C.; Renn, J.; Eberius, M.; Slenzka, K.
2008-06-01
Small fish models, mainly zebrafish (Danio rerio) and medaka (Oryzias latipes), have been used for many years as powerful model systems for vertebrate developmental biology. Moreover, these species are increasingly recognized as valuable systems to study vertebrate physiology, pathology, pharmacology and toxicology, including in particular bone physiology. The biology of small fishes presents many advantages, such as transparency of the embryos, external and rapid development, small size and easy reproduction. Further characteristics are particularly useful for space research or for large scale screening approaches. Finally, many technologies for easily characterizing bones are available. Our objective is to investigate the changes induced by microgravity in small fish. By combining whole genome analysis (microarray, DNA methylation, chromatin modification) with live imaging of selected genes in transgenic animals, a comprehensive and integrated characterization of physiological changes in space could be gained, especially concerning bone physiology.
Sinchenko, Elena; Gibbs, W E Keith; Davis, Claire E; Stoddart, Paul R
2010-11-20
A distributed optical-fiber sensing system based on pulsed excitation and time-gated photon counting has been used to locate a fluorescent region along the fiber. The complex Alq3 and the infrared dye IR-125 were examined with 405 and 780 nm excitation, respectively. A model to characterize the response of the distributed fluorescence sensor to a Gaussian input pulse was developed and tested. Analysis of the Alq3 fluorescent response confirmed the validity of the model and enabled the fluorescence lifetime to be determined. The intrinsic lifetime obtained (18.2±0.9 ns) is in good agreement with published data. The decay rate was found to be proportional to concentration, which is indicative of collisional deactivation. The model allows the spatial resolution of a distributed sensing system to be improved for fluorophores with lifetimes that are longer than the resolution of the sensing system.
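The lifetime-extraction step can be sketched by fitting tau in I(t) = I0·exp(−t/tau) via linear regression on log-intensity (a simplified stand-in: the paper's model additionally accounts for the Gaussian excitation pulse, and the decay data below are synthetic with the reported 18.2 ns lifetime plugged in):

```python
# Sketch: recover a fluorescence lifetime from a sampled decay by
# least-squares regression on log(intensity). Synthetic, noiseless data.
import math

def fit_lifetime(times_ns, intensities):
    xs, ys = times_ns, [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope          # I(t) = I0*exp(-t/tau)  =>  tau = -1/slope

t = [0.0, 5.0, 10.0, 20.0, 40.0]                 # ns
decay = [math.exp(-ti / 18.2) for ti in t]       # true tau = 18.2 ns
print(round(fit_lifetime(t, decay), 1))          # -> 18.2
```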
The role of fractional calculus in modeling biological phenomena: A review
NASA Astrophysics Data System (ADS)
Ionescu, C.; Lopes, A.; Copot, D.; Machado, J. A. T.; Bates, J. H. T.
2017-10-01
This review provides the latest developments and trends in the application of fractional calculus (FC) in biomedicine and biology. Nature has often been shown to follow rather simple rules that lead to the emergence of complex phenomena as a result. Among these, the paper addresses the properties of respiratory lung tissue, whose natural solutions arise from the midst of FC in the form of non-integer differ-integral solutions and non-integer parametric models. Diffusion of substances in the human body, e.g. drug diffusion, is another phenomenon well known to be captured with such mathematical models. FC has been employed in neuroscience to characterize the generation of action potentials and spiking patterns, but also in characterizing bio-systems (e.g. vegetable tissues). Despite their natural complexity, biological systems belong to this class of systems as well, and for them FC has offered parsimonious yet accurate models. This review paper is a collection of results and literature reports that are essential to any versed engineer working in multidisciplinary applications, biomedical ones in particular.
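A common numerical entry point to FC is the Grünwald-Letnikov discretization of a fractional derivative; the sketch below is a generic illustration (not code from the review), using the standard recursive binomial weights:

```python
# Hedged sketch: Grünwald-Letnikov approximation of a fractional
# derivative of order alpha at the last sample of f, with step h.
def gl_weights(alpha, n):
    """Recursive GL weights w_k = (-1)^k * C(alpha, k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, alpha, h):
    w = gl_weights(alpha, len(f))
    return sum(wk * f[-1 - k] for k, wk in enumerate(w)) / h ** alpha

# sanity check: for alpha = 1 the GL formula reduces to a backward
# difference, so the derivative of f(t) = t is 1
h = 0.01
val = gl_derivative([i * h for i in range(101)], 1.0, h)
print(round(val, 6))   # -> 1.0
```

For non-integer alpha the same code weights the entire history of f, which is exactly the long-memory property that makes FC attractive for tissue and diffusion models.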
2014-03-31
BPMN). This is when the MITRE Acquisition Guidance Model (AGM) model effort was ... developed using the iGrafx6 tool with BPMN [12]. The AGM provides a high-level characterization of ... the activities, events and messages using a BPMN notation as shown in Figure 14. It
NASA Astrophysics Data System (ADS)
Lesanovsky, Igor; van Horssen, Merlijn; Guţă, Mădălin; Garrahan, Juan P.
2013-04-01
We describe how to characterize dynamical phase transitions in open quantum systems from a purely dynamical perspective, namely, through the statistical behavior of quantum jump trajectories. This approach goes beyond considering only properties of the steady state. While in small quantum systems dynamical transitions can only occur trivially at limiting values of the controlling parameters, in many-body systems they arise as collective phenomena and within this perspective they are reminiscent of thermodynamic phase transitions. We illustrate this in open models of increasing complexity: a three-level system, the micromaser, and a dissipative version of the quantum Ising model. In these examples dynamical transitions are accompanied by clear changes in static behavior. This is however not always the case, and, in general, dynamical phases need to be uncovered by observables which are strictly dynamical, e.g., dynamical counting fields. We demonstrate this via the example of a class of models of dissipative quantum glasses, whose dynamics can vary widely despite having identical (and trivial) stationary states.
Study of an intraurban travel demand model incorporating commuter preference variables
NASA Technical Reports Server (NTRS)
Holligan, P. E.; Coote, M. A.; Rushmer, C. R.; Fanning, M. L.
1971-01-01
The model is based on the substantial travel data base for the nine-county San Francisco Bay Area, provided by the Metropolitan Transportation Commission. The model is of the abstract type, and makes use of commuter attitudes towards modes and simple demographic characteristics of zones in a region to predict interzonal travel by mode for the region. A characterization of the STOL/VTOL mode was extrapolated by means of a subjective comparison of its expected characteristics with those of modes characterized by the survey. Predictions of STOL demand were made for the Bay Area, and an aircraft network was developed to serve this demand. When this aircraft system is compared to the base-case system, the demand for STOL service has increased fivefold and the resulting economics show considerable benefit from the increased scale of operations. In the previous study all systems required subsidy in varying amounts. The new system shows a substantial profit at an average fare of $3.55 per trip.
NASA Technical Reports Server (NTRS)
Sulzman, F. W.
1981-01-01
The effects of the Spacelab environment on the circadian rhythms in microorganisms are investigated. Neurospora is chosen because of its well characterized circadian rhythm of growth. Growth rate, banding patterns, and circadian period and phase information are studied.
In Vivo and In Vitro Characterization of a Plasmodium Liver Stage-Specific Promoter
Horstmann, Sebastian; Annoura, Takeshi; del Portillo, Hernando A.; Khan, Shahid M.; Heussler, Volker T.
2015-01-01
Little is known about stage-specific gene regulation in Plasmodium parasites, in particular during the liver stage of development. We have previously described, in vitro, a liver stage-specific (lisp2) gene promoter region in the Plasmodium berghei rodent model. Using a dual luminescence system, we now confirm the stage specificity of this promoter region in vivo as well. Furthermore, by substitution and deletion analyses we have extended our in vitro characterization of important elements within the promoter region. Importantly, the dual luminescence system allows analysis of promoter constructs while avoiding mouse-consuming cloning procedures for transgenic parasites. This makes extensive mutation and deletion studies a reasonable approach in the malaria mouse model as well. Stage-specific expression constructs and parasite lines are extremely valuable tools for research on Plasmodium liver stage biology. Such reporter lines offer a promising opportunity for assessment of liver stage drugs, characterization of genetically attenuated parasites and liver stage-specific vaccines both in vivo and in vitro, and may be key for the generation of inducible systems. PMID:25874388
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Foster, Benjamin; Kisner, Roger A
2016-01-01
This paper documents our progress developing an ultrasound phased array system in combination with a model-based iterative reconstruction (MBIR) algorithm to inspect the health of and characterize the composition of the near-wellbore region for geothermal reservoirs. The main goal for this system is to provide a near-wellbore in-situ characterization capability that will significantly improve wellbore integrity evaluation and near-wellbore fracture network mapping. A more detailed image of the fracture network near the wellbore in particular will enable the selection of optimal locations for stimulation along the wellbore, provide critical data that can be used to improve stimulation design, and provide a means for measuring evolution of the fracture network to support long term management of reservoir operations. Development of such a measurement capability supports current hydrothermal operations as well as the successful demonstration of Engineered Geothermal Systems (EGS). The paper will include the design of the phased array system, the performance specifications, and the characterization methodology. In addition, we will describe the MBIR forward model derived for the phased array system and the propagation of compressional waves through a pseudo-homogenous medium.
A discrimination model in waste plastics sorting using NIR hyperspectral imaging system.
Zheng, Yan; Bai, Jiarui; Xu, Jingna; Li, Xiayang; Zhang, Yimin
2018-02-01
Classification of plastics is important in the recycling industry. A plastic identification model in the near infrared spectroscopy wavelength range 1000-2500 nm is proposed for the characterization and sorting of waste plastics using acrylonitrile butadiene styrene (ABS), polystyrene (PS), polypropylene (PP), polyethylene (PE), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The model is built from the feature wavelengths of standard samples by applying principal component analysis (PCA), and the accuracy, properties, and cross-validation of the model were analyzed. The model contains just a simple equation, center-of-mass coordinates, and a radial distance, with which it is easy to develop classification and sorting software. A hyperspectral imaging system (HIS) with the identification model was verified for practical application using unknown plastics. Results showed that the identification accuracy for unknown samples is 100%. All results suggested that the discrimination model has potential for an on-line characterization and sorting platform for waste plastics based on HIS. Copyright © 2017 Elsevier Ltd. All rights reserved.
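The classification rule described, class centers plus a radial-distance check, can be sketched as a nearest-centroid classifier with a rejection radius (a generic stand-in: the paper's centers come from PCA scores of standard samples, while the 2-D feature vectors and threshold below are invented):

```python
# Sketch of center-plus-radial-distance classification: assign an
# unknown sample to the nearest class center, but reject it if no
# center lies within the radius. Feature vectors are illustrative.
import math

def classify(sample, centers, max_radius):
    dists = {label: math.dist(sample, c) for label, c in centers.items()}
    label = min(dists, key=dists.get)
    return label if dists[label] <= max_radius else "unknown"

centers = {"PET": (0.9, 0.1), "PVC": (0.1, 0.9), "PP": (0.5, 0.5)}
print(classify((0.85, 0.15), centers, 0.2))  # -> PET
print(classify((2.0, 2.0), centers, 0.2))    # -> unknown (outside all radii)
```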
NASA Astrophysics Data System (ADS)
Schweitzer, Ben; Wilke, Stephen; Khateeb, Siddique; Al-Hallaj, Said
2015-08-01
A lumped (0-D) numerical model has been developed for simulating the thermal response of a lithium-ion battery pack with a phase-change composite (PCC™) thermal management system. A small 10s4p battery pack utilizing PCC material was constructed and subjected to discharge at various C-rates in order to validate the lumped model. The 18650 size Li-ion cells used in the pack were electrically characterized to determine their heat generation, and various PCC materials were thermally characterized to determine their apparent specific heat as a function of temperature. Additionally, a 2-D FEA thermal model was constructed to help understand the magnitude of spatial temperature variation in the pack, and to understand the limitations of the lumped model. Overall, good agreement is seen between experimentally measured pack temperatures and the 0-D model, and the 2-D FEA model predicts minimal spatial temperature variation for PCC-based packs at C-rates of 1C and below.
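The lumped (0-D) approach above can be sketched as a single heat balance, m·c(T)·dT/dt = Q_gen − hA·(T − T_amb), with an "apparent specific heat" bump standing in for the phase-change material (every parameter value below is an illustrative assumption, not a value from the paper):

```python
# Minimal 0-D thermal sketch of a pack with a phase-change composite:
# the apparent specific heat spikes in the melt window, which clamps
# the temperature rise. All parameters are illustrative assumptions.
def apparent_cp(T, cp=2000.0, latent_peak=20000.0, T_melt=45.0, width=3.0):
    """Specific heat (J/kg-K) with a phase-change peak around T_melt."""
    return cp + (latent_peak if abs(T - T_melt) < width else 0.0)

def simulate(q_gen, m=2.0, hA=1.5, T_amb=25.0, dt=1.0, steps=3600):
    """Forward-Euler integration of m*c(T)*dT/dt = q_gen - hA*(T - T_amb)."""
    T = T_amb
    for _ in range(steps):
        T += (q_gen - hA * (T - T_amb)) / (m * apparent_cp(T)) * dt
    return T

T_end = simulate(q_gen=40.0)
print(round(T_end, 1))
# the pack stalls inside the melt window (~42-48 C) instead of racing
# toward the ~51.7 C convective steady state: the PCC buffering effect
```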
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors, and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic-scale modeling.
Randazzo, C L; De Luca, S; Todaro, A; Restuccia, C; Lanza, C M; Spagna, G; Caggia, C
2007-08-01
The aim of this work was to preliminarily characterize wild lactic acid bacteria (LAB), previously isolated during artisanal Pecorino Siciliano (PS) cheese-making, for technological and flavour formation abilities in a model cheese system. Twelve LAB were studied for the ability to grow at 10 and 45 degrees C, and to coagulate and acidify both reconstituted skim milk and ewes' milk. Moreover, the capacity of the strains to generate aroma compounds was evaluated in a model cheese system at 30- and 60-day ripening. Flavour compounds were screened by sensory analysis and through gas chromatography (GC)-mass spectrometry (MS). Most of the strains were able to grow both at 10 and 45 degrees C and exhibited high ability to acidify and coagulate ewes' milk. Sensory evaluation revealed that the wild strains produced more significant flavour attributes than commercial strains in the 60-day-old model cheese system. GC-MS data confirmed the results of the sensory evaluations and showed the ability of wild lactobacilli to generate key volatile compounds. In particular, three wild lactobacilli strains, belonging to the Lactobacillus casei, Lb. rhamnosus and Lb. plantarum species, generated in both the 60- and 30-day-old model cheese systems the 3-methyl butan(al)(ol) compound, which is associated with fruity taste. The present work preliminarily demonstrated that the technological and flavour formation abilities of the wild strains are strain-specific and that wild lactobacilli, which produced key flavour compounds during ripening, could be used as tailor-made starters. This study reports the technological characterization and flavour formation ability of wild LAB strains isolated from artisanal Pecorino cheese and highlights that the catabolic activities were highly strain dependent. Hence, wild lactobacilli could be selected as tailor-made starter cultures for PS cheese manufacture.
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Anticipatory Cognitive Systems: a Theoretical Model
NASA Astrophysics Data System (ADS)
Terenzi, Graziano
This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive-level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.
2012-04-01
A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. 
An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
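The ensemble-based weighting idea can be sketched in a few lines; the function name and data layout below are illustrative, not the Marine Model Optimization Testbed's actual interface:

```python
from statistics import pvariance

def weighted_misfit(observations, simulation, ensemble):
    """Calibration cost function in which each model-data difference is
    weighted by the simulation error variance estimated from an ensemble
    of runs driven with plausible perturbations of the environmental
    input data (mixed layer depth, flux divergences, initial state)."""
    cost = 0.0
    for t, obs in enumerate(observations):
        # Spread across ensemble members at time t approximates the
        # expected simulation error variance for that model-data pair.
        var_t = pvariance([member[t] for member in ensemble])
        cost += (obs - simulation[t]) ** 2 / max(var_t, 1e-12)
    return cost
```

Points where environment error induces a large expected simulation error variance contribute little to the cost, which is how the scheme discounts model-data differences of low significance.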
Characterization of In-Body to On-Body Wireless Radio Frequency Link for Upper Limb Prostheses.
Stango, Antonietta; Yazdandoost, Kamya Yekeh; Negro, Francesco; Farina, Dario
2016-01-01
Wireless implanted devices can be used to interface patients with disabilities with the aim of restoring impaired motor functions. Implanted devices that record and transmit electromyographic (EMG) signals have been applied for the control of active prostheses. This simulation study investigates the propagation losses and the absorption rate of a wireless radio frequency link for in-to-on body communication in the medical implant communication service (MICS) frequency band to control myoelectric upper limb prostheses. The implanted antenna is selected and a suitable external antenna is designed. The characterization of both antennas is done by numerical simulations. A heterogeneous 3D body model and a 3D electromagnetic solver have been used to model the path loss and to characterize the specific absorption rate (SAR). The path loss parameters were extracted and the SAR was characterized, verifying the compliance with the guideline limits. The path loss model has been also used for a preliminary link budget analysis to determine the feasibility of such system compliant with the IEEE 802.15.6 standard. The resulting link margin of 11 dB confirms the feasibility of the system proposed.
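A link budget of the kind described reduces to addition in decibel units. The sketch below is generic, and the numbers in the test are made up for illustration rather than taken from the paper's MICS-band analysis:

```python
def link_margin_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   path_loss_db, rx_sensitivity_dbm):
    """Link budget in dB units: received power is transmit power plus
    antenna gains minus path loss; the margin is whatever remains above
    the receiver sensitivity. A positive margin means the link closes."""
    rx_power_dbm = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db
    return rx_power_dbm - rx_sensitivity_dbm
```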
NASA Technical Reports Server (NTRS)
Trivedi, K. S. (Editor); Clary, J. B. (Editor)
1980-01-01
A computer aided reliability estimation procedure (CARE 3), developed to model the behavior of ultrareliable systems required by flight-critical avionics and control systems, is evaluated. The mathematical models, numerical method, and fault-tolerant architecture modeling requirements are examined, and the testing and characterization procedures are discussed. Recommendations aimed at enhancing CARE 3 are presented; in particular, the need for a better exposition of the method and the user interface is emphasized.
Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures with finite single values within an observation window, and thus cannot characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
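For contrast with the instantaneous indices proposed here, the conventional window-based sample entropy that they generalize can be computed as follows (a simple, unoptimized sketch that yields one value per observation window):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Window-based sample entropy: negative log of the conditional
    probability that subsequences matching for m points also match for
    m + 1 points, within tolerance r times the standard deviation."""
    n = len(series)
    mean = sum(series) / n
    tol = r * (sum((x - mean) ** 2 for x in series) / n) ** 0.5

    def matches(length):
        # Count template pairs that agree pointwise within tolerance.
        templates = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            all(abs(a - b) <= tol for a, b in zip(templates[i], templates[j]))
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```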
Sources of uncertainty involved in exposure reconstruction for a short half-life chemical, carbaryl, were characterized using the Cumulative and Aggregate Risk Evaluation System (CARES), an exposure model, and a human physiologically based pharmacokinetic (PBPK) model. CARES was...
NASA Astrophysics Data System (ADS)
Vergara, H. J.; Kirstetter, P.; Gourley, J. J.; Flamig, Z.; Hong, Y.
2015-12-01
The macro-scale patterns of simulated streamflow errors are studied in order to characterize uncertainty in a hydrologic modeling system forced with the Multi-Radar/Multi-Sensor (MRMS; http://mrms.ou.edu) quantitative precipitation estimates for flood forecasting over the Conterminous United States (CONUS). The hydrologic model is the centerpiece of the Flooded Locations And Simulated Hydrograph (FLASH; http://flash.ou.edu) real-time system. The hydrologic model is implemented at 1-km/5-min resolution to generate estimates of streamflow. Data from the CONUS-wide stream gauge network of the United States Geological Survey (USGS) were used as a reference to evaluate the discrepancies with the hydrological model predictions. Streamflow errors were studied at the event scale with particular focus on the peak flow magnitude and timing. A total of 2,680 catchments over CONUS and 75,496 events from a 10-year period are used for the simulation diagnostic analysis. Associations between streamflow errors and geophysical factors were explored and modeled. It is found that hydro-climatic factors and radar coverage could explain significant underestimation of peak flow in regions of complex terrain. Furthermore, the statistical modeling of peak flow errors shows that other geophysical factors such as basin geomorphometry, pedology, and land cover/use could also provide explanatory information. Results from this research demonstrate the utility of uncertainty characterization in providing guidance to improve model adequacy, parameter estimates, and input quality control. Likewise, the characterization of uncertainty enables probabilistic flood forecasting that can be extended to ungauged locations.
NASA Astrophysics Data System (ADS)
Mi, Ye
1998-12-01
The major objective of this thesis is focused on theoretical and experimental investigations of identifying and characterizing vertical and horizontal flow regimes in two-phase flows. A methodology of flow regime identification with impedance-based neural network systems and a comprehensive model of vertical slug flow have been developed. Vertical slug flow has been extensively investigated and characterized with geometric, kinematic and hydrodynamic parameters. A multi-sensor impedance void-meter and a multi-sensor magnetic flowmeter were developed. The impedance void-meter was cross-calibrated with other reliable techniques for void fraction measurements. The performance of the impedance void-meter to measure the void propagation velocity was evaluated by the drift flux model. It was proved that the magnetic flowmeter was applicable to vertical slug flow measurements. Separable signals from these instruments allow us to unearth most characteristics of vertical slug flow. A methodology of vertical flow regime identification was developed. Supervised neural network and self-organizing neural network systems were employed. First, they were trained with results from an idealized simulation of impedance in a two-phase mixture. The simulation was mainly based on Mishima and Ishii's flow regime map, the drift flux model, and the newly developed model of slug flow. Then, these trained systems were tested with impedance signals. The results showed that the neural network systems were appropriate classifiers of vertical flow regimes. The theoretical models and experimental databases used in the simulation were reliable. Furthermore, this approach was applied successfully to horizontal flow identification. A comprehensive model was developed to predict important characteristics of vertical slug flow. It was realized that the void fraction of the liquid slug is determined by the relative liquid motion between the Taylor bubble tail and the Taylor bubble wake. 
Relying on this understanding and on experimental results, a dedicated relationship was built for the void fraction of the liquid slug, considerably improving its prediction. Vertical slug flows were characterized extensively with the impedance void-meter and the magnetic flowmeter, and the theoretical predictions agree well with the experimental results.
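The drift-flux evaluation mentioned above relates the mean gas velocity to the total volumetric flux. The constants below are typical round-tube values used purely for illustration, not the thesis's fitted parameters:

```python
def gas_velocity_drift_flux(j, c0=1.2, vgj=0.35):
    """Drift-flux model (Zuber-Findlay form): mean gas velocity
    v_g = C0 * j + v_gj, where j is the total volumetric flux, C0 is a
    distribution parameter, and v_gj is the drift velocity. Comparing
    a measured void propagation velocity against this prediction is
    one way to evaluate an impedance void-meter."""
    return c0 * j + vgj
```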
Integrated energy balance analysis for Space Station Freedom
NASA Technical Reports Server (NTRS)
Tandler, John
1991-01-01
An integrated simulation model is described which characterizes the dynamic interaction of the energy transport subsystems of Space Station Freedom for given orbital conditions and for a given set of power and thermal loads. Subsystems included in the model are the Electric Power System (EPS), the Internal Thermal Control System (ITCS), the External Thermal Control System (ETCS), and the cabin Temperature and Humidity Control System (THC) (which includes the avionics air cooling, cabin air cooling, and intermodule ventilation systems). Models of the subsystems were developed in a number of system-specific modeling tools and validated. The subsystem models were then combined into integrated models to address a number of integrated performance issues involving the ability of the integrated energy transport system of Space Station Freedom to provide power, controlled cabin temperature and humidity, and equipment thermal control to support operations.
Saavedra-Leos, M Z; Leyva-Porras, C; Martínez-Guerra, E; Pérez-García, S A; Aguilar-Martínez, J A; Álvarez-Salas, C
2014-05-25
In this work, two systems based on a carbohydrate polymer were studied: inulin as a model system and inulin-orange juice as a complex system. Both systems were stored at different water activity conditions and subsequently characterized. Type II water adsorption isotherms were fitted with the GAB model and the water monolayer content was determined for each system. Thermal analyses showed that at low water activities (aw) the systems were fully amorphous; as aw increased, crystallinity developed. This behavior was corroborated by X-ray diffraction. In the inulin-orange juice system, crystallization appeared at lower water activity, caused by intensified chemical interactions of the low-molecular-weight species contained in orange juice. The glass transition temperature (Tg), determined by modulated differential scanning calorimetry, decreased with aw. As water was adsorbed, the physical appearance of the samples changed, as observed by optical microscopy and related to the microstructure found by scanning electron microscopy.
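The GAB fit referred to above has a closed form; the parameter values in the test below are invented for illustration, not the study's fitted constants:

```python
def gab_moisture(aw, m0, c, k):
    """GAB sorption isotherm: equilibrium moisture content at water
    activity aw, given the monolayer moisture content m0 and the
    energy constants C and K. Fitting (m0, C, K) to measured
    (aw, moisture) pairs yields the water monolayer content."""
    return (m0 * c * k * aw) / ((1 - k * aw) * (1 - k * aw + c * k * aw))
```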
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
1991-01-01
and other higher order cognitive processes relevant to the design and use of the system. Characterization of human abilities and limitations in terms of ... pilot's workload and cognitive resources at any given moment. Before flight, the pilot can tailor the mode, type and quantity of information provided ... it needs to incorporate models of human cognitive processes and resource limitations into the resource model. As mentioned earlier, characterization of
A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.
Baron, Jonathan; Gürçay, Burcu
2017-05-01
The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.
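In the Rasch model used here, the probability of a utilitarian response depends only on the difference A − D, so the equal-likelihood point at A = D falls out directly:

```python
import math

def p_utilitarian(ability, difficulty):
    """One-parameter logistic (Rasch) model: probability that a subject
    with utilitarian tendency `ability` (A) gives the utilitarian
    response to a dilemma with deontological pull `difficulty` (D)."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))
```

At A = D this returns exactly 0.5, which is why response-probability effects cannot account for any RT difference between the two responses at that point.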
Development of the Dual Aerodynamic Nozzle Model for the NTF Semi-Span Model Support System
NASA Technical Reports Server (NTRS)
Jones, Greg S.; Milholen, William E., II; Goodliff, Scott L.
2011-01-01
The recent addition of a dual flow air delivery system to the NASA Langley National Transonic Facility was experimentally validated with a Dual Aerodynamic Nozzle semi-span model. This model utilized two Stratford calibration nozzles to characterize the weight flow system of the air delivery system. The weight flow boundaries for the air delivery system were identified at mildly cryogenic conditions to be 0.1 to 23 lbm/sec for the high flow leg and 0.1 to 9 lbm/sec for the low flow leg. Results from this test verified system performance and identified problems with the weight-flow metering system that required the vortex flow meters to be replaced at the end of the test.
An agent-based hydroeconomic model to evaluate water policies in Jordan
NASA Astrophysics Data System (ADS)
Yoon, J.; Gorelick, S.
2014-12-01
Modern water systems can be characterized by a complex network of institutional and private actors that represent competing sectors and interests. Identifying solutions to enhance water security in such systems calls for analysis that can adequately account for this level of complexity and interaction. Our work focuses on the development of a hierarchical, multi-agent, hydroeconomic model that attempts to realistically represent complex interactions between hydrologic and multi-faceted human systems. The model is applied to Jordan, one of the most water-poor countries in the world. In recent years, the water crisis in Jordan has escalated due to an ongoing drought and influx of refugees from regional conflicts. We adopt a modular approach in which biophysical modules simulate natural and engineering phenomena, and human modules represent behavior at multiple scales of decision making. The human modules employ agent-based modeling, in which agents act as autonomous decision makers at the transboundary, state, organizational, and user levels. A systematic nomenclature and conceptual framework is used to characterize model agents and modules. Concepts from the Unified Modeling Language (UML) are adopted to promote clear conceptualization of model classes and process sequencing, establishing a foundation for full deployment of the integrated model in a scalable object-oriented programming environment. Although the framework is applied to the Jordanian water context, it is generalizable to other regional human-natural freshwater supply systems.
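As one hypothetical example of an agent decision rule in such a hierarchy (invented for illustration, not taken from the Jordan model), a state-level supplier agent might ration deliveries among user agents when requests exceed supply:

```python
def allocate(supply, demands):
    """Hypothetical supplier-agent rule: satisfy all user-agent demands
    if possible; otherwise ration the available supply in proportion
    to each agent's request."""
    total = sum(demands)
    if total <= supply:
        return list(demands)
    return [supply * d / total for d in demands]
```

In a modular framework like the one described, a rule of this kind would live in a human module, while a biophysical module updates the supply it acts on.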
Propagation Effects in Space-Based Surveillance Systems
1982-02-01
This report describes the first year's effort to investigate propagation effects in space-based radars. A model was developed for analyzing the ... deleterious systems effects by first developing a generalized aperture distribution that ultimately can be applied to any space-based radar configuration ... The propagation effects are characterized in terms of the SATCOM model striation parameters. The form of a generalized channel model for space-based radars
NASA Astrophysics Data System (ADS)
Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin
2017-12-01
This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.
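The magnitude recurrence ingredient of such an SSC model is commonly the Gutenberg-Richter relation; the a and b values in the test below are placeholders, not the Istanbul model's:

```python
def gr_annual_rate(m, a, b):
    """Gutenberg-Richter recurrence law: the annual rate N of
    earthquakes with magnitude >= m satisfies log10(N) = a - b*m.
    The b value is one of the SSC parameters carried as an epistemic
    uncertainty in the logic tree."""
    return 10.0 ** (a - b * m)
```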
System/observer/controller identification toolbox
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Horta, Lucas G.; Phan, Minh
1992-01-01
System identification is the process of constructing a mathematical model from input and output data for a system under testing, and characterizing the system uncertainties and measurement noises. The mathematical model structure can take various forms depending upon the intended use. The SYSTEM/OBSERVER/CONTROLLER IDENTIFICATION TOOLBOX (SOCIT) is a collection of functions, written in the MATLAB language and expressed in M-files, that implements a variety of modern system identification techniques. For an open-loop system, the central features of the SOCIT are functions for identification of a system model and its corresponding forward and backward observers directly from input and output data. The system and observers are represented by a discrete model. The identified model and observers may be used for controller design of linear systems as well as identification of modal parameters such as damping ratios, frequencies, and mode shapes. For a closed-loop system, an observer and its corresponding controller gain can be identified directly from input and output data.
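The flavor of identifying a model from input/output data can be shown with a toy first-order ARX fit; SOCIT's actual algorithms (state-space and observer identification) are far more general than this sketch:

```python
def identify_arx(u, y):
    """Least-squares fit of the first-order ARX model
    y[k] = a*y[k-1] + b*u[k-1] from input u and output y,
    solving the 2x2 normal equations directly."""
    s_yy = s_uu = s_yu = s_y1 = s_u1 = 0.0
    for k in range(1, len(y)):
        s_yy += y[k-1] * y[k-1]
        s_uu += u[k-1] * u[k-1]
        s_yu += y[k-1] * u[k-1]
        s_y1 += y[k-1] * y[k]
        s_u1 += u[k-1] * y[k]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * s_y1 - s_yu * s_u1) / det
    b = (s_yy * s_u1 - s_yu * s_y1) / det
    return a, b
```

With noise-free data generated by a known (a, b), the fit recovers the parameters exactly; with measurement noise, the estimates carry the uncertainty that identification must characterize.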
Comparative Pedagogical Studies on Models of Education Systems Management in the EU and Ukraine
ERIC Educational Resources Information Center
Desiatov, Tymofii
2017-01-01
The article highlights the peculiarities of models of education systems management in the EU and Ukraine. It has been proved that effectiveness of the education process is determined by managerial culture, which characterizes a manager's professional image. Special attention has been paid to finding the right balance between centralization and…
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation
NASA Technical Reports Server (NTRS)
Shoemaker, Michael A.; Wright, Cinnamon; Liounis, Andrew J.; Getzandanner, Kenneth M.; Van Eepoel, John M.; DeWeese, Keith D.
2016-01-01
This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina, and is derived from the stereo-photoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and discussion is given on various algorithmic design choices. Unlike other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of the mission.
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model, MODTRAN, is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
Research in Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1997-01-01
This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results in the scalability issues of distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.
Tools and techniques for developing policies for complex and uncertain systems.
Bankes, Steven C
2002-05-14
Agent-based models (ABM) are examples of complex adaptive systems, which can be characterized as those systems for which no model less complex than the system itself can accurately predict in detail how the system will behave at future times. Consequently, the standard tools of policy analysis, based as they are on devising policies that perform well on some best estimate model of the system, cannot be reliably used for ABM. This paper argues that policy analysis by using ABM requires an alternative approach to decision theory. The general characteristics of such an approach are described, and examples are provided of its application to policy analysis.
Characterizability of metabolic pathway systems from time series data.
Voit, Eberhard O
2013-12-01
Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE.
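The pseudo-inverse step can be illustrated on a toy pathway with two metabolites and three fluxes (more fluxes than metabolites). This minimal sketch hard-codes the 2x2 Gram-matrix inverse and is only an illustration of the linear-algebra idea, not DFE itself:

```python
def min_norm_fluxes(N, dxdt):
    """Minimum-norm flux vector v solving N v = dx/dt for a pathway
    whose stoichiometric matrix N has two rows (metabolites) and more
    columns (fluxes): v = N^T (N N^T)^(-1) dx/dt, the Moore-Penrose
    solution when N has full row rank."""
    cols = len(N[0])
    # Gram matrix G = N N^T (2 x 2 here), inverted by the cofactor formula.
    g = [[sum(N[i][k] * N[j][k] for k in range(cols)) for j in range(2)]
         for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]
    w = [sum(ginv[i][j] * dxdt[j] for j in range(2)) for i in range(2)]
    return [sum(N[i][k] * w[i] for i in range(2)) for k in range(cols)]
```

The recovered fluxes reproduce the measured slopes of the metabolite time series exactly while having minimum norm, which is one concrete sense in which an underdetermined flux set can still be partially characterized.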
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernest A. Mancini
The University of Alabama in cooperation with Texas A&M University, McGill University, Longleaf Energy Group, Strago Petroleum Corporation, and Paramount Petroleum Company are undertaking an integrated, interdisciplinary geoscientific and engineering research project. The project is designed to characterize and model reservoir architecture, pore systems and rock-fluid interactions at the pore to field scale in Upper Jurassic Smackover reef and carbonate shoal reservoirs associated with varying degrees of relief on pre-Mesozoic basement paleohighs in the northeastern Gulf of Mexico. The project effort includes the prediction of fluid flow in carbonate reservoirs through reservoir simulation modeling which utilizes geologic reservoir characterization and modeling and the prediction of carbonate reservoir architecture, heterogeneity and quality through seismic imaging. The primary objective of the project is to increase the profitability, producibility and efficiency of recovery of oil from existing and undiscovered Upper Jurassic fields characterized by reef and carbonate shoals associated with pre-Mesozoic basement paleohighs. The principal research effort for Year 1 of the project has been reservoir description and characterization. This effort has included four tasks: (1) geoscientific reservoir characterization, (2) the study of rock-fluid interactions, (3) petrophysical and engineering characterization and (4) data integration. This work was scheduled for completion in Year 1. Overall, the project work is on schedule. Geoscientific reservoir characterization is essentially completed. The architecture, porosity types and heterogeneity of the reef and shoal reservoirs at Appleton and Vocation Fields have been characterized using geological and geophysical data. The study of rock-fluid interactions has been initiated.
Observations regarding the diagenetic processes influencing pore system development and heterogeneity in these reef and shoal reservoirs have been made. Petrophysical and engineering property characterization is progressing. Data on reservoir production rate and pressure history at Appleton and Vocation Fields have been tabulated, and porosity data from core analysis have been correlated with porosity as observed from well log response. Data integration is on schedule, in that the geological, geophysical, petrophysical and engineering data collected to date for Appleton and Vocation Fields have been compiled into a fieldwide digital database for reservoir characterization, modeling and simulation for the reef and carbonate shoal reservoirs of each of these fields.
Hydrological modeling in forested systems
H.E. Golden; G.R. Evenson; S. Tian; Devendra Amatya; Ge Sun
2015-01-01
Characterizing and quantifying interactions among components of the forest hydrological cycle is complex and usually requires a combination of field monitoring and modelling approaches (Weiler and McDonnell, 2004; National Research Council, 2008). Models are important tools for testing hypotheses, understanding hydrological processes and synthesizing experimental data...
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
A complete categorization of multiscale models of infectious disease systems.
Garira, Winston
2017-12-01
Modelling of infectious disease systems has entered a new era in which disease modellers are increasingly turning to multiscale modelling to extend traditional modelling frameworks into new application areas and to achieve higher levels of detail and accuracy in characterizing infectious disease systems. In this paper we present a categorization framework for categorizing multiscale models of infectious disease systems. The categorization framework consists of five integration frameworks and five criteria. We use the categorization framework to give a complete categorization of host-level immuno-epidemiological models (HL-IEMs). This categorization framework is also shown to be applicable in categorizing other types of multiscale models of infectious diseases beyond HL-IEMs through modifying the initial categorization framework presented in this study. Categorization of multiscale models of infectious disease systems in this way is useful in bringing some order to the discussion on the structure of these multiscale models.
A collaborative molecular modeling environment using a virtual tunneling service.
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative researches of three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problem caused by the temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which were characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment to integrate different molecule modeling systems using a virtual tunneling service. We integrated Co-Coot, which is a collaborative crystallographic object-oriented toolkit, with VRMMS, which is a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results through pilot experiments.
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both the normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
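The model-error test above can be sketched as follows, under the assumption (one plausible reading of the abstract, not the paper's exact algorithm) that the squared Mahalanobis distance of the filter innovation is compared against a chi-square threshold and used to inflate the predicted covariance; the function name and numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def check_and_scale(innovation, S, alpha=0.05):
    """Flag contact-model error via the innovation's Mahalanobis distance.

    innovation : filter innovation vector (measurement minus prediction)
    S          : predicted innovation covariance
    Returns the squared distance and a covariance inflation factor (>= 1).
    """
    d2 = innovation @ np.linalg.solve(S, innovation)  # Mahalanobis distance^2
    thresh = chi2.ppf(1.0 - alpha, df=innovation.size)
    scale = max(1.0, d2 / thresh)  # inflate covariance only when d2 > thresh
    return d2, scale

# Toy innovation consistent with its covariance: no inflation triggered.
innov = np.array([0.1, -0.2])
S = np.eye(2) * 0.05
d2, s = check_and_scale(innov, S)
```

When the contact model is wrong, the innovation grows relative to S, the distance exceeds the threshold, and the returned factor inflates the predicted covariance so the filter weights measurements more heavily.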
2007-06-30
fractal dimensions and Lyapunov exponents. Fractal dimensions characterize the geometrical complexity of dynamics (e.g., the spatial distribution of points along...) invariant classifiers (e.g., Lyapunov exponents and fractal dimensions). The first three steps show how chaotic systems may be separated from stochastic... correlated random walk in which a = 2H, where H is the Hurst exponent in the interval 0 <= H <= 1, with the case H = 0.5 corresponding to a simple random walk. This model has been
Resource Characterization | Water Power | NREL
...characterization and assessment, NREL has extended its capabilities to the field of water power. NREL's team of ... modeling, data analysis, and Geographic Information Systems. Many years of experience in wind assessment have enabled NREL to develop the skills and methodologies to evaluate the development potential of many...
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
Statistical distribution of mechanical properties for three graphite-epoxy material systems
NASA Technical Reports Server (NTRS)
Reese, C.; Sorem, J., Jr.
1981-01-01
Graphite-epoxy composites are playing an increasing role as viable alternative materials in structural applications, necessitating thorough investigation into the predictability and reproducibility of their material strength properties. This investigation was concerned with tension, compression, and short beam shear coupon testing of large samples from three different material suppliers to determine their statistical strength behavior. Statistical results indicate that a two-parameter Weibull distribution model provides better overall characterization of material behavior for the graphite-epoxy systems tested than does the standard Normal distribution model that is employed for most design work. While either a Weibull or Normal distribution model provides adequate predictions for average strength values, the Weibull model provides better characterization in the lower tail region where the predictions are of maximum design interest. The two sets of the same material were found to have essentially the same material properties, indicating that repeatability can be achieved.
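The comparison described above can be sketched with a two-parameter Weibull fit; the synthetic strength sample and the percentile chosen are illustrative, not the paper's coupon data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "strength" sample standing in for coupon test data (MPa).
strengths = rng.weibull(8.0, 200) * 1500.0

# Two-parameter Weibull: fix the location at zero so only shape (the
# Weibull modulus) and scale are estimated.
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)

# Design interest lies in the lower tail, e.g. the 1st-percentile strength;
# compare the Weibull and Normal predictions there.
p01_weibull = stats.weibull_min.ppf(0.01, shape, loc=0, scale=scale)
p01_normal = stats.norm.ppf(0.01, strengths.mean(), strengths.std())
```

For strength data the two models usually agree near the mean but diverge in this lower tail, which is why the paper finds the Weibull model preferable for design allowables.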
NASA Astrophysics Data System (ADS)
Greco, Roberto; Pagano, Luca
2017-12-01
To manage natural risks, an increasing effort is being put into the development of early warning systems (EWS), namely, approaches that confront catastrophic phenomena through timely forecasting and the spreading of alarms throughout the exposed population. Research efforts aimed at the development and implementation of effective EWS should especially concern the definition and calibration of the interpretative model. This paper analyses the main features characterizing predictive models working in EWS by discussing their aims and their features in terms of model accuracy, the evolutionary stage of the phenomenon at which the prediction is carried out, and model architecture. Original classification criteria based on these features are developed throughout the paper and shown in their practical implementation through examples of flow-like landslides and earth flows, both of which are characterized by rapid evolution and are quite representative of many applications of EWS.
Dynamics of a distributed drill string system: Characteristic parameters and stability maps
NASA Astrophysics Data System (ADS)
Aarsnes, Ulf Jakob F.; van de Wouw, Nathan
2018-03-01
This paper involves the dynamic (stability) analysis of distributed drill-string systems. A minimal set of parameters characterizing the linearized, axial-torsional dynamics of a distributed drill string coupled through the bit-rock interaction is derived. This is found to correspond to five parameters for a simple drill string and eight parameters for a two-sectioned drill-string (e.g., corresponding to the pipe and collar sections of a drilling system). These dynamic characterizations are used to plot the inverse gain margin of the system, parametrized in the non-dimensional parameters, effectively creating a stability map covering the full range of realistic physical parameters. This analysis reveals a complex spectrum of dynamics not evident in stability analysis with lumped models, thus indicating the importance of analysis using distributed models. Moreover, it reveals trends concerning stability properties depending on key system parameters useful in the context of system and control design aiming at the mitigation of vibrations.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Radical chiral Floquet phases in a periodically driven Kitaev model and beyond
NASA Astrophysics Data System (ADS)
Po, Hoi Chun; Fidkowski, Lukasz; Vishwanath, Ashvin; Potter, Andrew C.
2017-12-01
We theoretically discover a family of nonequilibrium fractional topological phases in which time-periodic driving of a 2D system produces excitations with fractional statistics, and produces chiral quantum channels that propagate a quantized fractional number of qubits along the sample edge during each driving period. These phases share some common features with fractional quantum Hall states, but are sharply distinct dynamical phenomena. Unlike the integer-valued invariant characterizing the equilibrium quantum Hall conductance, these phases are characterized by a dynamical topological invariant that is a square root of a rational number, inspiring the label: radical chiral Floquet phases. We construct solvable models of driven and interacting spin systems with these properties, and identify an unusual bulk-boundary correspondence between the chiral edge dynamics and bulk "anyon time-crystal" order characterized by dynamical transmutation of electric-charge into magnetic-flux excitations in the bulk.
Sediment unmixing using detrital geochronology
Sharman, Glenn R.; Johnstone, Samuel
2017-01-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the influence of environmental forcings (e.g., tectonism, climate) on the earth’s surface. Here we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First we summarize ‘top-down’ mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions (‘parents’) that characterize a derived sample or set of samples (‘daughters’). Second we propose the use of ‘bottom-up’ methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
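Top-down mixing of the kind described above can be sketched as a non-negative least-squares problem: find the proportions of prescribed parent distributions that best reproduce a daughter sample. The parent "fingerprints" below are invented toy distributions, not the California data.

```python
import numpy as np
from scipy.optimize import nnls

# Each column is a parent source distribution binned over a common age
# axis (columns sum to 1). Real applications would use binned detrital
# zircon age distributions.
parents = np.array([
    [0.6, 0.1],
    [0.3, 0.2],
    [0.1, 0.7],
])

# Daughter sample constructed as a known 75/25 mixture of the parents.
daughter = 0.75 * parents[:, 0] + 0.25 * parents[:, 1]

# Solve parents @ w ~= daughter with w >= 0, then renormalize so the
# recovered weights are interpretable as mixing proportions.
w, _residual = nnls(parents, daughter)
w /= w.sum()
```

The paper's caution about goodness-of-fit metrics applies here: in real data many weight vectors may fit nearly as well as the single best one, so the range of acceptable mixtures matters more than the point estimate.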
Sediment unmixing using detrital geochronology
NASA Astrophysics Data System (ADS)
Sharman, Glenn R.; Johnstone, Samuel A.
2017-11-01
Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions ('parents') that characterize a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
High Temperature Test Facility Preliminary RELAP5-3D Input Model Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.
NASA Technical Reports Server (NTRS)
Kubat, Gregory
2016-01-01
This report provides a description and performance characterization of the large-scale, relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC flight-test radio. The report contains a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data with observations.
ERIC Educational Resources Information Center
Greene, Jeffrey A.; Azevedo, Roger A.; Torney-Purta, Judith
2008-01-01
We propose an integration of aspects of several developmental and systems of beliefs models of personal epistemology. Qualitatively different positions, including realism, dogmatism, skepticism, and rationalism, are characterized according to individuals' beliefs across three dimensions in a model of epistemic and ontological cognition. This model…
Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan
2016-01-01
A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey's ShakeAlert EEW system.
Development of a Radar-Frequency Metamaterial Measurement and Characterization Apparatus
2012-03-01
Patterson Air Force Base, Ohio. Approved for public release; distribution is unlimited. [Front matter and list-of-figures fragments: GTRI Focused Beam System; Shelby Parallel-Plate Waveguide System; Figure 4: CST Model of Shelby PPWG.]
Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.
Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H
2016-11-01
This study focuses on the characterization of the technical and economic feasibility of an enclosed-photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis characterizes and breaks down the capital investment, the operating costs, and the per-unit production cost of algal diesel. The economic modelling shows total production costs of algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algae-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits for the analyzed model. Different markets for the allocation of co-products significantly shift the economic viability of the algal biofuel system.
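The Monte Carlo step can be sketched as follows; the input distributions and dollar figures below are invented placeholders for the study's actual price and cost projections.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # number of Monte Carlo scenarios

# Illustrative per-liter cost components ($/L), each drawn from an
# assumed uncertainty distribution rather than the paper's data.
capital = rng.triangular(0.8, 1.2, 1.8, n)   # annualized capital cost
operating = rng.normal(1.6, 0.3, n)          # operating cost
credit = rng.uniform(0.0, 0.5, n)            # co-product credit

# Co-product credits offset the gross cost in each simulated scenario.
cost_per_liter = capital + operating - credit

# Summarize the resulting distribution of production cost.
p10, p50, p90 = np.percentile(cost_per_liter, [10, 50, 90])
```

Reporting percentiles (or full histograms) rather than a single point estimate is what lets the analysis attach probabilities to financial performance, as the abstract describes.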
Indoor characterization of the receiver for the novel InPhoCUS concrete tracker CPV system
NASA Astrophysics Data System (ADS)
Pravettoni, Mauro; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo
2012-10-01
The Swiss consortium InPhoCUS has been formed between Airlight Energy Manufacturing SA, the Swiss Federal Institute of Technology and the University of Applied Sciences and Arts of Southern Switzerland (thermal modelling and CPV characterization and qualification, respectively). The consortium is developing an innovative 50-meter long, 9-meter wide, 2-axis concentrating system. The secondary tracking axis allows reaching concentration ratios as high as 500X. Indoor characterization of the 5-cell receiver has been performed to test the effects of the cell-to-cell non-uniformity of irradiance. Results are presented in this work and are also helpful in the development of new techniques for the indoor characterization of CPV receivers.
Aerial Measuring System Sensor Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. S. Detwiler
2002-04-01
This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m².
The helicopter calculations modeled the transport of americium-241 (241Am), as this is the "marker" isotope utilized by the system for Pu detection. The helicopter sensor array consists of two six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
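The altitude dependence at the heart of this characterization can be illustrated with a toy model (not MCNP): for a point source directly below the aircraft, the photopeak count rate falls off with inverse-square geometry times exponential air attenuation. The attenuation coefficient used here is an assumed round number for 241Am's 59.5 keV gamma line in air at sea level.

```python
import numpy as np

# Assumed linear attenuation coefficient of air for ~60 keV gammas (1/m).
MU_AIR = 0.023

def relative_count_rate(altitude_m):
    """Relative photopeak rate from a point source directly below:
    inverse-square geometric falloff times exponential air attenuation."""
    return np.exp(-MU_AIR * altitude_m) / altitude_m**2

# Rates drop steeply with altitude, which is why flying height drives
# both detection limits and required count times.
alts = np.array([50.0, 100.0, 200.0])
rates = relative_count_rate(alts)
```

A distributed (infinite-plane) source falls off more slowly than this point-source case, which is one reason the abstract treats the two geometries separately.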
NASA Astrophysics Data System (ADS)
Ghatak, D.; Zaitchik, B. F.; Limaye, A. S.; Searby, N. D.; Doorn, B.; Bolten, J. D.; Toll, D. L.; Lee, S.; Mourad, B.; Narula, K.; Nischal, S.; Iceland, C.; Bajracharya, B.; Kumar, S.; Shrestha, B. R.; Murthy, M.; Hain, C.; Anderson, M. C.
2015-12-01
South Asia faces severe challenges to meet the need for water for agricultural, domestic and industrial purposes while coping with the threats posed by climate and land use/cover changes on regional hydrology. South Asia is also characterized by extreme climate contrasts, remote and poorly monitored headwaters regions, and large uncertainties in estimates of consumptive water withdrawals. Here, we present results from the South Asia Land Data Assimilation System (South Asia LDAS) that apply multiple simulations involving different combinations of forcing datasets, land surface models, and satellite-derived parameter datasets to characterize the distributed water balance of the subcontinent. The South Asia LDAS ensemble of simulations provides a range of uncertainty associated with model products. The system includes customized irrigation schemes to capture water use and HYMAP streamflow routing for application to floods. This presentation focuses on two key application areas for South Asia LDAS: the representation of extreme floods in transboundary rivers, and the estimation of water use in irrigated agriculture. We show that South Asia LDAS captures important features of both phenomena, address opportunities and barriers for the use of South Asia LDAS in decision support, and review uncertainties and limitations. This work is being performed by an interdisciplinary team of scientists and decision makers, to ensure that the modeling system meets the needs of decision makers at national and regional levels.
A Bayesian system to detect and characterize overlapping outbreaks.
Aronis, John M; Millett, Nicholas E; Wagner, Michael M; Tsui, Fuchiang; Ye, Ye; Ferraro, Jeffrey P; Haug, Peter J; Gesteland, Per H; Cooper, Gregory F
2017-09-01
Outbreaks of infectious diseases such as influenza are a significant threat to human health. Because there are different strains of influenza which can cause independent outbreaks, and influenza can affect demographic groups at different rates and times, there is a need to recognize and characterize multiple outbreaks of influenza. This paper describes a Bayesian system that uses data from emergency department patient care reports to create epidemiological models of overlapping outbreaks of influenza. Clinical findings are extracted from patient care reports using natural language processing. These findings are analyzed by a case detection system to create disease likelihoods that are passed to a multiple outbreak detection system. We evaluated the system using real and simulated outbreaks. The results show that this approach can recognize and characterize overlapping outbreaks of influenza. We describe several extensions that appear promising.
NASA Astrophysics Data System (ADS)
Dagdeviren, Canan; Shi, Yan; Joe, Pauline; Ghaffari, Roozbeh; Balooch, Guive; Usgaonkar, Karan; Gur, Onur; Tran, Phat L.; Crosby, Jessi R.; Meyer, Marcin; Su, Yewang; Chad Webb, R.; Tedesco, Andrew S.; Slepian, Marvin J.; Huang, Yonggang; Rogers, John A.
2015-07-01
Mechanical assessment of soft biological tissues and organs has broad relevance in clinical diagnosis and treatment of disease. Existing characterization methods are invasive, lack microscale spatial resolution, and are tailored only for specific regions of the body under quasi-static conditions. Here, we develop conformal and piezoelectric devices that enable in vivo measurements of soft tissue viscoelasticity in the near-surface regions of the epidermis. These systems achieve conformal contact with the underlying complex topography and texture of the targeted skin, as well as other organ surfaces, under both quasi-static and dynamic conditions. Experimental and theoretical characterization of the responses of piezoelectric actuator-sensor pairs laminated on a variety of soft biological tissues and organ systems in animal models provide information on the operation of the devices. Studies on human subjects establish the clinical significance of these devices for rapid and non-invasive characterization of skin mechanical properties.
An equivalent circuit model of supercapacitors for applications in wireless sensor networks
NASA Astrophysics Data System (ADS)
Yang, Hengzhao; Zhang, Ying
2011-04-01
Energy harvesting technologies have been extensively researched to develop long-lived wireless sensor networks. To better utilize the harvested energy, various energy storage systems are proposed. A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance (VLR) to characterize the self-discharge process. The voltage and temperature dependence of the VLR values is also discussed. Results show that the VLR model is more accurate than the energy recursive equation (ERE) models for short term wireless sensor network applications.
Preliminary Chaotic Model of Snapover on High Voltage Solar Cells
NASA Technical Reports Server (NTRS)
Mackey, Willie R.
1995-01-01
High voltage power systems in space will interact with the space plasma in a variety of ways. One of these, snapover, is characterized by a sudden enlargement of the electron current collection area across normally insulating surfaces. This enhanced current collection drains power from solar array power systems. Optical observations of the snapover phenomenon in the laboratory indicate a functional relation between bias potential and surface glow area. This paper explores the potential benefits of modeling the relation between current and bias potential as an aspect of bifurcation analysis in chaos theory. Successful characterization of snapover as a chaotic phenomenon may provide a means of snapover prevention and control through chaotic synchronization.
Characterization of structural connections for multicomponent systems
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Huckelbridge, Arthur A.
1988-01-01
This study explores combining Component Mode Synthesis methods for coupling structural components with Parameter Identification procedures for improving the analytical modeling of the connections. Improvements in the connection stiffness and damping properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model.
On-Orbit System Identification
NASA Technical Reports Server (NTRS)
Mettler, E.; Milman, M. H.; Bayard, D.; Eldred, D. B.
1987-01-01
Information derived from accelerometer readings benefits important engineering and control functions. Report discusses methodology for detection, identification, and analysis of motions within space station. Techniques of vibration and rotation analyses, control theory, statistics, filter theory, and transform methods integrated to form system for generating models and model parameters that characterize total motion of complicated space station, with respect to both control-induced and random mechanical disturbances.
Mathematical Modeling of Ni/H2 and Li-Ion Batteries
NASA Technical Reports Server (NTRS)
Weidner, John W.; White, Ralph E.; Dougal, Roger A.
2001-01-01
The modelling effort outlined in this viewgraph presentation encompasses the following topics: 1) Electrochemical Deposition of Nickel Hydroxide; 2) Deposition rates of thin films; 3) Impregnation of porous electrodes; 4) Experimental Characterization of Nickel Hydroxide; 5) Diffusion coefficients of protons; 6) Self-discharge rates (i.e., oxygen-evolution kinetics); 7) Hysteresis between charge and discharge; 8) Capacity loss on cycling; 9) Experimental Verification of the Ni/H2 Battery Model; 10) Mathematical Modeling of Li-Ion Batteries; 11) Experimental Verification of the Li-Ion Battery Model; 12) Integrated Power System Models for Satellites; and 13) Experimental Verification of the Integrated-Systems Model.
Geochemistry and the Understanding of Groundwater Systems
NASA Astrophysics Data System (ADS)
Glynn, P. D.; Plummer, L. N.; Weissmann, G. S.; Stute, M.
2009-12-01
Geochemical techniques and concepts have made major contributions to the understanding of groundwater systems. Advances continue to be made through (1) development of measurement and characterization techniques, (2) improvements in computer technology, networks and numerical modeling, (3) investigation of coupled geologic, hydrologic, geochemical and biologic processes, and (4) scaling of individual observations, processes or subsystem models into larger coherent model frameworks. Many applications benefit from progress in these areas, such as: (1) understanding paleoenvironments, in particular paleoclimate, through the use of groundwater archives, (2) assessing the sustainability (recharge and depletion) of groundwater resources, (3) assessing their vulnerability to contamination, (4) evaluating the capacity and consequences of subsurface waste isolation (e.g. geologic carbon sequestration, nuclear and chemical waste disposal), (5) assessing the potential for mitigation/transformation of anthropogenic contaminants in groundwater systems, and (6) understanding the effect of groundwater lag times in ecosystem-scale responses to natural events, land-use changes, human impacts, and remediation efforts. Obtaining “representative” groundwater samples is difficult, and progress in obtaining or interpreting such samples requires new techniques for characterizing groundwater system heterogeneity. Better characterization and simulation of groundwater system heterogeneity (both physical and geochemical) is critical to interpreting the meaning of groundwater “ages”; to understanding and predicting groundwater flow, solute transport, and geochemical evolution; and to quantifying groundwater recharge and discharge processes. Research advances will also come from greater use and progress (1) in the application of environmental tracers to ground water dating and in the analysis of new geochemical tracers (e.g. 
compound specific isotopic analyses, noble gas isotopes, analyses of natural organic tracers), (2) in inverse geochemical and hydrological modeling, (3) in the understanding and simulation of coupled biological, geological, geochemical and hydrological processes, and (4) in the description and quantification of processes occurring at the boundaries of groundwater systems (e.g. unsaturated zone processes, groundwater/surface water interactions, impacts of changing geomorphology and vegetation). Improvements are needed in the integration of widely diverse information. Better techniques are needed to construct coherent conceptual frameworks from individual observations, simulated or reconstructed information, process models, and intermediate scale models. Iterating between data collection, interpretation, and the application of forward, inverse, and statistical modeling tools is likely to provide progress in this area. Quantifying groundwater system processes by using an open-system thermodynamic approach in a common mass- and energy-flow framework will also facilitate comparison and understanding of diverse processes.
Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...
Mathematical Modelling of Continuous Biotechnological Processes
ERIC Educational Resources Information Center
Pencheva, T.; Hristozov, I.; Shannon, A. G.
2003-01-01
Biotechnological processes (BTP) are characterized by a complicated structure of organization and interdependent characteristics. Partial differential equations or systems of partial differential equations are used for their behavioural description as objects with distributed parameters. Modelling of substrate without regard to dispersion…
Chakrabarti, C G; Ghosh, Koyel
2013-10-01
In the present paper we have first introduced a measure of dynamical entropy of an ecosystem on the basis of the dynamical model of the system. The dynamical entropy which depends on the eigenvalues of the community matrix of the system leads to a consistent measure of complexity of the ecosystem to characterize the dynamical behaviours such as the stability, instability and periodicity around the stationary states of the system. We have illustrated the theory with some model ecosystems. Copyright © 2013 Elsevier Inc. All rights reserved.
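The entropy measure above is built from the eigenvalues of the community (Jacobian) matrix, whose real parts determine stability, instability, or periodicity near a stationary state. A hedged sketch of that eigenvalue classification (the paper's exact entropy formula is not given in the abstract, and the example matrices are hypothetical):

```python
import numpy as np

def classify_stationary_state(community_matrix, tol=1e-9):
    """Classify local dynamics near a stationary state from the eigenvalues
    of the community (Jacobian) matrix: all real parts negative -> stable,
    any positive -> unstable, purely imaginary -> periodic/marginal."""
    eig = np.linalg.eigvals(np.asarray(community_matrix, dtype=float))
    re = eig.real
    if np.all(re < -tol):
        return "stable"
    if np.any(re > tol):
        return "unstable"
    return "periodic/marginal"

# Hypothetical 2-species community matrices
damped = [[-1.0, 0.5], [-0.5, -1.0]]   # complex eigenvalues, negative real parts
centre = [[0.0, 1.0], [-1.0, 0.0]]     # purely imaginary: closed orbits
```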
Modeling and evaluating user behavior in exploratory visual analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
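The Markov-chain formulation lends itself to a simple maximum-likelihood estimate: count observed transitions between coded states and normalize each row. A sketch, with the state labels and session sequence invented for illustration:

```python
STATES = ["mental", "interaction", "computational"]

def estimate_transition_matrix(sequence, states=STATES):
    """Maximum-likelihood transition probabilities, estimated by counting
    transitions in an observed state sequence and normalizing each row."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0] * len(states) for _ in states]
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a]][idx[b]] += 1
    return [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]

# Hypothetical coded session (states would be deduced from transcripts,
# recordings, or log files, as the abstract describes)
session = ["mental", "interaction", "computational", "mental",
           "interaction", "interaction", "computational", "mental"]
P = estimate_transition_matrix(session)
```

Each row of `P` is the empirical probability distribution over next states given the current one; row-stochasticity makes the estimated chain directly usable for simulating or comparing exploration behaviors.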
DOE R&D Accomplishments Database
Goodenough, J. B.; Abruna, H. D.; Buchanan, M. V.
2007-04-04
To identify research areas in geosciences, such as behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems, needed for improved energy systems.
Bandlimited computerized improvements in characterization of nonlinear systems with memory
NASA Astrophysics Data System (ADS)
Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.
2016-05-01
The present article discusses some inroads in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra (NWV) algorithm, is named for its principal contributors [1], [2], [3] over many years of developmental research. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms on the system are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order, in order to combat and reasonably alleviate the curse of dimensionality.
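The least-squares kernel fit described above can be sketched for a short-memory discrete Volterra model. The synthetic channel, memory length, and kernel orders below are illustrative only, not the NWV algorithm's actual (computationally reduced) formulation:

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_volterra(x, z, memory=3):
    """Fit h0, first-order kernel h1[k], and second-order kernel h2[j,k] of
    y[n] = h0 + sum_k h1[k] x[n-k] + sum_{j<=k} h2[j,k] x[n-j] x[n-k]
    by linear least squares on the measured input x and output z."""
    n = len(x)
    pairs = list(combinations_with_replacement(range(memory), 2))
    rows, targets = [], []
    for t in range(memory - 1, n):
        lag = [x[t - k] for k in range(memory)]          # x[n], x[n-1], ...
        rows.append([1.0] + lag + [lag[j] * lag[k] for j, k in pairs])
        targets.append(z[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    h0 = coef[0]
    h1 = coef[1:1 + memory]
    h2 = dict(zip(pairs, coef[1 + memory:]))
    return h0, h1, h2

# Synthetic channel: z[n] = 0.2 + 0.8 x[n] - 0.3 x[n-1] + 0.1 x[n]^2
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
z = 0.2 + 0.8 * x - 0.3 * np.roll(x, 1) + 0.1 * x**2
h0, h1, h2 = fit_volterra(x, z, memory=2)
```

Even this toy version shows why the kernel coefficient count explodes with order and memory length, which is the growth the NWV techniques are designed to curb.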
Various models have been proposed for describing the time- and concentration-dependence of toxic effects to aquatic organisms, which would improve characterization of risks in natural systems. Selected models were evaluated using results from a study on the lethality of copper t...
Model of Emotional Expressions in Movements
ERIC Educational Resources Information Center
Rozaliev, Vladimir L.; Orlova, Yulia A.
2013-01-01
This paper presents a new approach to automated identification of human emotions based on analysis of body movements, a recognition of gestures and poses. Methodology, models and automated system for emotion identification are considered. To characterize the person emotions in the model, body movements are described with linguistic variables and a…
EXPERIMENTAL EVALUATION OF TWO SHARP FRONT MODELS FOR VADOSE ZONE NON-AQUEOUS PHASE LIQUID TRANSPORT
Recent research efforts on the transport of immiscible organic wastes in subsurface systems have focused on the development of numerical models of various levels of sophistication, and on the site characterization data these models need. However, in real field applications, the model p...
The U.S. Environmental Protection Agency (U.S. EPA) is extending its Models-3/Community Multiscale Air Quality (CMAQ) Modeling System to provide detailed gridded air quality concentration fields and sub-grid variability characterization at neighborhood scales and in urban areas...
NASA Technical Reports Server (NTRS)
Yu, Hongbin; Remer, Lorraine A.; Kahn, Ralph A.; Chin, Mian; Zhang, Yan
2012-01-01
Evidence of aerosol intercontinental transport (ICT) is both widespread and compelling. Model simulations suggest that ICT could significantly affect regional air quality and climate, but the broad inter-model spread of results underscores a need of constraining model simulations with measurements. Satellites have inherent advantages over in situ measurements to characterize aerosol ICT, because of their spatial and temporal coverage. Significant progress in satellite remote sensing of aerosol properties during the Earth Observing System (EOS) era offers opportunity to increase quantitative characterization and estimates of aerosol ICT, beyond the capability of pre-EOS era satellites that could only qualitatively track aerosol plumes. EOS satellites also observe emission strengths and injection heights of some aerosols, aerosol precursors, and aerosol-related gases, which can help characterize aerosol ICT. After an overview of these advances, we review how the current generation of satellite measurements have been used to (1) characterize the evolution of aerosol plumes (e.g., both horizontal and vertical transport, and properties) on an episodic basis, (2) understand the seasonal and inter-annual variations of aerosol ICT and their control factors, (3) estimate the export and import fluxes of aerosols, and (4) evaluate and constrain model simulations. Substantial effort is needed to further explore an integrated approach using measurements from on-orbit satellites (e.g., A-Train synergy) for observational characterization and model constraint of aerosol intercontinental transport and to develop advanced sensors for future missions.
NASA Astrophysics Data System (ADS)
Zhang, R.; Borgia, A.; Daley, T. M.; Oldenburg, C. M.; Jung, Y.; Lee, K. J.; Doughty, C.; Altundas, B.; Chugunov, N.; Ramakrishnan, T. S.
2017-12-01
Subsurface permeable faults and fracture networks play a critical role for enhanced geothermal systems (EGS) by providing conduits for fluid flow. Characterization of the permeable flow paths before and after stimulation is necessary to evaluate and optimize energy extraction. To provide insight into the feasibility of using CO2 as a contrast agent to enhance fault characterization by seismic methods, we model seismic monitoring of supercritical CO2 (scCO2) injected into a fault. During the CO2 injection, the original brine is replaced by scCO2, which leads to variations in geophysical properties of the formation. To explore the technical feasibility of the approach, we present modeling results for different time-lapse seismic methods including surface seismic, vertical seismic profiling (VSP), and a cross-well survey. We simulate the injection and production of CO2 into a normal fault in a system based on the Brady's geothermal field and model pressure and saturation variations in the fault zone using TOUGH2-ECO2N. The simulation results provide changing fluid properties during the injection, such as saturation and salinity changes, which allow us to estimate corresponding changes in seismic properties of the fault and the formation. We model the response of the system to active seismic monitoring in time-lapse mode using an anisotropic finite difference method with modifications for fracture compliance. Results to date show that even narrow fault and fracture zones filled with CO2 can be better detected using the VSP and cross-well survey geometry, while it would be difficult to image the CO2 plume by using surface seismic methods.
Photovoltaic performance models - A report card
NASA Technical Reports Server (NTRS)
Smith, J. H.; Reiter, L. R.
1985-01-01
Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance have proliferated, keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem elements as well as system elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.
Maturity of hospital information systems: Most important influencing factors.
Vidal Carvalho, João; Rocha, Álvaro; Abreu, António
2017-07-01
Maturity models facilitate organizational management, including information systems management, and hospital organizations are no exception. This article puts forth a study carried out with a group of experts in the field of hospital information systems management with a view to identifying the main influencing factors to be included in an encompassing maturity model for hospital information systems management. The study is based on the results of a literature review, which identified maturity models in the health field and relevant influencing factors. The development of this model is justified to the extent that available maturity models for hospital information systems management reveal multiple limitations, including lack of detail, absence of tools to determine maturity, and lack of characterization of maturity stages structured by different influencing factors.
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Wagener, T.; Stewart, S.; Mahmoud, M.; Hartmann, H.; Springer, E.
2007-12-01
Some of the most challenging issues facing contemporary water resources management are those typified by complex coupled human-environmental systems with poorly characterized uncertainties. In other words, major decisions regarding water resources have to be made in the face of substantial uncertainty and complexity. It has been suggested that integrated models can be used to coherently assemble information from a broad set of domains, and can therefore serve as an effective means for tackling the complexity of environmental systems. Further, well-conceived scenarios can effectively inform decision making, particularly when high complexity and poorly characterized uncertainties make the problem intractable via traditional uncertainty analysis methods. This presentation discusses the integrated modeling framework adopted by SAHRA, an NSF Science & Technology Center, to investigate stakeholder-driven water sustainability issues within the semi-arid southwestern US. The multi-disciplinary, multi-resolution modeling framework incorporates a formal scenario approach to analyze the impacts of plausible (albeit uncertain) alternative futures to support adaptive management of water resources systems. Some of the major challenges involved in, and lessons learned from, this effort will be discussed.
The Triple Value Model: A Systems Approach to Sustainable Solutions
The unintended environmental impacts of economic development threaten the continued availability of ecosystem services that are critical to human well being. An integrated systems approach is needed to characterize sustainability problems and evaluate potential solutions. The T...
Vishwanath, Karthik; Chang, Kevin; Klein, Daniel; Deng, Yu Feng; Chang, Vivide; Phelps, Janelle E; Ramanujam, Nimmi
2011-02-01
Steady-state diffuse reflection spectroscopy is a well-studied optical technique that can provide a noninvasive and quantitative method for characterizing the absorption and scattering properties of biological tissues. Here, we compare three fiber-based diffuse reflection spectroscopy systems that were assembled to create a light-weight, portable, and robust optical spectrometer that could be easily translated for repeated and reliable use in mobile settings. The three systems were built using a broadband light source and a compact, commercially available spectrograph. We tested two different light sources and two spectrographs (manufactured by two different vendors). The assembled systems were characterized by their signal-to-noise ratios, the source-intensity drifts, and detector linearity. We quantified the performance of these instruments in extracting optical properties from diffuse reflectance spectra in tissue-mimicking liquid phantoms with well-controlled optical absorption and scattering coefficients. We show that all assembled systems were able to extract the optical absorption and scattering properties with errors less than 10%, while providing greater than ten-fold decrease in footprint and cost (relative to a previously well-characterized and widely used commercial system). Finally, we demonstrate the use of these small systems to measure optical biomarkers in vivo in a small-animal model cancer therapy study. We show that optical measurements from the simple portable system provide estimates of tumor oxygen saturation similar to those detected using the commercial system in murine tumor models of head and neck cancer.
NASA Astrophysics Data System (ADS)
Förner, K.; Polifke, W.
2017-10-01
The nonlinear acoustic behavior of Helmholtz resonators is characterized by a data-based reduced-order model, which is obtained by a combination of high-resolution CFD simulation and system identification. It is shown that even in the nonlinear regime, a linear model is capable of describing the reflection behavior at a particular amplitude with quantitative accuracy. This observation motivates the choice of a local-linear model structure for this study, which consists of a network of parallel linear submodels. A so-called fuzzy-neuron layer distributes the input signal over the linear submodels, depending on the root mean square of the particle velocity at the resonator surface. The resulting model structure is referred to as a local-linear neuro-fuzzy network. System identification techniques are used to estimate the free parameters of this model from training data. The training data are generated by CFD simulations of the resonator, with persistent acoustic excitation over a wide range of frequencies and sound pressure levels. The estimated nonlinear, reduced-order models show good agreement with CFD and experimental data over a wide range of amplitudes for several test cases.
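The local-linear neuro-fuzzy structure can be sketched as a set of parallel linear submodels blended by membership weights over the rms particle velocity. The Gaussian membership functions, centres, and width below are illustrative assumptions, not the paper's identified parameters:

```python
import numpy as np

def blend_submodels(u_rms, submodel_outputs, centers, width=0.5):
    """Normalized Gaussian membership weights over the scheduling variable
    (rms particle velocity at the resonator surface) blend the outputs of
    parallel linear submodels into one local-linear model output."""
    w = np.exp(-0.5 * ((u_rms - np.asarray(centers)) / width) ** 2)
    w = w / w.sum()                       # fuzzy-neuron layer: weights sum to 1
    return float(np.dot(w, np.asarray(submodel_outputs)))
```

At an excitation amplitude near one submodel's centre, that submodel dominates; between centres, the output interpolates smoothly, which is how a bank of linear models captures amplitude-dependent (nonlinear) reflection behavior.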
Characterization of 8-cm engineering model thruster
NASA Technical Reports Server (NTRS)
Williamson, W. S.
1984-01-01
Development of 8-cm ion thruster technology conducted in support of the Ion Auxiliary Propulsion System (IAPS) flight contract (Contract NAS3-21055) is discussed. The work included characterization of thruster performance, stability, and control; a study of the effects of cathode aging; environmental qualification testing; and cyclic life testing of especially critical thruster components.
Improving Planetary Rover Attitude Estimation via MEMS Sensor Characterization
Hidalgo, Javier; Poulakis, Pantelis; Köhler, Johan; Del-Cerro, Jaime; Barrientos, Antonio
2012-01-01
Micro Electro-Mechanical Systems (MEMS) are currently being considered in the space sector due to their suitable level of performance for spacecraft in terms of mechanical robustness with low power consumption, small mass and size, and significant advantage in system design and accommodation. However, there is still a lack of understanding regarding the performance and testing of these new sensors, especially in planetary robotics. This paper presents what is missing in the field: a complete methodology for the characterization and modeling of MEMS sensors with direct application. A reproducible and complete approach including all the intermediate steps, tools and laboratory equipment is described. The process of sensor error characterization and modeling through to the final integration in the sensor fusion scheme is explained in detail. Although the concept of fusion is relatively easy to comprehend, carefully characterizing and filtering sensor information is not an easy task and is essential for good performance. The strength of the approach has been verified with representative tests of novel high-grade MEMS inertial sensors and exemplary planetary rover platforms with promising results. PMID:22438761
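A standard tool in MEMS inertial sensor error characterization is the Allan deviation, which separates noise processes by averaging time. The abstract does not name the paper's exact procedure, so this is an illustrative assumption rather than the authors' method:

```python
import numpy as np

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation at cluster size m: average the signal
    in blocks of m samples and examine successive block-mean differences."""
    n = len(samples) // m
    means = samples[: n * m].reshape(n, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# For white (angle/velocity random walk) noise of unit variance, the Allan
# deviation falls as 1/sqrt(m); a flattening at large m would reveal bias
# instability instead.
rng = np.random.default_rng(42)
noise = rng.standard_normal(100_000)
adev_1 = allan_deviation(noise, 1)
adev_100 = allan_deviation(noise, 100)   # expected near 1/sqrt(100) = 0.1
```

Plotting the deviation against cluster time on log-log axes is the usual way to read off the noise coefficients that feed a sensor fusion filter's process-noise model.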
A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative researches of three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problem caused by the temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which were characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment to integrate different molecule modeling systems using a virtual tunneling service. We integrated Co-Coot, which is a collaborative crystallographic object-oriented toolkit, with VRMMS, which is a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results through pilot experiments. PMID:22927721
NASA Astrophysics Data System (ADS)
Wang, Chao; Durney, Krista M.; Fomovsky, Gregory; Ateshian, Gerard A.; Vukelic, Sinisa
2016-03-01
The onset of osteoarthritis (OA) in articular cartilage is characterized by degradation of extracellular matrix (ECM). Specifically, breakage of cross-links between collagen fibrils in the articular cartilage leads to loss of structural integrity of the bulk tissue. Since there are no broadly accepted, non-invasive, label-free tools for diagnosing OA at its early stage, Raman spectroscopy is proposed in this work as a novel, non-destructive diagnostic tool. In this study, collagen thin films were employed to act as a simplified model system of the cartilage collagen extracellular matrix. Cross-link formation was controlled via exposure to glutaraldehyde (GA), by varying exposure time and concentration levels, and Raman spectral information was collected to quantitatively characterize the cross-link assignments imparted to the collagen thin films during treatment. A novel, quantitative method was developed to analyze the Raman signal obtained from collagen thin films. Segments of Raman signal were decomposed and modeled as the sum of individual bands, providing an optimization function for subsequent curve fitting against experimental findings. Relative changes in the concentration of the GA-induced pyridinium cross-links were extracted from the model as a function of the exposure to GA. Spatially resolved characterization enabled construction of spectral maps of the collagen thin films, which provided detailed information about the variation of cross-link formation at various locations on the specimen. Results showed that Raman spectral data correlate with glutaraldehyde treatment and therefore may be used as a proxy by which to measure loss of collagen cross-links in vivo. This study proposes a promising system for identifying the onset of OA and may enable early intervention treatments that serve to slow or prevent osteoarthritis progression.
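The decomposition of a spectral segment into a sum of individual bands can be sketched with fixed band centres and widths, which reduces the fit to linear least squares on the band amplitudes. The band positions, widths, and amplitudes below are hypothetical stand-ins, not the study's actual collagen or pyridinium assignments:

```python
import numpy as np

def fit_band_amplitudes(x, y, centers, widths):
    """Least-squares amplitudes for a segment modeled as a sum of Gaussian
    bands with fixed positions and widths; amplitude-times-width tracks the
    relative concentration of the species behind each band."""
    basis = np.column_stack([
        np.exp(-0.5 * ((x - c) / w) ** 2) for c, w in zip(centers, widths)
    ])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return amps

# Synthetic segment: two overlapping bands plus noise (wavenumbers in cm^-1
# are illustrative)
x = np.linspace(980.0, 1080.0, 400)
centers, widths = [1003.0, 1032.0], [6.0, 8.0]
true_amps = [1.0, 0.6]
rng = np.random.default_rng(7)
y = sum(a * np.exp(-0.5 * ((x - c) / w) ** 2)
        for a, c, w in zip(true_amps, centers, widths))
y = y + rng.normal(0.0, 0.01, x.size)
amps = fit_band_amplitudes(x, y, centers, widths)
```

In practice the band positions and widths would themselves be refined by nonlinear optimization, as the abstract's curve-fitting step implies; fixing them keeps this sketch linear and self-checking.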
Ma, Dandan; Ren, Haisheng; Ma, Jianyi
2018-02-14
Full-dimensional quantum mechanics calculations were performed to determine the vibrational energy levels of HOCO and DOCO based on an accurate potential energy surface. Almost all of the vibrational energy levels up to 3500 cm-1 above the vibrational ground state were assigned, and the calculated energy levels in this work agree well with the results reported by Bowman. The corresponding full-dimensional wavefunctions present some special features. When the energy level approaches the barrier height, the trans-HOCO and cis-HOCO states couple strongly through tunneling interactions, and tunneling interaction and Fermi resonance were observed in the DOCO system. The energy level patterns of trans-HOCO, cis-HOCO and trans-DOCO provide a reasonable fitted barrier height using the fitting formula of Field et al.; however, a discrepancy exists for the cis-DOCO species, which is considered a random event. Our full-dimensional calculations give positive evidence for the accuracy of the spectroscopic characterization model of the isomerization transition state reported by Field et al., which was developed from one-dimensional model systems. Furthermore, the special case of cis-DOCO in this work means that isotopic substitution can address the problem of the accidental failure of Field's spectroscopic characterization model.
NASA Astrophysics Data System (ADS)
Ivanov, Rossen I.; Prodanov, Emil M.
2018-01-01
The cosmological dynamics of a quintessence model based on real gas with general equation of state is presented within the framework of a three-dimensional dynamical system describing the time evolution of the number density, the Hubble parameter and the temperature. Two global first integrals are found and examples for gas with virial expansion and van der Waals gas are presented. The van der Waals system is completely integrable. In addition to the unbounded trajectories, stemming from the presence of the conserved quantities, stable periodic solutions (closed orbits) also exist under certain conditions and these represent models of a cyclic Universe. The cyclic solutions exhibit regions characterized by inflation and deflation, while the open trajectories are characterized by inflation in a “fly-by” near an unstable critical point.
Integrated spatiotemporal characterization of dust sources and outbreaks in Central and East Asia
NASA Astrophysics Data System (ADS)
Darmenova, Kremena T.
The potential of atmospheric dust aerosols to modify the Earth's environment and climate has been recognized for some time. However, predicting the diverse impact of dust has several significant challenges. One is to quantify the complex spatial and temporal variability of dust burden in the atmosphere. Another is to quantify the fraction of dust originating from human-made sources. This thesis focuses on the spatiotemporal characterization of sources and dust outbreaks in Central and East Asia by integrating ground-based data, satellite multisensor observations, and modeling. A new regional dust modeling system capable of operating over a span of scales was developed. The modeling system consists of a dust module DuMo, which incorporates several dust emission schemes of different complexity, and the PSU/NCAR mesoscale model MM5, which offers a variety of physical parameterizations and flexible nesting capability. The modeling system was used to perform for the first time a comprehensive study of the timing, duration, and intensity of individual dust events in Central and East Asia. Determining the uncertainties caused by the choice of model physics, especially the boundary layer parameterization, and the dust production scheme was the focus of our study. Implications to assessments of the anthropogenic dust fraction in these regions were also addressed. Focusing on Spring 2001, an analysis of routine surface meteorological observations and satellite multi-sensor data was carried out in conjunction with modeling to determine the extent to which integrated data set can be used to characterize the spatiotemporal distribution of dust plumes at a range of temporal scales, addressing the active dust sources in China and Mongolia, mid-range transport and trans-Pacific, long-range transport of dust outbreaks on a case-by-case basis. 
This work demonstrates that adequate and consistent characterization of individual dust events is central to establishing a reliable climatology, ultimately leading to improved assessments of dust impacts on the environment and climate. This will also help to identify the appropriate temporal and spatial scales for adequate intercomparison between model results and observational data as well as for developing an integrated analysis methodology for dust studies.
Sun, Wenjun; Liu, Wenjun; Cui, Lifeng; Zhang, Minglu; Wang, Bei
2013-08-01
This study describes the identification and characterization of a new chlorine-resistant bacterium, Sphingomonas TS001, isolated from a model drinking water distribution system. The isolate was identified by 16s rRNA gene analysis and morphological and physiological characteristics. Phylogenetic analysis indicates that TS001 belongs to the genus Sphingomonas. The model distribution system HPC results showed that, when the chlorine residual was greater than 0.7 mg L(-1), 100% of the detected heterotrophic bacteria (HPC) were TS001. Bench-scale inactivation efficiency testing showed that this strain was very resistant to chlorine: 4 mg L(-1) of chlorine with a 240 min retention time provided only approximately 5% viability reduction of TS001. In contrast, a 3-log inactivation (99.9%) was obtained for a UV fluence of 40 mJ cm(-2). A highly chlorine-resistant and UV-sensitive bacterium, Sphingomonas TS001, was documented for the first time. Copyright © 2013 Elsevier B.V. All rights reserved.
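The contrast in doses reported above can be made concrete with a back-of-the-envelope calculation (assuming simple first-order, Chick-Watson-style inactivation kinetics, which is an illustrative simplification rather than the study's model):

```python
import math

def log_inactivation(surviving_fraction):
    """Log10 reduction, e.g. a surviving fraction of 0.001 is a 3-log kill."""
    return -math.log10(surviving_fraction)

# UV: 3-log (99.9%) inactivation at a fluence of 40 mJ/cm^2.
uv_logs = log_inactivation(1 - 0.999)          # 3.0
uv_rate = uv_logs / 40.0                       # ~0.075 log per mJ/cm^2

# Chlorine: only ~5% viability reduction at 4 mg/L over 240 min.
ct = 4.0 * 240.0                               # CT dose, mg*min/L
cl_logs = log_inactivation(1 - 0.05)           # ~0.022 log
cl_rate = cl_logs / ct                         # ~2.3e-5 log per mg*min/L
```

Under this simplification, reaching even a 3-log chlorine kill of TS001 would require a CT dose of roughly `3 / cl_rate`, on the order of 10^5 mg·min/L, which illustrates why the strain is described as chlorine resistant yet UV sensitive.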
NASA Astrophysics Data System (ADS)
Terban, Maxwell W.
Nanoscale structural characterization is critical to understanding the physical underpinnings of properties and behavior in materials with technological applications. The work herein shows how the pair distribution function technique can be applied to x-ray total scattering data for material systems which weakly scatter x-rays, a typically difficult task due to the poor signal-to-noise obtained from the structures of interest. Characterization and structural modeling are demonstrated for a variety of molecular and porous systems, along with the detection and characterization of disordered, minority phases and components. In particular, reliable detection and quantitative analysis are demonstrated for nanocrystals of an active pharmaceutical ingredient suspended in dilute solution down to a concentration of 0.25 wt. %, giving a practical limit of detection for ordered nanoscale phases within a disordered matrix. Further work shows that minority nanocrystalline phases can be detected, fingerprinted, and modeled for mixed crystalline and amorphous systems of small molecules and polymers. The crystallization of amorphous lactose is followed under accelerated aging conditions. Melt quenching is shown to produce a different local structure than spray drying or freeze drying, along with increased resistance to crystallization. The initial phases which form in the spray dried formulation are identified as a mixture of polymorphs different from the final alpha-lactose monohydrate form. Hard domain formation in thermoplastic polyurethanes is also characterized as a function of methylene diphenyl diisocyanate and butanediol component ratio, showing that distinct hard phase structures can form, which are solved by indexing with structures derived from molecular dynamics relaxation. In both cases, phase fractions can be quantified in the mixed crystalline and amorphous systems by fitting with either standards or structure models.
Later chapters demonstrate pair distribution function characterization of particle incorporation, structure, and synthesis of nanoporous materials. Nanoparticle size distributions are extracted from platinum nanoparticles nucleating within a zeolite matrix through structural modeling, and validated by transmission electron microscope studies. The structure of a zirconium phosphonate-phosphate unconventional metal-organic framework is determined to consist of turbostratically disordered nanocrystalline layers of Zr-phenylphosphonate, and the local environment of terbium intercalated between the layers is found to resemble the local environment in scheelite-type terbium phosphate. Finally, the early stages of reaction between aqueous zinc dinitrate hexahydrate and methanolic 2-methylimidazole are characterized using in situ total scattering measurements, showing that secondary building units of zinc tetrahedrally coordinated by 2-methylimidazole initially form upon reaction. Overall, the methodologies are developed and applied toward phase detection, identification, solution, and behavior in pharmaceuticals, polymers, and nanoporous materials, along with advice for carrying out experiments and analysis on such materials so that the methods can be extended to other, similar systems.
NASA Astrophysics Data System (ADS)
Dwivedi, Anurag
2014-06-01
The motivation for this work comes from a desire to improve resilience of mission-critical cyber-enabled systems, including those used in critical infrastructure domains such as cyber, power, water, fuel, financial, healthcare, agriculture, and manufacturing. Resilience can be defined as the ability of a system to persistently meet its performance requirements despite the occurrence of adverse events. Characterizing the resilience of a system requires a clear definition of the performance requirements of the system of interest and an ability to quantify the impact on performance by the adverse events of concern. A quantitative characterization of system resilience allows the resilience requirements to be included in the system design criteria. Resilience requirements of a system are derived from the service level agreements (SLAs), measures of effectiveness (MOEs), and measures of performance (MOPs) of the services or missions supported by the system. This paper describes a methodology for designing resilient systems. The components of the methodology include resilience characterization for threat models associated with various exposure modes, requirements mapping, subsystem ranking based on criticality, and selective implementation of mitigations to improve system resilience to a desired level.
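As a minimal sketch of what quantitative resilience characterization can mean (the metric, performance trace, and requirement below are illustrative assumptions, not the paper's formulation), one can score the fraction of required performance actually delivered over a mission timeline containing an adverse event and recovery:

```python
def resilience(performance, requirement):
    """Fraction of required service delivered over a mission timeline.

    `performance` is a time series of delivered capability; dips below the
    requirement (e.g. after an adverse event) reduce the resilience score.
    """
    delivered = sum(min(p, requirement) for p in performance)
    return delivered / (requirement * len(performance))

# Hypothetical trace: nominal service, a fault at t=3, recovery by t=7.
trace = [100, 100, 100, 40, 40, 60, 80, 100, 100, 100]
score = resilience(trace, requirement=100)   # 0.82
```

A score of 1.0 means the requirement was met throughout; here the dip from the fault at t=3 through recovery costs 18% of the required service, which is the kind of quantity a design criterion can bound.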
Whitney, John W.; O'Leary, Dennis W.
1993-01-01
Tectonic characterization of a potential high-level nuclear waste repository at Yucca Mountain, Nevada, is needed to assess seismic and possible volcanic hazards that could affect the site during the preclosure (next 100 years) and the behavior of the hydrologic system during the postclosure (the following 10,000 years) periods. Tectonic characterization is based on assembling mapped geological structures in their chronological order of development and activity, and interpreting their dynamic interrelationships. Addition of mechanistic models and kinematic explanations for the identified tectonic processes provides one or more tectonic models having predictive power. Proper evaluation and application of tectonic models can aid in seismic design and help anticipate probable occurrence of future geologic events of significance to the repository and its design.
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point where autonomous systems can face uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation after faults occur, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
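The deficiency-determination step can be sketched as a set comparison between the functions supported by the nominal system model and those still supported after diagnosis (the component and function names below are hypothetical, not from the paper):

```python
# Each function is available only if all components it depends on are healthy.
FUNCTIONS = {
    "attitude_control": {"gyro", "thruster"},
    "telemetry":        {"radio", "battery"},
    "imaging":          {"camera", "battery"},
}

def functional_deficiencies(faulty_components):
    """Functions lost given a set of components reported faulty by diagnosis."""
    return {f for f, deps in FUNCTIONS.items() if deps & faulty_components}

# A diagnosis reporting a battery fault disables every dependent function.
lost = functional_deficiencies({"battery"})   # {'telemetry', 'imaging'}
```

In the paper's setting the diagnosis is an uncertain state estimate, so this set would be computed per fault hypothesis; the reconfiguration goals are then derived from the lost functions in prioritized order.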
Multiscale dispersion-state characterization of nanocomposites using optical coherence tomography
Schneider, Simon; Eppler, Florian; Weber, Marco; Olowojoba, Ganiu; Weiss, Patrick; Hübner, Christof; Mikonsaari, Irma; Freude, Wolfgang; Koos, Christian
2016-01-01
Nanocomposite materials represent a success story of nanotechnology. However, development of nanomaterial fabrication still suffers from the lack of adequate analysis tools. In particular, achieving and maintaining well-dispersed particle distributions is a key challenge, both in material development and industrial production. Conventional methods like optical or electron microscopy need laborious, costly sample preparation and do not permit fast extraction of nanoscale structural information from statistically relevant sample volumes. Here we show that optical coherence tomography (OCT) represents a versatile tool for nanomaterial characterization, both in a laboratory and in a production environment. The technique does not require sample preparation and is applicable to a wide range of solid and liquid material systems. Large particle agglomerates can be directly found by OCT imaging, whereas dispersed nanoparticles are detected by model-based analysis of depth-dependent backscattering. Using a model system of polystyrene nanoparticles, we demonstrate nanoparticle sizing with high accuracy. We further prove the viability of the approach by characterizing highly relevant material systems based on nanoclays or carbon nanotubes. The technique is perfectly suited for in-line metrology in a production environment, which is demonstrated using a state-of-the-art compounding extruder. These experiments represent the first demonstration of multiscale nanomaterial characterization using OCT. PMID:27557544
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
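The effect can be demonstrated with the textbook non-Lipschitz equation dx/dt = x^(1/3) (an illustrative example, not taken from the paper). At the equilibrium x = 0 the right-hand side is not Lipschitz, so the solution is non-unique: the system may stay at rest forever, or leave the origin in finite time along x(t) = (2t/3)^(3/2):

```python
def euler(x0, t_end, dt=1e-4):
    """Forward-Euler integration of dx/dt = x**(1/3) for x >= 0."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * x ** (1.0 / 3.0)
        t += dt
    return x

resting = euler(0.0, 1.0)       # the x(t) = 0 branch: stays at rest
escaping = euler(1e-12, 1.0)    # a vanishing perturbation still escapes
exact = (2.0 / 3.0) ** 1.5      # escaping branch at t = 1, ~0.544
```

Two initial conditions separated by only 10^-12 end up about 0.54 apart at t = 1, a separation rate that no finite Lipschitz constant, and hence no bounded Liapunov exponent, can capture.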
NASA Astrophysics Data System (ADS)
Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael
2013-05-01
This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods also can measure the conformity of this non-traditional data with fusion system products including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and the demonstration of context assessment of non-traditional data for comparison against an intelligence, surveillance, and reconnaissance fusion product based upon an IED POI workflow.
NASA Astrophysics Data System (ADS)
Various papers on photovoltaics are presented. The general topics considered include: amorphous materials and cells; amorphous silicon-based solar cells and modules; amorphous silicon-based materials and processes; amorphous materials characterization; amorphous silicon; high-efficiency single crystal solar cells; multijunction and heterojunction cells; high-efficiency III-V cells; modeling and characterization of high-efficiency cells; LIPS flight experience; space mission requirements and technology; advanced space solar cell technology; space environmental effects and modeling; space solar cell and array technology; terrestrial systems and array technology; terrestrial utility and stand-alone applications and testing; terrestrial concentrator and storage technology; terrestrial stand-alone systems applications; terrestrial systems test and evaluation; terrestrial flatplate and concentrator technology; use of polycrystalline materials; polycrystalline II-VI compound solar cells; analysis of and fabrication procedures for compound solar cells.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
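A semi-Markov reward model of the kind described can be sketched as follows (the three states, holding times, and reward rates are invented for illustration; in the paper they are estimated from measured data). The stationary distribution of the embedded chain, weighted by the mean holding times, gives the long-run expected reward rate:

```python
# Hypothetical three-state semi-Markov performability model.
P = {                            # embedded-chain transition probabilities
    "normal":   {"degraded": 0.2, "error": 0.8},
    "degraded": {"normal": 1.0},
    "error":    {"normal": 1.0},
}
hold = {"normal": 10.0, "degraded": 4.0, "error": 1.0}    # mean holding times
reward = {"normal": 1.0, "degraded": 0.5, "error": 0.0}   # service rate per state

def steady_reward(P, hold, reward, iters=200):
    """Long-run expected reward rate of a semi-Markov reward model."""
    pi = {s: 1.0 / len(P) for s in P}
    for _ in range(iters):
        nxt = {s: 0.0 for s in P}
        for s, probs in P.items():
            for t, p in probs.items():
                nxt[t] += pi[s] * p
        # Damping handles periodic embedded chains without moving the fixed point.
        pi = {s: 0.5 * pi[s] + 0.5 * nxt[s] for s in P}
    # Weight visit frequencies by the (non-exponential) mean holding times.
    total = sum(pi[s] * hold[s] for s in P)
    return sum(pi[s] * hold[s] * reward[s] for s in P) / total
```

With these numbers the embedded chain visits normal/degraded/error with frequencies 0.5/0.1/0.4, and the time-weighted reward rate is (5·1 + 0.4·0.5) / 5.8 ≈ 0.897: the system delivers about 90% of full service despite frequent short error states.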
Containerless processing of undercooled melts
NASA Technical Reports Server (NTRS)
Perepezko, J. H.
1993-01-01
The investigation focused on the control of microstructural evolution in Mn-Al, Fe-Ni, Ni-V, and Au-Pb-Sb alloys through the high undercooling levels provided by containerless processing, and provided fundamental new information on the control of nucleation. Solidification analysis was conducted by means of thermal analysis, x-ray diffraction, and metallographic characterization on samples processed in a laboratory scale drop tube system. The Mn-Al alloy system offers a useful model system with the capability of phase separation on an individual particle basis, thus permitting a more complete understanding of the operative kinetics and the key containerless processing variables. This system provided the opportunity of analyzing the nucleation rate as a function of processing conditions and allowed for the quantitative assessment of the relevant processing parameters. These factors are essential in the development of a containerless processing model which has a predictive capability. Similarly, Ni-V is a model system that was used to study duplex partitionless solidification, a structure possible only in high-undercooling solidification processes. Nucleation kinetics for the competing bcc and fcc phases were studied to determine how this structure can develop and the conditions under which it may occur. The Fe-Ni alloy system was studied to identify microstructural transitions with controlled variations in sample size and composition during containerless solidification. This work was directed toward developing a microstructure map which delineates regimes of structural evolution and provides a unified analysis of experimental observations. The Au-Pb-Sb system was investigated to characterize the thermodynamic properties of the undercooled liquid phase and to characterize the glass transition under a variety of processing conditions.
By analyzing key containerless processing parameters in a ground based drop tube study, a carefully designed flight experiment may be planned to utilize the extended duration microgravity conditions of orbiting spacecraft.
ECO-ENERGY DEMONSTRATION MODEL: ANAEROBIC DIGESTION, ALGAE AND ENERGY PROSPERITY
2005-06-01
A Method for Assessing the Retention of Trace Elements in Human Body Using Neural Network Technology
Ragimov, Aligejdar; Faizullin, Rashat; Valiev, Vsevolod
2017-01-01
Models that describe the trace element status formation in the human organism are essential for a correction of micromineral (trace element) deficiency. A direct trace element retention assessment in the body is difficult due to the many internal mechanisms. The trace element retention is determined by the amount and the ratio of incoming and excreted substance. Thus, the concentration of trace elements in drinking water characterizes the intake, whereas the element concentration in urine characterizes the excretion. This system can be interpreted as three interrelated elements that are in equilibrium. Since many relationships in the system are not known, the use of standard mathematical models is difficult. An artificial neural network is well suited for constructing such a model because it can implicitly take into account all dependencies in the system and can process inaccurate and incomplete data. We created several neural network models to describe the retention of trace elements in the human body. On the basis of these models, we can calculate the microelement levels in the body, knowing the trace element levels in drinking water and urine. These results can be used in health care to provide the population with safe drinking water. PMID:29065586
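A minimal version of such a model can be sketched as a one-hidden-layer network trained by stochastic gradient descent; the synthetic data and the assumed intake-minus-excretion relationship below are illustrative stand-ins for the paper's clinical data:

```python
import math, random

random.seed(0)

# Toy ground truth: retention grows with intake (water) and falls with excretion (urine).
def true_retention(water, urine):
    return max(0.0, 0.8 * water - 0.6 * urine)

data = [(random.random(), random.random()) for _ in range(200)]
targets = [true_retention(w, u) for w, u in data]

H = 8                                     # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
    return h, sum(wo * hj for wo, hj in zip(w2, h)) + b2

lr = 0.05
for _ in range(200):                      # epochs of plain SGD
    for x, y in zip(data, targets):
        h, pred = forward(x)
        err = pred - y                    # d(loss)/d(pred) for 0.5*err^2
        for j in range(H):
            grad_z = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_z
            w1[j][0] -= lr * grad_z * x[0]
            w1[j][1] -= lr * grad_z * x[1]
        b2 -= lr * err

mse = sum((forward(x)[1] - y) ** 2 for x, y in zip(data, targets)) / len(data)
```

After training, the network reproduces the assumed retention surface well enough to rank exposure scenarios: a high-intake/low-excretion case scores higher than the reverse, which is the qualitative behavior the paper exploits.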
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatt, Uma S.; Wackerbauer, Renate; Polyakov, Igor V.
The goal of this research was to apply fractional and non-linear analysis techniques in order to develop a more complete characterization of climate change and variability for the oceanic, sea ice and atmospheric components of the Earth System. This research applied two measures of dynamical characteristics of time series, the R/S method of calculating the Hurst exponent and Renyi entropy, to observational and modeled climate data in order to evaluate how well climate models capture the long-term dynamics evident in observations. Fractional diffusion analysis was applied to ARGO ocean buoy data to quantify ocean transport. Self-organized maps were applied to North Pacific sea level pressure and analyzed in ways to improve seasonal predictability for Alaska fire weather. This body of research shows that these methods can be used to evaluate climate models and shed light on climate mechanisms (i.e., understanding why something happens). With further research, these methods show promise for improving seasonal to longer time scale forecasts of climate.
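The R/S estimate of the Hurst exponent mentioned above can be sketched in a few lines (a textbook implementation, not the project's code): partition the series into windows, compute the rescaled range R/S in each, and regress log(R/S) on log(window size); the slope is H:

```python
import math, random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs_list = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            cum, lo, hi, var = 0.0, 0.0, 0.0, 0.0
            for x in chunk:
                cum += x - mean          # cumulative deviation from the mean
                lo = min(lo, cum)
                hi = max(hi, cum)
                var += (x - mean) ** 2
            std = math.sqrt(var / size)
            if std > 0.0:
                rs_list.append((hi - lo) / std)   # rescaled range R/S
        log_sizes.append(math.log(size))
        log_rs.append(math.log(sum(rs_list) / len(rs_list)))
        size *= 2
    # Least-squares slope of log(R/S) vs log(window size) is the Hurst H.
    m = len(log_sizes)
    sx, sy = sum(log_sizes), sum(log_rs)
    sxx = sum(u * u for u in log_sizes)
    sxy = sum(u * v for u, v in zip(log_sizes, log_rs))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_rs(noise)   # uncorrelated noise gives H near 0.5
```

For uncorrelated noise H is near 0.5, while persistent (long-memory) climate records give H > 0.5, which is what makes the exponent useful for comparing model output against observations.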
General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations
NASA Technical Reports Server (NTRS)
Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark
2010-01-01
Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
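The core IMS idea, learning a characterization of nominal behavior from archived data and flagging departures from it, can be illustrated with a deliberately simplified sketch (IMS itself clusters nominal data vectors; the grid-cell characterization and two-sensor readings here are stand-ins):

```python
import random

random.seed(2)

# Archived nominal data: two correlated sensor readings around (5, 5).
nominal = [(5 + random.gauss(0, 0.3), 5 + random.gauss(0, 0.3)) for _ in range(500)]

def build_monitor(train, cell=0.5):
    """Characterize normal behavior as the set of occupied grid cells."""
    return {(round(x / cell), round(y / cell)) for x, y in train}, cell

def is_anomalous(monitor, reading):
    cells, cell = monitor
    key = (round(reading[0] / cell), round(reading[1] / cell))
    return key not in cells

monitor = build_monitor(nominal)
ok = is_anomalous(monitor, (5.1, 4.9))    # False: matches nominal behavior
bad = is_anomalous(monitor, (9.0, 1.0))   # True: unlike anything seen
```

Real systems such as IMS use the distance to the nearest nominal cluster rather than exact cell membership, which yields a graded anomaly score instead of a binary flag.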
Characterization and Calibration of the 12-m Antenna in Warkworth, New Zealand
NASA Technical Reports Server (NTRS)
Gulyaev, Sergei; Natusch, Tim; Wilson, David
2010-01-01
The New Zealand 12-m antenna is scheduled to start participating in regular IVS VLBI sessions from the middle of 2010. Characterization procedures and results of calibration of the New Zealand 12-m radio telescope are presented, including the main reflector surface accuracy measurement, pointing model creation, and the system equivalent flux density (SEFD) determination in both S and X bands. Important issues of network connectivity, co-located geodetic systems, and the use of the antenna in education are also discussed.
Experimental characterization of a quantum many-body system via higher-order correlations.
Schweigler, Thomas; Kasper, Valentin; Erne, Sebastian; Mazets, Igor; Rauer, Bernhard; Cataldini, Federica; Langen, Tim; Gasenzer, Thomas; Berges, Jürgen; Schmiedmayer, Jörg
2017-05-17
Quantum systems can be characterized by their correlations. Higher-order (larger than second order) correlations, and the ways in which they can be decomposed into correlations of lower order, provide important information about the system, its structure, its interactions and its complexity. The measurement of such correlation functions is therefore an essential tool for reading, verifying and characterizing quantum simulations. Although higher-order correlation functions are frequently used in theoretical calculations, so far mainly correlations up to second order have been studied experimentally. Here we study a pair of tunnel-coupled one-dimensional atomic superfluids and characterize the corresponding quantum many-body problem by measuring correlation functions. We extract phase correlation functions up to tenth order from interference patterns and analyse whether, and under what conditions, these functions factorize into correlations of lower order. This analysis characterizes the essential features of our system, the relevant quasiparticles, their interactions and topologically distinct vacua. From our data we conclude that in thermal equilibrium our system can be seen as a quantum simulator of the sine-Gordon model, relevant for diverse disciplines ranging from particle physics to condensed matter. The measurement and evaluation of higher-order correlation functions can easily be generalized to other systems and to study correlations of any other observable such as density, spin and magnetization. It therefore represents a general method for analysing quantum many-body systems from experimental data.
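The factorization test at the heart of this analysis has a simple classical analogue (a toy numerical check, not the experiment's procedure): for Gaussian fluctuations, Wick's theorem reduces the fourth-order correlation to products of second-order ones, so the connected fourth-order part vanishes:

```python
import random

random.seed(3)
xs = [random.gauss(0.0, 1.0) for _ in range(200000)]

def moment(data, k):
    return sum(x ** k for x in data) / len(data)

m2 = moment(xs, 2)
m4 = moment(xs, 4)
# Wick/Isserlis for a zero-mean Gaussian: <x^4> = 3 <x^2>^2, so the
# "connected" fourth-order part m4 - 3*m2**2 should vanish.
connected = m4 - 3 * m2 ** 2
```

A significantly nonzero connected part would signal non-Gaussian statistics, i.e. interacting quasiparticles; in the experiment the same decomposition is applied to measured phase correlations up to tenth order.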
Scientific and educational recommender systems
NASA Astrophysics Data System (ADS)
Guseva, A. I.; Kireev, V. S.; Bochkarev, P. V.; Kuznetsov, I. A.; Philippov, S. A.
2017-01-01
This article discusses questions associated with the use of recommender systems in the training of graduates in physics-related fields. The objective of this research is the creation of a user model for recommender systems in the sphere of science and education. A detailed review is given of current scientific and social networks for scientists and of the problem of constructing recommender systems in this area. The result of this study is a user information model for such systems. The model is presented in two versions: a full one, in the form of a semantic network, and a short one, in relational form. The relational model is a projection of the semantic network, taking into account restrictions on the number of links that characterize the number of information items (research results) with which the system user interacts.
NASA Technical Reports Server (NTRS)
Hutto, Clayton; Briscoe, Erica; Trewhitt, Ethan
2012-01-01
Societal level macro models of social behavior do not sufficiently capture nuances needed to adequately represent the dynamics of person-to-person interactions. Likewise, individual agent level micro models have limited scalability - even minute parameter changes can drastically affect a model's response characteristics. This work presents an approach that uses agent-based modeling to represent detailed intra- and inter-personal interactions, as well as a system dynamics model to integrate societal-level influences via reciprocating functions. A Cognitive Network Model (CNM) is proposed as a method of quantitatively characterizing cognitive mechanisms at the intra-individual level. To capture the rich dynamics of interpersonal communication for the propagation of beliefs and attitudes, a Socio-Cognitive Network Model (SCNM) is presented. The SCNM uses socio-cognitive tie strength to regulate how agents influence--and are influenced by--one another's beliefs during social interactions. We then present experimental results which support the use of this network analytical approach, and we discuss its applicability towards characterizing and understanding human information processing.
Nitrogen cycling models and their application to forest harvesting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, D.W.; Dale, V.H.
1986-01-01
The characterization of forest nitrogen- (N-) cycling processes by several N-cycling models (FORCYTE, NITCOMP, FORTNITE, and LINKAGES) is briefly reviewed and evaluated against current knowledge of N cycling in forests. Some important processes (e.g., translocation within trees, N dynamics in decaying leaf litter) appear to be well characterized, whereas others (e.g., N mineralization from soil organic matter, N fixation, N dynamics in decaying wood, nitrification, and nitrate leaching) are poorly characterized, primarily because of a lack of knowledge rather than an oversight by model developers. It is remarkable how well the forest models do work in the absence of data on some key processes. For those systems in which the poorly understood processes could cause major changes in N availability or productivity, the accuracy of model predictions should be examined. However, the development of N-cycling models represents a major step beyond the much simpler, classic conceptual models of forest nutrient cycling developed by early investigators. The new generation of computer models will surely improve as research reveals how key nutrient-cycling processes operate.
Sleeping of a Complex Brain Networks with Hierarchical Organization
NASA Astrophysics Data System (ADS)
Zhang, Ying-Yue; Yang, Qiu-Ying; Chen, Tian-Lun
2009-01-01
The dynamical behavior in the cortical brain network of macaque is studied by modeling each cortical area with a subnetwork of interacting excitable neurons. We characterize the system by studying how the transition, which is topology-dependent, is performed from the active state to one with no activity. This could be a naive model for the wakening and sleeping of a brain-like system, i.e., a multi-component system with two different dynamical behaviors.
Marín, Víctor H; Delgado, Luisa E; Bachmann, Pamela
2008-09-01
The use of brainstorming techniques for the generation of conceptual models, as the basis for the integrated management of physical-ecological-social systems (PHES-systems) is tested and discussed. The methodology is applied in the analysis of the Aysén fjord and watershed (Southern Chilean Coast). Results show that the proposed methods can be adequately used in management scenarios characterized by highly hierarchical, experts/non-experts membership.
Thermal barrier coating life-prediction model development
NASA Technical Reports Server (NTRS)
Strangman, T. E.; Neumann, J.; Liu, A.
1986-01-01
The program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant thermal barrier coating (TBC) systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or an argon shrouded plasma-spray (ASPS) applied oxidation-resistant NiCrAlY (or CoNiCrAlY) bond coating and an air-plasma-sprayed yttria partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal. The second year of the program was focused on specimen procurement, TBC system characterization, nondestructive evaluation methods, life prediction model development, and TFE731 engine testing of thermal barrier coated blades. Materials testing is approaching completion. Thermomechanical characterization of the TBC systems, including toughness and spalling strain tests, was completed. Thermochemical testing is approximately two-thirds complete. Preliminary materials life models for the bond coating oxidation and zirconia sintering failure modes were developed. Integration of these life models with airfoil component analysis methods is in progress. Testing of high pressure turbine blades coated with the program TBC systems is in progress in a TFE731 turbofan engine. Eddy current technology feasibility was established with respect to nondestructively measuring the zirconia layer thickness of a TBC system.
Power-law modeling based on least-squares minimization criteria.
Hernández-Bermejo, B; Fairén, V; Sorribas, A
1999-10-01
The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-system models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, their parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law over a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the steady state predicted by the least-squares power-law is closer to the actual steady state of the target system.
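The contrast between the classical (Taylor) power-law and the least-squares alternative can be sketched numerically. The snippet below is a minimal illustration using a hypothetical Michaelis-Menten rate law; the parameter values, operating point, and concentration range are assumptions for demonstration, not values from the paper:

```python
import numpy as np

def rate(x, vmax=1.0, km=0.5):
    """Hypothetical target rate law (Michaelis-Menten)."""
    return vmax * x / (km + x)

def taylor_power_law(x0, vmax=1.0, km=0.5):
    """Classical power-law: first-order Taylor expansion in log-log space at x0."""
    g = km / (km + x0)                    # kinetic order d(log v)/d(log x) at x0
    alpha = rate(x0, vmax, km) / x0 ** g  # rate constant matching v(x0) exactly
    return alpha, g

def lsq_power_law(xs, vmax=1.0, km=0.5):
    """Alternative power-law: least-squares fit of log v = log(alpha) + g*log(x)."""
    A = np.column_stack([np.ones_like(xs), np.log(xs)])
    coeffs, *_ = np.linalg.lstsq(A, np.log(rate(xs, vmax, km)), rcond=None)
    return np.exp(coeffs[0]), coeffs[1]

xs = np.linspace(0.1, 2.0, 50)            # concentration range of interest
a_t, g_t = taylor_power_law(0.5)          # expansion at an assumed operating point
a_l, g_l = lsq_power_law(xs)
sse = lambda a, g: np.sum((np.log(a * xs ** g) - np.log(rate(xs))) ** 2)
# By construction, the least-squares fit mimics the rate law over the
# whole range at least as well (in log-space squared error) as the
# local Taylor power-law does.
```

The local expansion is exact at the operating point but degrades away from it, which is precisely the behavior the least-squares variant is meant to improve.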
DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean
2014-05-01
Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful in developing new models and simulations of turbulence, as well as for providing a standard baseline for comparison of sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field collected video sequences.
Sarkar, Sujit
2018-04-12
An attempt is made to study and understand the quantization behavior of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to topological quantization are presented, as is a symmetry study of the model Hamiltonian. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of systems with long-range interaction. We also present several exact solutions with physical explanations. Finally, we present the relation between duality, symmetry, and topological characterization. Our work provides a new perspective on topological quantization.
Initial Results from the Bloomsburg University Goniometer Laboratory
NASA Technical Reports Server (NTRS)
Shepard, M. K.
2002-01-01
The Bloomsburg University Goniometer Laboratory (B.U.G. Lab) consists of three systems for studying the photometric properties of samples. The primary system is an automated goniometer capable of measuring the entire bi-directional reflectance distribution function (BRDF) of samples. Secondary systems include a reflectance spectrometer and digital video camera with macro zoom lens for characterizing and documenting other physical properties of measured samples. Works completed or in progress include the characterization of the BRDF of calibration surfaces for the 2003 Mars Exploration Rovers (MER03), Martian analog soils including JSC-Mars-1, and tests of photometric models.
Color Reproduction System Based on Color Appearance Model and Gamut Mapping
2000-07-01
DISTRIBUTION: Approved for public release, distribution unlimited. This paper is part of the compilation report ADP011333 thru ADP011362 (TITLE: Input/Output...), UNCLASSIFIED. Authors: Fang-Hsuan Cheng, Chih-Yuan... Abstract (fragment): ...perception is usually different. Basically, the influencing factors are device calibration and characterization, viewing condition, device gamut, and human...
Interesting examples of supervised continuous variable systems
NASA Technical Reports Server (NTRS)
Chase, Christopher; Serrano, Joe; Ramadge, Peter
1990-01-01
The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of the closed loop behavior: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that could be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose from a set of transition maps to implement.
Tiedeman, Claire; Hill, Mary C.
2007-01-01
When simulating natural and engineered groundwater flow and transport systems, one objective is to produce a model that accurately represents important aspects of the true system. However, using direct measurements of system characteristics, such as hydraulic conductivity, to construct a model often produces simulated values that poorly match observations of the system state, such as hydraulic heads, flows and concentrations (for example, Barth et al., 2001). This occurs because of inaccuracies in the direct measurements and because the measurements commonly characterize system properties at different scales from that of the model aspect to which they are applied. In these circumstances, the conservation of mass equations represented by flow and transport models can be used to test the applicability of the direct measurements, such as by comparing model simulated values to the system state observations. This comparison leads to calibrating the model, by adjusting the model construction and the system properties as represented by model parameter values, so that the model produces simulated values that reasonably match the observations.
Enhancing metaproteomics-The value of models and defined environmental microbial systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbst, Florian-Alexander; Lünsmann, Vanessa; Kjeldal, Henrik
2016-01-21
Metaproteomics - the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time - has added unique features and possibilities to the study of environmental microbial communities and the unraveling of these "black boxes". New technical challenges arose that were not an issue for classical proteome analytics, and choosing a model system appropriate to the research question can be difficult. Here, we review different model systems for metaproteome analysis. Following a short introduction to microbial communities and systems, we discuss the most used systems, ranging from technical systems over rhizospheric models to systems for the medical field. These include acid mine drainage, anaerobic digesters, activated sludge, planted fixed-bed reactors, gastrointestinal simulators, and in vivo models. Model systems are useful to evaluate the challenges encountered within (but not limited to) metaproteomics, including species complexity and coverage, biomass availability, and reliable protein extraction. The implementation of model systems can be considered a step forward in better understanding microbial responses and the ecological distribution of member organisms. In the future, novel improvements will be necessary to fully engage complex environmental systems.
Inverse approaches with lithologic information for a regional groundwater system in southwest Kansas
Tsou, Ming‐shu; Perkins, S.P.; Zhan, X.; Whittemore, Donald O.; Zheng, Lingyun
2006-01-01
Two practical approaches incorporating lithologic information for groundwater model calibration are presented to estimate distributed, cell-based hydraulic conductivity. The first approach is to estimate optimal hydraulic conductivities for geological materials by incorporating the thickness distribution of materials into inverse modeling. In the second approach, residuals for the groundwater model solution are minimized according to a globalized Newton method with the aid of a Geographic Information System (GIS) to calculate a cell-wise distribution of hydraulic conductivity. Both approaches honor geologic data and were effective in characterizing the heterogeneity of a regional groundwater system in southwest Kansas. © 2005 Elsevier Ltd. All rights reserved.
Systems Engineering and Application of System Performance Modeling in SIM Lite Mission
NASA Technical Reports Server (NTRS)
Moshir, Mehrdad; Murphy, David W.; Milman, Mark H.; Meier, David L.
2010-01-01
The SIM Lite Astrometric Observatory will be the first space-based Michelson interferometer operating in the visible wavelength, with the ability to perform ultra-high precision astrometric measurements on distant celestial objects. SIM Lite data will address in a fundamental way questions such as characterization of Earth-mass planets around nearby stars. To accomplish these goals it is necessary to rely on a model-based systems engineering approach - much more so than most other space missions. This paper will describe in further detail the components of this end-to-end performance model, called "SIM-sim", and show how it has helped the systems engineering process.
Advances In High Temperature (Viscoelastoplastic) Material Modeling for Thermal Structural Analysis
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Saleeb, Atef F.
2005-01-01
Typical high-temperature applications demand high-performance materials: 1) complex thermomechanical loading; 2) complex material response, requiring time-dependent/hereditary models (viscoelastic/viscoplastic); and 3) comprehensive characterization (tensile, creep, relaxation) for a variety of material systems.
ERIC Educational Resources Information Center
Lu, Xingjiang; Yao, Chen; Zheng, Jianmin
2013-01-01
This paper focuses on the training of undergraduate students' innovation ability. Building on the theoretical framework of Quality Function Deployment (QFD), we propose a teaching quality management model. Based on this model, we establish a multilevel decomposition indicator system, which integrates innovation ability characterized by four…
The US EPA’s Human Exposure Model (HEM) is an integrated modeling system to estimate human exposure to chemicals in household consumer products. HEM consists of multiple modules, which may be run either together, or independently. The Source-to-Dose (S2D) module in HEM use...
Modeling population exposures to silver nanoparticles present in consumer products
NASA Astrophysics Data System (ADS)
Royce, Steven G.; Mukherjee, Dwaipayan; Cai, Ting; Xu, Shu S.; Alexander, Jocelyn A.; Mi, Zhongyuan; Calderon, Leonardo; Mainelis, Gediminas; Lee, KiBum; Lioy, Paul J.; Tetley, Teresa D.; Chung, Kian Fan; Zhang, Junfeng; Georgopoulos, Panos G.
2014-11-01
Exposures of the general population to manufactured nanoparticles (MNPs) are expected to keep rising due to increasing use of MNPs in common consumer products (PEN 2014). The present study focuses on characterizing ambient and indoor population exposures to silver MNPs (nAg). For situations where detailed, case-specific exposure-related data are not available, as in the present study, a novel tiered modeling system, Prioritization/Ranking of Toxic Exposures with GIS (geographic information system) Extension (PRoTEGE), has been developed: it employs a product life cycle analysis (LCA) approach coupled with basic human life stage analysis (LSA) to characterize potential exposures to chemicals of current and emerging concern. The PRoTEGE system has been implemented for ambient and indoor environments, utilizing available MNP production, usage, and properties databases, along with laboratory measurements of potential personal exposures from consumer spray products containing nAg. Modeling of environmental and microenvironmental levels of MNPs employs probabilistic material flow analysis combined with product LCA to account for releases during manufacturing, transport, usage, disposal, etc. Human exposure and dose characterization further employ screening microenvironmental modeling and intake fraction methods combined with LSA for potentially exposed populations, to assess differences associated with gender, age, and demographics. Population distributions of intakes, estimated using the PRoTEGE framework, are consistent with published individual-based intake estimates, demonstrating that PRoTEGE is capable of capturing realistic exposure scenarios for the US population. Distributions of intakes are also used to calculate biologically relevant population distributions of uptakes and target tissue doses through human airway dosimetry modeling that takes into account product MNP size distributions and age-relevant physiological parameters.
A Statistical Graphical Model of the California Reservoir System
NASA Astrophysics Data System (ADS)
Taeb, A.; Reager, J. T.; Turmon, M.; Chandrasekaran, V.
2017-11-01
The recent California drought has highlighted the potential vulnerability of the state's water management infrastructure to multiyear dry intervals. Due to the high complexity of the network, dynamic storage changes in California reservoirs on a state-wide scale have previously been difficult to model using either traditional statistical or physical approaches. Indeed, although there is a significant line of research on exploring models for single (or a small number of) reservoirs, these approaches are not amenable to a system-wide modeling of the California reservoir network due to the spatial and hydrological heterogeneities of the system. In this work, we develop a state-wide statistical graphical model to characterize the dependencies among a collection of 55 major California reservoirs across the state; this model is defined with respect to a graph in which the nodes index reservoirs and the edges specify the relationships or dependencies between reservoirs. We obtain and validate this model in a data-driven manner based on reservoir volumes over the period 2003-2016. A key feature of our framework is a quantification of the effects of external phenomena that influence the entire reservoir network. We further characterize the degree to which physical factors (e.g., state-wide Palmer Drought Severity Index (PDSI), average temperature, snow pack) and economic factors (e.g., consumer price index, number of agricultural workers) explain these external influences. As a consequence of this analysis, we obtain a system-wide health diagnosis of the reservoir network as a function of PDSI.
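A toy version of the dependency-graph idea can be sketched with partial correlations, which are what the edges of a Gaussian graphical model encode. The snippet below uses synthetic data for four hypothetical reservoirs driven by one shared external factor; every number is an illustrative assumption, not part of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_res = 500, 4
driver = rng.normal(size=n_months)        # shared external influence (e.g., drought)
volumes = np.column_stack(
    [driver + 0.5 * rng.normal(size=n_months) for _ in range(n_res)]
)

def partial_correlations(data):
    """Partial correlations from the inverse covariance (precision) matrix;
    nonzero off-diagonal entries correspond to graphical-model edges."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

pc = partial_correlations(volumes)
edges = [(i, j) for i in range(n_res) for j in range(i + 1, n_res)
         if abs(pc[i, j]) > 0.1]
# A shared latent driver makes the observed graph dense (all pairs connected);
# modeling such system-wide influences explicitly, as the paper does,
# is what allows the remaining reservoir-to-reservoir graph to be sparse.
```

This is only a sketch of the edge-estimation step; the paper's framework additionally quantifies and attributes the latent system-wide influences.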
Alternative model for administration and analysis of research-based assessments
NASA Astrophysics Data System (ADS)
Wilcox, Bethany R.; Zwickl, Benjamin M.; Hobbs, Robert D.; Aiken, John M.; Welch, Nathan M.; Lewandowski, H. J.
2016-06-01
Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption of these assessments within the broader community of physics instructors. Within this historical model, assessment developers create high quality, validated assessments, make them available for a wide range of instructors to use, and provide minimal (if any) support to assist with administration or analysis of the results. Here, we present and discuss an alternative model for assessment dissemination, which is characterized by centralized data collection and analysis. This model provides a greater degree of support for both researchers and instructors in order to more explicitly support adoption of research-based assessments. Specifically, we describe our experiences developing a centralized, automated system for an attitudinal assessment we previously created to examine students' epistemologies and expectations about experimental physics. This system provides a proof of concept that we use to discuss the advantages associated with centralized administration and data collection for research-based assessments in PER. We also discuss the challenges that we encountered while developing, maintaining, and automating this system. Ultimately, we argue that centralized administration and data collection for standardized assessments is a viable and potentially advantageous alternative to the default model characterized by decentralized administration and analysis. Moreover, with the help of online administration and automation, this model can support the long-term sustainability of centralized assessment systems.
Characterization of Orbital Debris via Hyper-Velocity Laboratory-Based Tests
NASA Technical Reports Server (NTRS)
Cowardin, Heather; Liou, J.-C.; Anz-Meador, Phillip; Sorge, Marlon; Opiela, John; Fitz-Coy, Norman; Huynh, Tom; Krisko, Paula
2017-01-01
Existing DOD and NASA satellite breakup models are based on a key laboratory test, Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve these models, the NASA Orbital Debris Program Office, in collaboration with the Air Force Space and Missile Systems Center, The Aerospace Corporation, and the University of Florida, replicated a hypervelocity impact using a mock-up satellite, DebriSat, in controlled laboratory conditions. DebriSat is representative of present-day LEO satellites, built with modern spacecraft materials and construction techniques. Fragments down to 2 mm in size will be characterized by their physical and derived properties. A subset of fragments will be further analyzed in laboratory radar and optical facilities to update the existing radar-based NASA Size Estimation Model (SEM) and develop a comparable optical-based SEM. A historical overview of the project, status of the characterization process, and plans for integrating the data into various models will be discussed herein.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, Steven M.
2001-01-01
Since most advanced material systems (for example, metallic-, polymer-, and ceramic-based systems) currently being researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center, in conjunction with the University of Akron, has emphasized concurrently addressing three important and related areas: 1) mathematical formulation; 2) algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; and 3) parameter estimation for characterizing the model. This concurrent perspective on constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis: 1) the lack of efficient and robust integration algorithms; and 2) the difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.
Modelling Parameters Characterizing Selected Water Supply Systems in Lower Silesia Province
NASA Astrophysics Data System (ADS)
Nowogoński, Ireneusz; Ogiołda, Ewa
2017-12-01
The work presents issues of modelling water supply systems in the context of the basic parameters characterizing their operation. In addition to typical parameters, such as water pressure and flow rate, assessing the age of the water is important as a measure of the quality of the distributed medium. The analysis was based on two facilities, one of which serves a diverse spectrum of consumers, including residential housing and industry. The simulations carried out indicate the possibility of water quality degradation as a result of excessively long periods of storage in the water supply network. Also important is the influence of the irregularity of water use, especially when supplying various kinds of consumers (in the analysed case - mining companies).
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while also capturing singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by kernel associative memories. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database, with a Bayesian classifier used for recognition. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.
Singh, Aman P; Maass, Katie F; Betts, Alison M; Wittrup, K Dane; Kulkarni, Chethana; King, Lindsay E; Khot, Antari; Shah, Dhaval K
2016-07-01
A mathematical model capable of accurately characterizing intracellular disposition of ADCs is essential for a priori predicting unconjugated drug concentrations inside the tumor. Towards this goal, the objectives of this manuscript were to: (1) evolve previously published cellular disposition model of ADC with more intracellular details to characterize the disposition of T-DM1 in different HER2 expressing cell lines, (2) integrate the improved cellular model with the ADC tumor disposition model to a priori predict DM1 concentrations in a preclinical tumor model, and (3) identify prominent pathways and sensitive parameters associated with intracellular activation of ADCs. The cellular disposition model was augmented by incorporating intracellular ADC degradation and passive diffusion of unconjugated drug across tumor cells. Different biomeasures and chemomeasures for T-DM1, quantified in the companion manuscript, were incorporated into the modified model of ADC to characterize in vitro pharmacokinetics of T-DM1 in three HER2+ cell lines. When the cellular model was integrated with the tumor disposition model, the model was able to a priori predict tumor DM1 concentrations in xenograft mice. Pathway analysis suggested different contribution of antigen-mediated and passive diffusion pathways for intracellular unconjugated drug exposure between in vitro and in vivo systems. Global and local sensitivity analyses revealed that non-specific deconjugation and passive diffusion of the drug across tumor cell membrane are key parameters for drug exposure inside a cell. Finally, a systems pharmacokinetic model for intracellular processing of ADCs has been proposed to highlight our current understanding about the determinants of ADC activation inside a cell.
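The qualitative structure of such a cellular disposition model can be sketched as a small ODE system. The sketch below is not the published T-DM1 model; it is a minimal hypothetical analogue with made-up rate constants and a one-drug-per-antibody assumption, showing only the pathways the abstract names (internalization, intracellular degradation releasing drug, non-specific deconjugation, and passive diffusion across the membrane):

```python
# Illustrative first-order rate constants (1/h) - assumptions, not fitted values.
k_int, k_deg, k_dec, k_diff = 0.1, 0.05, 0.01, 0.2

def simulate(t_end=100.0, dt=0.01, adc0=1.0):
    """Forward-Euler integration of a minimal ADC disposition sketch.
    States: extracellular ADC, internalized ADC, intracellular free drug,
    extracellular free drug (assuming one drug molecule per ADC)."""
    adc_ext, adc_cell, drug_cell, drug_ext = adc0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d_adc_ext = -k_int * adc_ext - k_dec * adc_ext          # uptake + deconjugation
        d_adc_cell = k_int * adc_ext - k_deg * adc_cell         # lysosomal degradation
        d_drug_cell = k_deg * adc_cell - k_diff * (drug_cell - drug_ext)
        d_drug_ext = k_dec * adc_ext + k_diff * (drug_cell - drug_ext)
        adc_ext += dt * d_adc_ext
        adc_cell += dt * d_adc_cell
        drug_cell += dt * d_drug_cell
        drug_ext += dt * d_drug_ext
    return adc_ext, adc_cell, drug_cell, drug_ext
```

Because every loss term reappears as a gain elsewhere, total drug (conjugated plus free) is conserved, which is a useful sanity check when building models of this kind.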
Identification of Low Order Equivalent System Models From Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
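The frequency-domain equation-error idea can be sketched for the simplest possible case, a first-order model H(s) = K/(s + a). Real low-order equivalent systems include time delays and higher-order dynamics; this is a minimal noiseless illustration with made-up true parameters, showing how the equation error makes the fit linear in the unknowns:

```python
import numpy as np

# Hypothetical "true" system and measured frequency response H(jw).
K_true, a_true = 2.0, 3.0
w = np.linspace(0.1, 20.0, 100)          # rad/s, assumed test band
H = K_true / (1j * w + a_true)

# Equation error: H(jw)*(jw + a) = K  =>  H(jw)*jw = K - a*H(jw),
# which is linear in the unknowns (K, a) and solvable by least squares.
A = np.column_stack([np.ones_like(H), -H])
y = H * 1j * w
A_ri = np.vstack([A.real, A.imag])        # stack real/imag parts for a real solve
y_ri = np.concatenate([y.real, y.imag])
(K_est, a_est), *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
```

With noiseless data the recovery is exact; with flight-test data, weighting and output-error refinement (as studied in the paper) become important because the equation error is biased by noise.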
Bipartite charge fluctuations in one-dimensional Z2 superconductors and insulators
NASA Astrophysics Data System (ADS)
Herviou, Loïc; Mora, Christophe; Le Hur, Karyn
2017-09-01
Bipartite charge fluctuations (BCFs) have been introduced to provide an experimental indication of many-body entanglement. They have proved to be a very efficient and useful tool to characterize quantum phase transitions in a variety of quantum models conserving the total number of particles (or magnetization for spin systems) and can be measured experimentally. We study the BCFs in generic one-dimensional Z2 (topological) models including the Kitaev superconducting wire model, the Ising chain, and various topological insulators such as the Su-Schrieffer-Heeger model. The considered charge (either the fermionic number or the relative density) is no longer conserved, leading to macroscopic fluctuations of the number of particles. We demonstrate that at phase transitions characterized by a linear dispersion, the BCFs probe the change in a winding number that allows one to pinpoint the transition and corresponds to the topological invariant for standard models. Additionally, we prove that a subdominant logarithmic contribution is still present at the exact critical point. Its quantized coefficient is universal and characterizes the critical model. Results are extended to Rashba topological nanowires and to the XYZ model.
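For the Su-Schrieffer-Heeger model mentioned above, the winding number that the BCFs are shown to probe can be computed directly from the Bloch vector h(k) = (t1 + t2 cos k, t2 sin k). The sketch below uses illustrative hopping amplitudes; the model is standard, but the numerical values are assumptions:

```python
import numpy as np

def ssh_winding(t1, t2, nk=4001):
    """Winding number of the SSH Bloch vector around the origin:
    1 in the topological phase (t2 > t1), 0 in the trivial phase."""
    k = np.linspace(-np.pi, np.pi, nk)
    # Accumulated angle of h(k); unwrap removes 2*pi jumps of arctan2.
    theta = np.unwrap(np.arctan2(t2 * np.sin(k), t1 + t2 * np.cos(k)))
    return round((theta[-1] - theta[0]) / (2 * np.pi))
```

At the critical point t1 = t2 the gap closes (linear dispersion) and the winding number jumps by one, which is the change the BCFs detect.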
Results from a workshop on research needs for modeling aquifer thermal energy storage systems
NASA Astrophysics Data System (ADS)
Drost, M. K.
1990-08-01
A workshop on aquifer thermal energy storage (ATES) system modeling was conducted by Pacific Northwest Laboratory (PNL). The goal of the workshop was to develop a list of high-priority research activities that would facilitate the commercial success of ATES. During the workshop, participants reviewed currently available modeling tools for ATES systems and produced a list of significant issues related to modeling ATES systems. Participants assigned a priority to each issue by voting and developed a list of research needs for each of four high-priority research areas: a feasibility study model, engineering design models, aquifer characterization, and an economic model. The workshop participants concluded that ATES commercialization can be accelerated by aggressive development of ATES modeling tools and made specific recommendations for that development.
NASA Technical Reports Server (NTRS)
Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.
1993-01-01
We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.
Hospital Systems, Convenient Care Strategies, and Healthcare Reform.
Kaissi, Amer; Shay, Patrick; Roscoe, Christina
2016-01-01
Retail clinics (RCs) and urgent care centers (UCCs) are convenient care models that emerged on the healthcare scene in the past 10 to 15 years. Characterized as disruptive innovations, these models of healthcare delivery seem to follow slightly different paths from each other. Hospital systems, the very organizations that were originally threatened by convenient care models, are now developing them and partnering with existing models. We posit that legislative changes such as the Affordable Care Act created challenges for hospital systems that accelerated their adoption of these models. In this study, we analyze 117 hospital systems in six states and report on their convenient care strategies. Our data suggest that UCCs are more prevalent than RCs among hospital systems, and that large and unexplained state-by-state variations exist in the adoption of these strategies. We also speculate on the future role of hospital systems in leading these innovations.
NASA Astrophysics Data System (ADS)
El-Haddad, Mohamed T.; Tao, Yuankai K.
2018-02-01
Design of optical imaging systems requires careful balancing of lens aberrations to optimize the point-spread function (PSF) and minimize field distortions. Aberrations and distortions are a result of both lens geometry and glass material. While most lens manufacturers provide optical models to facilitate system-level simulation, these models are often not reflective of true system performance because of manufacturing tolerances. Optical design can be further confounded when achromatic or proprietary lenses are employed. Achromats are ubiquitous in systems that utilize broadband sources due to their superior performance in balancing chromatic aberrations. Similarly, proprietary lenses may be custom-designed for optimal performance, but lens models are generally not available. Optical coherence tomography (OCT) provides non-contact, depth-resolved imaging with high axial resolution and sensitivity. OCT has been previously used to measure the refractive index of unknown materials. In a homogeneous sample, the group refractive index is obtained as the ratio between the measured optical and geometric thicknesses of the sample. In heterogeneous samples, a method called focus-tracking (FT) quantifies the effect of focal shift introduced by the sample. This enables simultaneous measurement of the thickness and refractive index of intermediate sample layers. Here, we extend the mathematical framework of FT to spherical surfaces, and describe a method based on OCT and FT for full characterization of lens geometry and refractive index. Finally, we validate our characterization method on commercially available singlet and doublet lenses.
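The homogeneous-sample relation stated in this abstract reduces to a one-line computation. A minimal sketch (function names, units, and the example numbers are illustrative assumptions, not from the paper):

```python
# Group refractive index of a homogeneous layer from OCT measurements:
#   n_g = (measured optical thickness) / (geometric thickness)
# Names and units here are illustrative, not from the paper.

def group_index(optical_thickness_um, geometric_thickness_um):
    """Estimate the group refractive index of a homogeneous sample layer."""
    if geometric_thickness_um <= 0:
        raise ValueError("geometric thickness must be positive")
    return optical_thickness_um / geometric_thickness_um

# Example: a 1000 um glass slab whose optical thickness measures 1520 um
n_g = group_index(1520.0, 1000.0)  # -> 1.52
```
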
Development of Models to Simulate Tracer Tests for Characterization of Enhanced Geothermal Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Mark D.; Reimus, Paul; Vermeul, Vincent R.
2013-05-01
A recent report found that power and heat produced from enhanced (or engineered) geothermal systems (EGSs) could have a major impact on the U.S. energy production capability while having a minimal impact on the environment. EGS resources differ from high-grade hydrothermal resources in that they lack sufficient temperature distribution, permeability/porosity, fluid saturation, or recharge of reservoir fluids. Therefore, quantitative characterization of temperature distributions and the surface area available for heat transfer in EGS is necessary for the design and commercial development of the geothermal energy of a potential EGS site. The goal of this project is to provide integrated tracer and tracer interpretation tools to facilitate this characterization. This project was initially focused on tracer development with the application of perfluorinated tracer (PFT) compounds, non-reactive tracers used in numerous applications from atmospheric transport to underground leak detection, to geothermal systems, and evaluation of encapsulated PFTs that would release tracers at targeted reservoir temperatures. After the 2011 midyear review and subsequent discussions with the U.S. Department of Energy Geothermal Technology Program (GTP), emphasis was shifted to interpretive tool development, testing, and validation. Subsurface modeling capabilities are an important component of this project for both the design of suitable tracers and the interpretation of data from in situ tracer tests, be they single- or multi-well tests. The purpose of this report is to describe the results of the tracer and model development for simulating and conducting tracer tests for characterizing EGS parameters.
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software. A software and hardware system also depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized by involving the concepts of graph, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems supported by rules of use from the relevant agencies. The enumeration model is obtained based on software reliability calculation.
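The idea of composing procedure and rule reliabilities can be illustrated with the standard series-parallel reliability algebra. This is a generic sketch under that assumption, not the paper's actual enumeration model; the interface structure and numbers are hypothetical:

```python
def series(*reliabilities):
    """System works only if every component works: multiply reliabilities."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """Redundant components: the system fails only if all of them fail."""
    q = 1.0
    for x in reliabilities:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical interface: two redundant validation rules (0.9 each)
# followed by one mandatory logging procedure (0.99).
r_interface = series(parallel(0.9, 0.9), 0.99)  # -> 0.9801
```
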
Wave-Optics Analysis of Pupil Imaging
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Bos, Brent J.
2006-01-01
Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.
Wireless Channel Characterization in the Airport Surface Environment
NASA Technical Reports Server (NTRS)
Neville, Joshua T.
2004-01-01
Given the anticipated increase in air traffic in the coming years, modernization of the National Airspace System (NAS) is a necessity. Part of this modernization effort will include updating current communication, navigation, and surveillance (CNS) systems to deal with the increased traffic as well as developing advanced CNS technologies for the systems. An example of such technology is the integrated CNS (ICNS) network being developed by the Advanced CNS Architecture and Systems Technology (ACAST) group for use in the airport surface environment. The ICNS network would be used to convey voice/data between users in a secure and reliable manner. The current surface system only supports voice and does so through an obsolete physical infrastructure. The old system is vulnerable to outages and costly to maintain. The proposed ICNS network will include a wireless radio link. To ensure optimal performance, a thorough and accurate characterization of the channel across which the link would operate is necessary. The channel is the path the signal takes from the transmitter to the receiver and is prone to various forms of interference. Channel characterization involves a combination of analysis, simulation, and measurement. My work this summer was divided into four tasks. The first task required compiling and reviewing reference material that dealt with the characterization and modeling of aeronautical channels. The second task involved developing a systematic approach that could be used to group airports into classes, e.g. small airfields, medium airports, large open airports, large cluttered airports, etc. The third task consisted of implementing computer simulations of existing channel models. The fourth task entailed measuring possible interference sources in the airport surface environment via a spectrum analyzer.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance, and Hurst coefficient of the model space to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
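Drawing an initial model from mean, variance, and Hurst coefficient can be sketched with generic spectral synthesis of power-law correlated noise. This is a common technique for the kind of sampler the abstract describes, not the authors' exact scheme; the spectral exponent convention and the velocity-model numbers are assumptions:

```python
import numpy as np

def fractal_initial_model(n, hurst, mean, std, rng):
    """Draw a correlated model realization by spectral synthesis.

    White Gaussian noise is shaped with a power-law spectrum controlled by
    the Hurst coefficient, then rescaled to the target mean and variance.
    Generic power-law sampler, not the paper's exact method.
    """
    beta = 2.0 * hurst + 1.0                  # fBm-like spectral exponent (assumed)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                       # avoid dividing by zero at DC
    amplitude = freqs ** (-beta / 2.0)
    phase = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    x = np.fft.irfft(amplitude * phase, n)
    x = (x - x.mean()) / x.std()              # exact rescale to mean/std
    return mean + std * x

rng = np.random.default_rng(0)
# e.g. a P-wave velocity model: mean 3000 m/s, std 200 m/s, Hurst 0.7
model = fractal_initial_model(1024, hurst=0.7, mean=3000.0, std=200.0, rng=rng)
```
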
NASA Astrophysics Data System (ADS)
Sierra, O.; Parrado, G.; Cañón, Y.; Porras, A.; Alonso, D.; Herrera, D. C.; Peña, M.; Orozco, J.
2016-07-01
This paper presents the progress made by the Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey (SGC in its Spanish acronym) towards the characterization of its gamma spectrometric systems for Instrumental Neutron Activation Analysis (INAA), with the aim of introducing corrections to the measurements for variations in sample geometry. Characterization includes the empirical determination of the interaction point of gamma radiation inside the germanium crystal, through the application of a linear model, and the use of fast Monte Carlo N-Particle (MCNP) simulations to estimate correction factors for differences in counting efficiency that arise from variations in density between samples and standards.
Remote Thermal IR Spectroscopy of our Solar System
NASA Technical Reports Server (NTRS)
Kostiuk, Theodor; Hewagama, Tilak; Goldstein, Jeffrey; Livengood, Timothy; Fast, Kelly
1999-01-01
Indirect methods to detect extrasolar planets have been successful in identifying a number of stars with companion planets. No direct detection of an extrasolar planet has yet been reported. Spectroscopy in the thermal infrared region provides a potentially powerful approach to detection and characterization of planets and planetary systems. We can use knowledge of our own solar system, its planets and their atmospheres to model spectral characteristics of planets around other stars. Spectra derived from modeling our own solar system seen from an extrasolar perspective can be used to constrain detection strategies, identification of planetary class (terrestrial vs. gaseous) and retrieval of chemical, thermal and dynamical information. Emission from planets in our solar system peaks in the thermal infrared region, approximately 10 - 30 microns, substantially displaced from the maximum of the much brighter solar emission in the visible near 0.5 microns. This fact provides a relatively good contrast ratio to discriminate between stellar (solar) and planetary emission and optimize the detectability of planetary spectra. Important molecular constituents in planetary atmospheres have rotational-vibrational spectra in the thermal infrared region. Spectra from these molecules have been well characterized in the laboratory and studied in the atmospheres of solar system planets from ground-based and space platforms. The best examples of such measurements are the studies with Fourier transform spectrometers, the Infrared Interferometer Spectrometers (IRIS), from spacecraft: Earth observed from NIMBUS 8, Mars observed from Mariner 9, and the outer planets observed from Voyager spacecraft. An Earth-like planet is characterized by atmospheric spectra of ozone, carbon dioxide, and water.
Terrestrial planets have oxidizing atmospheres which are easily distinguished from reducing atmospheres of gaseous giant planets which lack oxygen-bearing species and are characterized by spectra containing hydrocarbons such as methane and ethane. Spectroscopic information on extrasolar planets thus can permit their classification. Spectra and spectral lines contain information on the temperature structure of the atmosphere. Line and band spectra can be used to identify the molecular constituents and retrieve species abundances, thereby classifying and characterizing the planet. At high enough spectral resolution characteristic planetary atmospheric dynamics and unique phenomena such as failure of local thermodynamic equilibrium can be identified. Dynamically induced effects such as planetary rotation and orbital velocity shift and change the shape of spectral features and must be modeled in detailed spectral studies. We will use our knowledge of the compositional, thermal and dynamical characteristics of planetary atmospheres in our own solar system to model spectra observed remotely on similar planets in extrasolar planetary systems. We will use a detailed radiative transfer and beam integration program developed for the modeling and interpretation of thermal infrared spectra measured from nearby planets to generate models of an extra-solar "Earth" and "Jupiter". From these models we will show how key spectral features distinguish between terrestrial and gaseous planets, what information can be obtained with different spectral resolution, what spectral features can be used to search for conditions for biogenic activity, and how dynamics and distance modify the observed spectra. We also will look at unique planetary phenomena such as atmospheric lasing and discuss their utility as probes for detection and identification of planets.
Results of such studies will provide information to constrain design for instrumentation needed to directly detect extrasolar planets.
Sun, Jianxin; Moore, Lee; Xue, Wei; Kim, James; Zborowski, Maciej; Chalmers, Jeffrey J
2018-05-01
Magnetic separation of cells has been, and continues to be, widely used in a variety of applications, ranging from healthcare diagnostics to detection of food contamination. Typically, these technologies require cells labeled with antibody magnetic particle conjugate and a high magnetic energy gradient created in the flow containing the labeled cells (i.e., a column packed with magnetically inducible material), or dense packing of magnetic particles next to the flow cell. Such designs, while creating high magnetic energy gradients, are not amenable to easy, highly detailed, mathematic characterization. Our laboratories have been characterizing and developing analysis and separation technology that can be used on intrinsically magnetic cells or spores which are typically orders of magnitude weaker than typically immunomagnetically labeled cells. One such separation system is magnetic deposition microscopy (MDM) which not only separates cells, but deposits them in specific locations on slides for further microscopic analysis. In this study, the MDM system has been further characterized, using finite element and computational fluid mechanics software, and separation performance predicted, using a model which combines: 1) the distribution of the intrinsic magnetophoretic mobility of the cells (spores); 2) the fluid flow within the separation device; and 3) accurate maps of the values of the magnetic field (max 2.27 T), and magnetic energy gradient (max of 4.41 T²/mm) within the system. Guided by this model, experimental studies indicated that greater than 95% of the intrinsically magnetic Bacillus spores can be separated with the MDM system. Further, this model allows analysis of cell trajectories which can assist in the design of higher throughput systems. © 2018 Wiley Periodicals, Inc.
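The quantities in this abstract combine in one common magnetophoresis convention, v = u_m · S_m with S_m = ∇(B²)/(2μ₀). The sketch below assumes that convention, and the mobility value in the example is made up:

```python
MU0 = 1.25663706212e-6  # vacuum permeability, T*m/A

def magnetic_energy_gradient(grad_B2_T2_per_m):
    """S_m = grad(B^2) / (2 * mu0); one common convention, assumed here."""
    return grad_B2_T2_per_m / (2.0 * MU0)

def magnetophoretic_velocity(mobility, grad_B2_T2_per_m):
    """v = u_m * S_m; mobility units must match the S_m convention used."""
    return mobility * magnetic_energy_gradient(grad_B2_T2_per_m)

# Paper's peak gradient: 4.41 T^2/mm = 4.41e3 T^2/m.
# The mobility value below is purely illustrative.
v = magnetophoretic_velocity(1.0e-12, 4.41e3)
```

The velocity is linear in mobility, which is why a measured mobility distribution (point 1 in the abstract's model) maps directly onto a distribution of trajectories.
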
NASA Technical Reports Server (NTRS)
Matolak, D. W.; Apaza, Rafael; Foore, Lawrence R.
2006-01-01
We describe a recently completed wideband wireless channel characterization project for the 5 GHz Microwave Landing System (MLS) extension band, for airport surface areas. This work included mobile measurements at large and small airports, and fixed point-to-point measurements. Mobile measurements were made via transmission from the air traffic control tower (ATCT), or from an airport field site (AFS), to a receiving ground vehicle on the airport surface. The point-to-point measurements were between ATCT and AFSs. Detailed statistical channel models were developed from all these measurements. Measured quantities include propagation path loss and power delay profiles, from which we obtain delay spreads, frequency domain correlation (coherence bandwidths), fading amplitude statistics, and channel parameter correlations. In this paper we review the project motivation, measurement coordination, and illustrate measurement results. Example channel modeling results for several propagation conditions are also provided, highlighting new findings.
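Two of the measured quantities named in this abstract, delay spread and coherence bandwidth, follow from the power delay profile by standard formulas. A minimal sketch (generic definitions, not the project's processing chain; the two-tap profile is a made-up example):

```python
import math

def rms_delay_spread(delays_ns, powers):
    """RMS delay spread of a power delay profile (delays in ns, linear power)."""
    total = sum(powers)
    mean = sum(t * p for t, p in zip(delays_ns, powers)) / total
    second = sum(t * t * p for t, p in zip(delays_ns, powers)) / total
    return math.sqrt(second - mean * mean)

def coherence_bandwidth_mhz(rms_spread_ns):
    """Rule-of-thumb 50%-correlation coherence bandwidth, Bc ~ 1/(5*tau_rms)."""
    return 1.0e3 / (5.0 * rms_spread_ns)  # converts 1/ns to MHz

# Hypothetical two-tap profile: equal-power taps at 0 ns and 100 ns
tau = rms_delay_spread([0.0, 100.0], [1.0, 1.0])   # -> 50.0 ns
bc = coherence_bandwidth_mhz(tau)                  # -> 4.0 MHz
```
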
Characterization of Orbital Debris Via Hyper-Velocity Ground-Based Tests
NASA Technical Reports Server (NTRS)
Cowardin, Heather
2015-01-01
The goal is to replicate a hyper-velocity fragmentation event using modern-day spacecraft materials and construction techniques to improve the existing DoD and NASA breakup models. DebriSat is intended to be representative of modern LEO satellites. Major design decisions were reviewed and approved by Aerospace subject matter experts from different disciplines. DebriSat includes seven major subsystems: attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. A key laboratory-based test, the Satellite Orbital debris Characterization Impact Test (SOCIT), supporting the development of the DoD and NASA satellite breakup models was conducted at AEDC in 1992. Breakup models based on SOCIT have supported many applications and matched on-orbit events reasonably well over the years.
Principles for the dynamic maintenance of cortical polarity
Marco, Eugenio; Wedlich-Soldner, Roland; Li, Rong; Altschuler, Steven J.; Wu, Lani F.
2007-01-01
Summary Diverse cell types require the ability to dynamically maintain polarized membrane protein distributions through balancing transport and diffusion. However, design principles underlying dynamically maintained cortical polarity are not well understood. Here we constructed a mathematical model for characterizing the morphology of dynamically polarized protein distributions. We developed analytical approaches for measuring all model parameters from single-cell experiments. We applied our methods to a well-characterized system for studying polarized membrane proteins: budding yeast cells expressing activated Cdc42. We found that balanced diffusion and colocalized transport to and from the plasma membrane were sufficient for accurately describing polarization morphologies. Surprisingly, the model predicts that polarized regions are defined with a precision that is nearly optimal for measured transport rates, and that polarity can be dynamically stabilized through positive feedback with directed transport. Our approach provides a step towards understanding how biological systems shape spatially precise, unambiguous cortical polarity domains using dynamic processes. PMID:17448998
A methodology to enhance electromagnetic compatibility in joint military operations
NASA Astrophysics Data System (ADS)
Buckellew, William R.
The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.
Projecting state-level air pollutant emissions using an integrated assessment model: GCAM-USA.
Integrated Assessment Models (IAMs) characterize the interactions among human and earth systems. IAMs typically have been applied to investigate future energy, land use, and emission pathways at global to continental scales. Recent directions in IAM development include enhanced t...
Characterizing the Influence of Hemispheric Transport on Regional Air Pollution
Expansion of the coupled WRF-CMAQ modeling system to hemispheric scales is pursued to enable the development of a robust modeling framework in which the interactions between atmospheric processes occurring at various spatial and temporal scales can be examined in a consistent man...
Bacteriophage: A Model System for Active Learning.
ERIC Educational Resources Information Center
Luciano, Carl S.; Young, Matthew W.; Patterson, Robin R.
2002-01-01
Describes a student-centered laboratory course in which student teams select phage from sewage samples and characterize the phage in a semester-long project that models real-life scientific research. Results of student evaluations indicate a high level of satisfaction with the course. (Author/MM)
NASA Astrophysics Data System (ADS)
Gallas, Michelle R.; Gallas, Marcia R.; Gallas, Jason A. C.
2014-10-01
We study complex oscillations generated by the de Pillis-Radunskaya model of cancer growth, a model including interactions between tumor cells, healthy cells, and activated immune system cells. We report a wide-ranging systematic numerical classification of the oscillatory states and of their relative abundance. The dynamical states of the cell populations are characterized here by two independent and complementary types of stability diagrams: Lyapunov and isospike diagrams. The model is found to display stability phases organized regularly in old and new ways: Apart from the familiar spirals of stability, it displays exceptionally long zig-zag networks and intermixed cascades of two- and three-doubling flanked stability islands previously detected only in feedback systems with delay. In addition, we also characterize the interplay between continuous spike-adding and spike-doubling mechanisms responsible for the unbounded complexification of periodic wave patterns. This article is dedicated to Prof. Hans Jürgen Herrmann on the occasion of his 60th birthday.
NASA Technical Reports Server (NTRS)
Lynn, Keith C.; Commo, Sean A.; Johnson, Thomas H.; Parker, Peter A.
2011-01-01
Wind tunnel research at NASA Langley Research Center's 31-inch Mach 10 hypersonic facility utilized a 5-component force balance, which provided a pressurized flow-thru capability to the test article. The goal of the research was to determine the interaction effects between the free-stream flow and the exit flow from the reaction control system on the Mars Science Laboratory aeroshell during planetary entry. In the wind tunnel, the balance was exposed to aerodynamic forces and moments, steady-state and transient thermal gradients, and various internal balance cavity pressures. Historically, these effects on force measurement accuracy have not been fully characterized due to limitations in the calibration apparatus. A statistically designed experiment was developed to adequately characterize the behavior of the balance over the expected wind tunnel operating ranges (forces/moments, temperatures, and pressures). The experimental design was based on a Taylor-series expansion in the seven factors for the mathematical models. Model inversion was required to calculate the aerodynamic forces and moments as a function of the strain-gage readings. Details regarding transducer on-board compensation techniques, experimental design development, mathematical modeling, and wind tunnel data reduction are included in this paper.
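The "model inversion" step described in this abstract can be illustrated on a toy linear calibration. The actual work used a seven-factor Taylor-series model; the sketch below shows only the core idea, fitting a sensitivity matrix from synthetic calibration data and inverting it to recover loads from gage readings (all matrices and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration (illustrative only): two applied loads map to two
# strain-gage readings through a sensitivity matrix C with cross-terms.
C_true = np.array([[1.20, 0.10],
                   [0.05, 0.90]])
loads = rng.uniform(-10.0, 10.0, size=(50, 2))   # applied calibration loads
readings = loads @ C_true.T                      # noiseless gage outputs

# Fit C by least squares from the calibration data ...
X, *_ = np.linalg.lstsq(loads, readings, rcond=None)
C_est = X.T

# ... then invert the fitted model to recover loads from readings
# (the analogue of the Taylor-series model inversion in the paper).
recovered = readings @ np.linalg.inv(C_est).T
```
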
Leith, S.D.; Reddy, M.M.; Irez, W.F.; Heymans, M.J.
1996-01-01
The pore structure of Salem limestone is investigated, and conclusions regarding the effect of the pore geometry on modeling moisture and contaminant transport are discussed based on thin section petrography, scanning electron microscopy, mercury intrusion porosimetry, and nitrogen adsorption analyses. These investigations are compared to and shown to complement permeability and capillary pressure measurements for this common building stone. Salem limestone exhibits a bimodal pore size distribution in which the larger pores provide routes for convective mass transfer of contaminants into the material and the smaller pores lead to high surface area adsorption and reaction sites. Relative permeability and capillary pressure measurements of the air/water system indicate that Salem limestone exhibits high capillarity and low effective permeability to water. Based on stone characterization, aqueous diffusion and convection are believed to be the primary transport mechanisms for pollutants in this stone. The extent of contaminant accumulation in the stone depends on the mechanism of partitioning between the aqueous and solid phases. The described characterization techniques and modeling approach can be applied to many systems of interest such as acidic damage to limestone, mass transfer of contaminants in concrete and other porous building materials, and modeling pollutant transport in subsurface moisture zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihaescu, Tatiana, E-mail: mihaescu92tatiana@gmail.com; Isar, Aurelian
We describe the evolution of the quantum entanglement of an open system consisting of two bosonic modes interacting with a common thermal environment, described by two different models. The initial state of the system is taken of Gaussian form. In the case of a thermal bath, characterized by a temperature and dissipation constant which correspond to an asymptotic Gibbs state of the system, we show that for a zero temperature of the thermal bath an initial entangled Gaussian state remains entangled for all finite times. For an entangled initial squeezed thermal state, the phenomenon of entanglement sudden death takes place and we calculate the survival time of entanglement. For the second model of the environment, corresponding to a non-Gibbs asymptotic state, we study the possibility of generating entanglement. We show that the generation of entanglement between two uncoupled bosonic modes is possible only for definite values of the temperature and dissipation constant, which characterize the thermal environment.
13C-based metabolic flux analysis: fundamentals and practice.
Yang, Tae Hoon
2013-01-01
Isotope-based metabolic flux analysis is one of the emerging technologies applied to system level metabolic phenotype characterization in metabolic engineering. Among the developed approaches, (13)C-based metabolic flux analysis has been established as a standard tool and has been widely applied to quantitative pathway characterization of diverse biological systems. To implement (13)C-based metabolic flux analysis in practice, comprehending the underlying mathematical and computational modeling fundamentals is of importance along with carefully conducted experiments and analytical measurements. Such knowledge is also crucial when designing (13)C-labeling experiments and properly acquiring key data sets essential for in vivo flux analysis implementation. In this regard, the modeling fundamentals of (13)C-labeling systems and analytical data processing are the main topics we will deal with in this chapter. Along with this, the relevant numerical optimization techniques are addressed to help implementation of the entire computational procedures aiming at (13)C-based metabolic flux analysis in vivo.
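The flux-estimation principle underlying this abstract can be shown on the simplest possible case, a single branch point. This is a minimal illustration, not the full isotopomer machinery of (13)C-MFA, and the enrichment values are hypothetical:

```python
# Branch-point sketch: a product pool is fed by two pathways with known,
# distinct 13C fractional enrichments. The measured product enrichment then
# fixes the flux split f via the mixing balance
#   x_meas = f * x_path1 + (1 - f) * x_path2
# (values below are hypothetical, for illustration only)

def flux_split(x_meas, x_path1, x_path2):
    """Fraction of flux carried by pathway 1 at a two-way branch point."""
    if x_path1 == x_path2:
        raise ValueError("pathway enrichments must differ to resolve the split")
    return (x_meas - x_path2) / (x_path1 - x_path2)

# Pathways labeled at 0.50 and 0.20; product measured at 0.35
f = flux_split(0.35, 0.50, 0.20)  # -> 0.5
```

Real (13)C-MFA generalizes this to networks of isotopomer balances solved by numerical optimization, which is why the chapter stresses the modeling and optimization fundamentals.
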
Constraints on Fluctuations in Sparsely Characterized Biological Systems.
Hilfinger, Andreas; Norman, Thomas M; Vinnicombe, Glenn; Paulsson, Johan
2016-02-05
Biochemical processes are inherently stochastic, creating molecular fluctuations in otherwise identical cells. Such "noise" is widespread but has proven difficult to analyze because most systems are sparsely characterized at the single cell level and because nonlinear stochastic models are analytically intractable. Here, we exactly relate average abundances, lifetimes, step sizes, and covariances for any pair of components in complex stochastic reaction systems even when the dynamics of other components are left unspecified. Using basic mathematical inequalities, we then establish bounds for whole classes of systems. These bounds highlight fundamental trade-offs that show how efficient assembly processes must invariably exhibit large fluctuations in subunit levels and how eliminating fluctuations in one cellular component requires creating heterogeneity in another.
Automatic labeling and characterization of objects using artificial neural networks
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA supported scientific data bases are usually developed, managed and populated in a tedious, error prone and self-limiting way in terms of what can be described in a relational Data Base Management System (DBMS). The next generation Earth remote sensing platforms, i.e., the Earth Observing System (EOS), will be capable of generating data at a rate of over 300 megabits per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented data base where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven knowledge-based scientific information system.
ERIC Educational Resources Information Center
Castro, Edgar Oscar
2013-01-01
A 30-year contribution of the Space Shuttle Program is the evolution of NASA's social actions through organizational learning. This study investigated how NASA learned over time following two catastrophic accidents. Schwandt's (1997) organizational Learning System Model (OLSM) characterized the learning in this High Reliability…
Synthesis and Characterization of Ionically Crosslinked Elastomers
2015-05-12
In this research, poly(n-butyl acrylate) (PBA) elastomers were investigated as model systems to study the thermomechanical...
Activity-Based Protein Profiling of Microbes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadler, Natalie C.; Wright, Aaron T.
Activity-Based Protein Profiling (ABPP), in conjunction with multimodal characterization techniques, has yielded impactful findings in microbiology, particularly in pathogen, bioenergy, drug discovery, and environmental research. Using small-molecule chemical probes that react irreversibly with specific proteins or protein families in complex systems has provided insights into enzyme functions in central metabolic pathways, drug-protein interactions, and regulatory protein redox, for systems ranging from photoautotrophic cyanobacteria to mycobacteria, and combining live-cell or cell-extract ABPP with proteomics, molecular biology, modeling, and other techniques has greatly expanded our understanding of these systems. New opportunities for applying ABPP to microbial systems include enhancing protein annotation, characterizing protein activities in myriad environments, and revealing signal transduction and regulatory mechanisms in microbial systems.
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Balakrishna, S.; Kilgore, W. Allen
1995-01-01
A state-of-the-art, computerized model protection and dynamic response monitoring system has been developed for the NASA Langley Research Center National Transonic Facility (NTF). This report describes the development of the model protection and shutdown system (MPSS). A technical description of the system is given, along with discussions of its operation and capabilities. Applications of the system to vibration problems are presented to demonstrate its capabilities, typical applications, versatility, and the research return on investment derived from the system to date. The system was custom designed for the NTF but can be used at other facilities or for other dynamic measurement/diagnostic applications. System capability has been demonstrated for forced-response testing and for characterizing and quantifying bias errors for onboard inertial model attitude measurement devices. Potential commercial uses of the system are described. The system is installed in the NTF control room and has been used successfully for monitoring, recording, and analyzing the dynamic response of several model systems tested in the NTF.
A modeling framework for exposing risks in complex systems.
Sharit, J
2000-08-01
This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.
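Perrow's two-dimensional theory places systems on axes of interactive complexity and coupling; the cross-linked quadrant a system falls into drives the risk picture. A minimal sketch of that classification step follows; the numeric scores, threshold, and the trauma-care example values are my own illustrative assumptions, not data from the article.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    interactive_complexity: float  # 0..1, linear -> interactively complex
    coupling: float                # 0..1, loosely -> tightly coupled

def perrow_quadrant(profile, threshold=0.5):
    """Place a system in one of Perrow's four quadrants based on
    where its two attribute scores fall relative to a threshold."""
    complexity = "complex" if profile.interactive_complexity >= threshold else "linear"
    coupling = "tight" if profile.coupling >= threshold else "loose"
    return f"{complexity}/{coupling}"

# Hypothetical scores for a trauma care system: many interacting
# actors and time-critical handoffs suggest the high-risk quadrant.
trauma_care = SystemProfile("trauma care system", 0.8, 0.7)
print(perrow_quadrant(trauma_care))  # complex/tight
```

In Perrow's framework the complex/tight quadrant is the one where "normal accidents" are hardest to prevent, which is why cross-linking it with multiple system perspectives is useful for exposing risks.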
Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...
Exploring Non-Thermal Radiofrequency Bioeffects for Novel Military Applications
2006-11-30
catecholamine release, using cultured adrenal chromaffin cells as an in vitro model system, and on skeletal muscle contraction, using intact skeletal...characterization and construction of a waveguide-based exposure system for monitoring skeletal muscle contraction during exposure to 0.75-1 GHz RF
Concept and development of an orthotropic FE model of the proximal femur.
Wirtz, Dieter Christian; Pandorf, Thomas; Portheine, Frank; Radermacher, Klaus; Schiffers, Norbert; Prescher, Andreas; Weichert, Dieter; Niethard, Fritz Uwe
2003-02-01
In contrast to the many isotropic finite-element (FE) models of the femur in the literature, the object of our study was to develop an orthotropic FE "model femur" to realistically simulate three-dimensional bone remodelling. The three-dimensional geometry of the proximal femur was reconstructed from CT scans of a pair of cadaveric femurs taken at equal distances of 2 mm. These three-dimensional CT models were implemented into an FE simulation tool. Well-known "density-determined" bony material properties (Young's modulus; Poisson's ratio; ultimate strength in pressure, tension, and torsion; shear modulus) were assigned to each FE of the same "CT-density-characterized" volumetric group. In order to fix the principal directions of stiffness in FE areas with the same "density characterization", the cadaveric femurs were cut into 2 mm slices in the frontal (left femur) and sagittal (right femur) planes. Each femoral slice was scanned into a computer-based image processing system. On these images, the principal directions of stiffness of cancellous and cortical bone were determined manually using the orientation of the trabecular structures and the Haversian system. Finally, these geometric data were matched with the "CT-density-characterized" three-dimensional femur model. In addition, the time- and density-dependent adaptive behaviour of bone remodelling was taken into account by implementation of Carter's criterion. In the constructed "model femur", each FE is characterized by the principal directions of stiffness and the "CT-density-determined" material properties of cortical and cancellous bone. Thus, on the basis of anatomic data, a three-dimensional FE simulation reference model of the proximal femur was realized considering orthotropic conditions of bone behaviour.
With the orthotropic "model femur", the fundamental basis has been formed for realistic simulations of the dynamic processes of bone remodelling under different loading conditions or operative procedures (osteotomies, total hip replacements, etc.).
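The "density-determined" assignment step can be sketched as a density-to-stiffness power law of the Carter-Hayes type, E = c·ρ^n. The coefficient, exponent, and density values below are illustrative placeholders, not the calibration actually used in this femur model.

```python
def youngs_modulus_from_density(rho_gcc, c=3790.0, exponent=3.0):
    """Carter-Hayes-style power law: Young's modulus (MPa) as a
    function of apparent bone density (g/cm^3). Coefficients here
    are illustrative, not those from the paper."""
    return c * rho_gcc ** exponent

# Assign each finite element a modulus from its CT-derived density,
# spanning a cancellous-to-cortical range.
element_densities = [0.3, 1.0, 1.8]
moduli = [youngs_modulus_from_density(rho) for rho in element_densities]
```

In the actual model each element additionally carries principal stiffness directions from the slice images, so the scalar modulus above would be one entry of an orthotropic stiffness tensor.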
Shear rheological characterization of gel healing response and construction of rheo-PIV system
NASA Astrophysics Data System (ADS)
Bawiskar, Abhishek D.
Thermo-reversible gels are solvent-filled 3D networks of polymer chains interconnected by physical (transient) crosslinks. On applying a high shear stress, the crosslinks are broken and these gels show a typical stress-strain behavior due to cohesive fracture of the gel. When heated above a critical temperature and cooled back to room temperature, all the crosslinks are re-formed. Interestingly, partial to full recovery of broken crosslinks is also observed by simply letting the gel stand at room temperature. In this study, the fracture and healing behavior of a model acrylic triblock copolymer gel has been characterized by shear rheometry. A mathematical model has also been proposed to better understand the mechanics at the molecular level and predict the healing time of a system. A rheo-PIV system was built as part of the project, to observe and confirm the bulk healing process in situ. Spontaneous self-healing behavior has immense potential in controlled drug delivery systems, coatings, food and various other applications.
Bioinspired sensory systems for local flow characterization
NASA Astrophysics Data System (ADS)
Colvert, Brendan; Chen, Kevin; Kanso, Eva
2016-11-01
Empirical evidence suggests that many aquatic organisms sense differential hydrodynamic signals.This sensory information is decoded to extract relevant flow properties. This task is challenging because it relies on local and partial measurements, whereas classical flow characterization methods depend on an external observer to reconstruct global flow fields. Here, we introduce a mathematical model in which a bioinspired sensory array measuring differences in local flow velocities characterizes the flow type and intensity. We linearize the flow field around the sensory array and express the velocity gradient tensor in terms of frame-independent parameters. We develop decoding algorithms that allow the sensory system to characterize the local flow and discuss the conditions under which this is possible. We apply this framework to the canonical problem of a circular cylinder in uniform flow, finding excellent agreement between sensed and actual properties. Our results imply that combining suitable velocity sensors with physics-based methods for decoding sensory measurements leads to a powerful approach for understanding and developing underwater sensory systems.
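The linearized-flow idea can be sketched concretely: in 2D, the velocity gradient tensor splits into frame-independent descriptors (divergence, vorticity, strain intensity) whose relative magnitudes classify the local flow type. This is a generic decomposition, not the paper's specific decoding algorithm.

```python
import math

def characterize_local_flow(grad):
    """Decompose a 2D velocity gradient tensor
    [[du/dx, du/dy], [dv/dx, dv/dy]] into frame-independent
    descriptors and a coarse flow-type label."""
    dudx, dudy = grad[0]
    dvdx, dvdy = grad[1]
    divergence = dudx + dvdy            # volumetric expansion rate
    vorticity = dvdx - dudy             # rigid-body rotation rate
    s_normal = dudx - dvdy              # normal strain difference
    s_shear = dudy + dvdx               # shear strain rate
    strain_intensity = math.hypot(s_normal, s_shear)
    if strain_intensity > abs(vorticity):
        flow_type = "strain-dominated"
    elif strain_intensity < abs(vorticity):
        flow_type = "rotation-dominated"
    else:
        flow_type = "shear"             # balanced strain and rotation
    return divergence, vorticity, strain_intensity, flow_type

# Pure rotation: u = -y, v = x  ->  grad = [[0, -1], [1, 0]]
div, vort, strain, kind = characterize_local_flow([[0.0, -1.0], [1.0, 0.0]])
```

A sensory array measuring velocity differences between neighboring sensors estimates exactly these gradient components, which is what makes local flow-type classification possible without a global field reconstruction.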
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface provides ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
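The "seeding faults into memory" strategy amounts to corrupting stored state and checking whether the system's error detection fires, after first recording fault-free baseline behavior. A toy sketch of that workflow, with a hypothetical XOR-checksum detector (my example, not FIAT's actual mechanism):

```python
def inject_bit_flip(value, bit):
    """Emulate a memory fault by flipping one bit of an integer word."""
    return value ^ (1 << bit)

def checksum(words):
    """Toy error-detection mechanism: XOR checksum over memory words."""
    total = 0
    for word in words:
        total ^= word
    return total

# Baseline phase: record fault-free behavior.
memory = [0x10, 0x2F, 0x33]
baseline = checksum(memory)

# Fault phase: seed a single-bit fault, then check the detector fires.
memory[1] = inject_bit_flip(memory[1], 3)
fault_detected = checksum(memory) != baseline
```

Repeating this over many fault locations and fault models yields the kind of coverage statistics a FIAT-style campaign automates.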
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, Anja, E-mail: hartmann@ipk-gatersleben.de; Schreiber, Falk; Martin-Luther-University Halle-Wittenberg, Halle
The characterization of biological systems with respect to their behavior and functionality, based on versatile biochemical interactions, is a major challenge. To understand these complex mechanisms at the systems level, modeling approaches are investigated. Different modeling formalisms allow metabolic models to be analyzed depending on the question to be solved, the biochemical knowledge, and the availability of experimental data. Here, we describe a method for an integrative analysis of the structure and dynamics represented by qualitative and quantitative metabolic models. Using various formalisms, the metabolic model is analyzed from different perspectives. Determined structural and dynamic properties are visualized in the context of the metabolic model. Interaction techniques allow exploration and visual analysis, thereby leading to a broader understanding of the behavior and functionality of the underlying biological system. The System Biology Metabolic Model Framework (SBM²-Framework) implements the developed method and, as an example, is applied to the integrative analysis of the crop plant potato.
Characterization of the ITER model negative ion source during long pulse operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemsworth, R.S.; Boilson, D.; Crowley, B.
2006-03-15
It is foreseen to operate the neutral beam system of the International Thermonuclear Experimental Reactor (ITER) for pulse lengths extending up to 1 h. The performance of the KAMABOKO III negative ion source, which is a model of the source designed for ITER, is being studied on the MANTIS test bed at Cadarache. This article reports the latest results from the characterization of the ion source, in particular electron energy distribution measurements and the comparison between positive ion and negative ion extraction from the source.
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of this model, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
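Dempster's rule combines two basic probability assignments by multiplying masses of intersecting hypothesis sets and renormalizing by the non-conflicting mass. A self-contained sketch (the failure hypotheses and mass values are illustrative, not from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, each a dict mapping frozenset hypotheses to mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {h: w / norm for h, w in combined.items()}

# Two diagnostic models give partially conflicting evidence about
# which component ('pump' or 'valve') has failed.
m1 = {frozenset({"pump"}): 0.6, frozenset({"pump", "valve"}): 0.4}
m2 = {frozenset({"valve"}): 0.5, frozenset({"pump", "valve"}): 0.5}
fused = dempster_combine(m1, m2)
```

Note how the rule lets a model express ignorance by assigning mass to the whole set {pump, valve}, which is what makes it suitable for integrating incomplete inferences from parallel parity-equation models.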
Yin, Anyue; Yamada, Akihiro; Stam, Wiro B; van Hasselt, Johan G C; van der Graaf, Piet H
2018-06-02
Development of combination therapies has received significant interest in recent years. Previously, a two-receptor one-transducer (2R-1T) model was proposed to characterize drug interactions with two receptors that lead to the same phenotypic response through a common transducer pathway. We applied, for the first time, the 2R-1T model to characterize the interaction of noradrenaline and arginine-vasopressin on vasoconstriction, and performed inter-species scaling to humans using this mechanism-based model. Contractile data were obtained from in vitro rat small mesenteric arteries after exposure to single or combined challenges of noradrenaline and arginine-vasopressin, with or without pre-treatment with the irreversible α-adrenoceptor antagonist phenoxybenzamine. Data were analysed using the 2R-1T model to characterize the observed exposure-response relationships and the drug-drug interaction. The model was then scaled to humans by accounting for differences in receptor density. With receptor affinities set to literature values, the 2R-1T model satisfactorily characterized the interaction between noradrenaline and arginine-vasopressin in rat small mesenteric arteries (relative standard error ≤ 20%), as well as the effect of phenoxybenzamine. Furthermore, after scaling the model to human vascular tissue, the model also adequately predicted the interaction between both agents on human renal arteries. The 2R-1T model can thus be of relevance for quantitatively characterizing the interaction between two drugs that act via different receptors and a common transducer pathway. Its mechanistic properties are valuable for scaling the model across species, and this approach is therefore of significant value for rationally optimizing novel combination treatments.
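The 2R-1T structure can be sketched qualitatively: each agonist generates a stimulus through its own receptor, the stimuli add, and a shared saturating transducer maps total stimulus to the phenotypic response. The binding isotherms, hyperbolic transducer, and every parameter value below are simplifying assumptions for illustration only, not the published model or its estimates.

```python
def occupancy(conc, kd):
    """Fractional receptor occupancy from a simple binding isotherm."""
    return conc / (kd + conc)

def response_2r1t(conc_a, conc_b, kd_a, kd_b, eff_a, eff_b):
    """Toy two-receptor one-transducer (2R-1T) interaction: two
    receptor-specific stimuli add and pass through one shared
    hyperbolic transducer. All parameters are illustrative."""
    stimulus = eff_a * occupancy(conc_a, kd_a) + eff_b * occupancy(conc_b, kd_b)
    return stimulus / (1.0 + stimulus)

kwargs = dict(kd_a=0.5, kd_b=0.3, eff_a=4.0, eff_b=4.0)
single_a = response_2r1t(1.0, 0.0, **kwargs)  # agonist A alone
single_b = response_2r1t(0.0, 1.0, **kwargs)  # agonist B alone
combo = response_2r1t(1.0, 1.0, **kwargs)     # both together
```

Because the transducer saturates, the combined response exceeds either single-agent response but is less than their sum, the hallmark of interaction through a common downstream pathway.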
Dynamic response tests of inertial and optical wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.; Burner, A. W.; Tripp, J. S.; Tcheng, P.; Finley, T. D.; Popernack, T. G., Jr.
1995-01-01
Results are presented for an experimental study of the response of inertial and optical wind-tunnel model attitude measurement systems in a wind-off simulated dynamic environment. This study is part of an ongoing activity at the NASA Langley Research Center to develop high accuracy, advanced model attitude measurement systems that can be used in a dynamic wind-tunnel environment. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration which results in a model attitude measurement bias error. Significant bias errors in model attitude measurement were found for the measurement using the inertial device during wind-off dynamic testing of a model system. The amount of bias present during wind-tunnel tests will depend on the amplitudes of the model dynamic response and the modal characteristics of the model system. Correction models are presented that predict the vibration-induced bias errors to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment. The optical system results were uncorrupted by model vibration in the laboratory setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honarpour, M.; Szpakiewicz, M.; Sharma, B.
This report covers the development of a generic approach to reservoir characterization, the preliminary studies leading to the selection of an appropriate depositional system for detailed study, the application of outcrop studies to quantified reservoir characterization, and the construction of a quantified geological/engineering model used to screen the effects and scales of various geological heterogeneities within a reservoir. These heterogeneities result in large production/residual oil saturation contrasts over small distances. 36 refs., 124 figs., 38 tabs.
The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience
ERIC Educational Resources Information Center
Jones, Rachel; Brown, Dot
2011-01-01
Mentoring theory and practice has evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protege. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…
Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan
2017-01-01
Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.
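The core idea of hybrid forcing ensembles can be sketched in a few lines: an ensemble built by perturbing a single forcing product understates spread, while also sampling across several products widens the background-error estimate. The products, noise level, and member count below are illustrative assumptions, not the article's experimental setup.

```python
import random
import statistics

def forcing_ensemble(base_products, n_members, noise_std, seed=1):
    """Build a forcing ensemble. With one product the spread comes
    only from synthetic multiplicative perturbations; a hybrid
    ensemble also samples across products, widening the spread used
    to represent background model error."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        base = rng.choice(base_products)           # pick a product
        members.append(base * (1.0 + rng.gauss(0.0, noise_std)))
    return members

# Hypothetical precipitation forcings (mm/day) from one vs several products.
single = forcing_ensemble([5.0], 500, 0.1)
hybrid = forcing_ensemble([4.0, 5.0, 6.5], 500, 0.1)
spread_single = statistics.stdev(single)
spread_hybrid = statistics.stdev(hybrid)
```

In an ensemble Kalman-type assimilation, that larger background spread gives the snow-depth observations more influence where the forcing products genuinely disagree.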
Hawke, Christine G; Painter, Dorothy M; Kirwan, Paul D; Van Driel, Rosemary R; Baxter, Alan G
2003-01-01
Systemic lupus erythematosus (SLE) is a chronic systemic autoimmune disease characterized by the production of antibodies directed against self antigens. Immune complex glomerulonephritis (GN) is one of the most serious complications of this disorder and can lead to potentially fatal renal failure. The aetiology of SLE is complex and multifactorial, characterized by interacting environmental and genetic factors. Here we examine the nature of the renal pathology in mycobacteria-treated non-obese diabetic (NOD) mice, in order to assess its suitability as a model for studying the aetiopathogenesis of, and possible treatment options for, lupus nephritis (LN) in humans. Both global and segmental proliferative lesions, characterized by increased mesangial matrix and cellularity, were demonstrated on light microscopy, and lesions varied in severity from very mild mesangiopathic GN through to obliteration of capillary lumina and glomerular sclerosis. Mixed isotype immune complexes (IC) consisting of immunoglobulin G (IgG), IgM, IgA and complement C3c were detected using direct immunofluorescence. They were deposited in multiple sites within the glomeruli, as confirmed by electron microscopy. The GN seen in mycobacteria-treated NOD mice therefore strongly resembles the pathology seen in human LN, including mesangiopathic, mesangiocapillary and membranous subclasses of LN. The development of spontaneous mixed isotype IC in the glomeruli of some senescent NOD mice suggests that mycobacterial exposure is accelerating, rather than inducing, the development of GN in this model. PMID:12519305
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
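The selection principle behind such a design sequence can be sketched crudely: among candidate measurements, pick the one where the ensemble of parameter hypotheses disagrees most about the predicted response. This toy version (my illustration; the paper uses sparse grids and scenario trees, not this brute-force sampling) uses an exponential-decay model with an uncertain rate constant.

```python
import math
import random

def pick_design_point(candidate_times, parameter_samples, model):
    """Choose the measurement time at which the ensemble of parameter
    hypotheses disagrees most about the predicted response -- a crude
    stand-in for selecting the most informative next experiment."""
    def spread(t):
        ys = [model(p, t) for p in parameter_samples]
        return max(ys) - min(ys)
    return max(candidate_times, key=spread)

def decay(k, t):
    """Toy dynamical response with one uncertain parameter k."""
    return math.exp(-k * t)

# Bounded uncertain parameter space, no initial point estimate needed.
rng = random.Random(3)
k_samples = [rng.uniform(0.1, 1.0) for _ in range(200)]
best_t = pick_design_point([0.5, 1.0, 2.0, 4.0, 8.0], k_samples, decay)
```

Very early times are uninformative because all hypotheses predict values near 1, and very late times because all predict values near 0; the informative measurement sits in between, which is the intuition the sparse-grid machinery scales to 19-dimensional problems.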
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
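The claim that different SA approaches can reach conflicting conclusions is easy to demonstrate with a toy response surface containing only an interaction term: local one-at-a-time derivatives at a nominal point report zero sensitivity, while a global, variance-based view shows the response clearly varies with the factors. This example is my illustration, not one from the presentation.

```python
import random

def f(x1, x2):
    # Toy model response: a pure interaction between two factors.
    return x1 * x2

def local_oat_sensitivity(x0, h=1e-6):
    """One-factor-at-a-time finite-difference derivatives at a
    nominal point -- a strictly local view of sensitivity."""
    x1, x2 = x0
    d1 = (f(x1 + h, x2) - f(x1, x2)) / h
    d2 = (f(x1, x2 + h) - f(x1, x2)) / h
    return d1, d2

def sampled_output_variance(n=20000, seed=2):
    """A global view: variance of the response sampled over the
    whole factor space [-1, 1]^2 (analytically 1/9 here)."""
    rng = random.Random(seed)
    ys = [f(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    mean = sum(ys) / n
    return sum((y - mean) ** 2 for y in ys) / n

local = local_oat_sensitivity((0.0, 0.0))  # both derivatives vanish here
global_var = sampled_output_variance()     # yet the response varies globally
```

A Sobol-type decomposition of this response would attribute all of that variance to the x1-x2 interaction, a property the local derivatives are structurally blind to.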
A quantitative model of application slow-down in multi-resource shared systems
Lim, Seung-Hwan; Kim, Youngjae
2016-12-26
Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for the dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure used to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
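The quadratic structure of the model can be sketched directly: each job carries a loading vector over resources, and its dilation grows with pairwise products of its own loads and those of co-scheduled jobs, weighted by an interaction matrix. The coefficients and load values below are illustrative assumptions, not the paper's fitted parameters.

```python
def dilation_factor(load_self, load_others, interaction):
    """Quadratic D-factor sketch: a job's slow-down grows with the
    pairwise products of its resource loads and those of each
    co-runner, weighted by a resource interaction matrix."""
    d = 1.0  # running alone: no dilation
    for other in load_others:
        for i, li in enumerate(load_self):
            for j, lj in enumerate(other):
                d += interaction[i][j] * li * lj
    return d

# Two resources (say CPU, disk); the penalty is strongest when both
# jobs stress the same resource. Coefficients are illustrative.
interaction = [[0.5, 0.1],
               [0.1, 0.8]]
cpu_job = [0.9, 0.1]
disk_job = [0.1, 0.9]

alone = dilation_factor(cpu_job, [], interaction)
mixed = dilation_factor(cpu_job, [disk_job], interaction)
contended = dilation_factor(cpu_job, [cpu_job], interaction)
```

The ordering alone < mixed < contended reproduces the intuition the model quantifies: co-scheduling jobs with complementary loading vectors dilates each job far less than co-scheduling jobs that compete for the same resource.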
Dynamic Characterization of an Inflatable Concentrator for Solar Thermal Propulsion
NASA Technical Reports Server (NTRS)
Leigh, Larry; Hamidzadeh, Hamid; Tinker, Michael L.; Rodriguez, Pedro I. (Technical Monitor)
2001-01-01
An inflatable structural system that is a technology demonstrator for solar thermal propulsion and other applications is characterized for structural dynamic behavior both experimentally and computationally. The inflatable structure is a pressurized assembly developed for use in orbit to support a Fresnel lens or inflatable lenticular element for focusing sunlight into a solar thermal rocket engine. When the engine temperature reaches a pre-set level, the propellant is injected into the engine, absorbs heat from an exchanger, and is expanded through the nozzle to produce thrust. The inflatable structure is a passively adaptive system in that a regulator and relief valve are utilized to maintain pressure within design limits during the full range of orbital conditions. Modeling and test activities are complicated by the fact that the polyimide film material used for construction of the inflatable is nonlinear, with modulus varying as a function of frequency, temperature, and level of excitation. Modal vibration testing and finite element modeling are described in detail in this paper. The test database is used for validation and modification of the model. This work is highly significant because of the current interest in inflatable structures for space application, and because of the difficulty in accurately modeling such systems.
Toward improved calibration of watershed models: multisite many objective measures of information
USDA-ARS?s Scientific Manuscript database
This paper presents a computational framework for incorporation of disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. The framework consists of four components: (i) an a-priori characterization of system behavior; (ii) a formal an...
Identifying the predominant chemical reductants and pathways for electron transfer in anaerobic systems is paramount to the development of environmental fate models that incorporate pathways for abiotic reductive transformations. Currently, such models do not exist. In this chapt...
Characterization of an in vivo diode dosimetry system for clinical use
Huang, Kai; Bice, William S.; Hidalgo‐Salvatierra, Oscar
2003-01-01
An in vivo dosimetry system that uses p‐type semiconductor diodes with buildup caps was characterized for clinical use on accelerators ranging in energy from 4 to 18 MV. The dose-per-pulse dependence was investigated by altering the source‐surface distance, field size, and wedge for photons. The off‐axis correction and the effect of changing the repetition rate were also investigated. A model was developed to fit the measured two‐dimensional diode correction factors. PACS number(s): 87.66.–a, 87.52.–g PMID:12777148
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.
2009-01-01
Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.
Vega-Villa, K; Pluta, R; Lonser, R; Woo, S
2013-01-01
A long-term sodium nitrite infusion is intended for the treatment of vascular disorders. Phase I data demonstrated a significant nonlinear dose-exposure-toxicity relationship within the therapeutic dosage range. This study aims to develop a quantitative systems pharmacology model characterizing the nitric oxide (NO) metabolome and methemoglobin after sodium nitrite infusion. Nitrite, nitrate, and methemoglobin concentration–time profiles in plasma and RBC were used for model development. Following intravenous sodium nitrite administration, nitrite undergoes conversion in RBC and tissue. Nitrite sequestered by RBC interacts more extensively with deoxyhemoglobin, which contributes greatly to methemoglobin formation. Methemoglobin is formed less than proportionally at higher nitrite doses, characterized in the model by facilitated methemoglobin removal. Nitrate-to-nitrite reduction occurs in tissue and via entero-salivary recirculation. The less-than-proportional increase in nitrite and nitrate exposure at higher nitrite doses is modeled with a dose-dependent increase in clearance. The model provides direct insight into NO metabolome disposition and is valuable for nitrite dose selection in clinical trials. PMID:23903463
Thermal barrier coating life-prediction model development. Annual report no. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strangman, T. E.; Neumann, J.; Liu, A.
1986-10-01
The program focuses on predicting the lives of two types of strain-tolerant and oxidation-resistant thermal barrier coating (TBC) systems that are produced by commercial coating suppliers to the gas turbine industry. The plasma-sprayed TBC system, composed of a low-pressure plasma-spray (LPPS) or argon-shrouded plasma-spray (ASPS) oxidation-resistant NiCrAlY or CoNiCrAlY bond coating and an air-plasma-sprayed yttria partially stabilized zirconia insulative layer, is applied by Chromalloy, Klock, and Union Carbide. The second type of TBC is applied by the electron beam-physical vapor deposition (EB-PVD) process by Temescal. The second year of the program focused on specimen procurement, TBC system characterization, nondestructive evaluation methods, life prediction model development, and TFE731 engine testing of thermal barrier coated blades. Materials testing is approaching completion. Thermomechanical characterization of the TBC systems, including toughness and spalling strain tests, was completed. Thermochemical testing is approximately two-thirds complete. Preliminary materials life models for the bond coating oxidation and zirconia sintering failure modes were developed. Integration of these life models with airfoil component analysis methods is in progress. Testing of high pressure turbine blades coated with the program TBC systems is in progress in a TFE731 turbofan engine. Eddy current technology feasibility was established with respect to nondestructively measuring the zirconia layer thickness of a TBC system.
Enhancing metaproteomics-The value of models and defined environmental microbial systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbst, Florian-Alexander; Lünsmann, Vanessa; Kjeldal, Henrik
2016-01-21
Metaproteomics, the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time, has provided new features to study complex microbial communities in order to unravel these black boxes. New technical challenges arose that were not an issue for classical proteome analytics and that can be tackled by the application of different model systems. Here, we review different current and future model systems for metaproteome analysis. Following a short introduction to microbial communities and metaproteomics, we introduce model systems for clinical and biotechnological research questions, including acid mine drainage, anaerobic digesters, and activated sludge. Model systems are useful to evaluate the challenges encountered within (but not limited to) metaproteomics, including species complexity and coverage, biomass availability, and reliable protein extraction. Moreover, the implementation of model systems can be considered a step forward to better understand microbial community responses and the ecological functions of single member organisms. In the future, improvements are necessary to fully explore complex environmental systems by metaproteomics.
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution, from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D™). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
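The transmission-distribution exchange pattern described above can be illustrated with a toy time-stepped co-simulation. Everything here is an invented placeholder (the `Feeder` and `BulkSystem` classes, the linear clearing rule, and all numbers); IGMS itself couples FESTIV, MATPOWER, and GridLAB-D instances over MPI rather than anything this simple.

```python
# Toy time-stepped co-simulation in the spirit of transmission-distribution
# coupling: a bulk-system model broadcasts a price signal, each distribution
# feeder model responds with its aggregate load, and the bulk model clears
# again. All classes, rules, and numbers are illustrative placeholders.

class Feeder:
    def __init__(self, base_load, elasticity):
        self.base_load = base_load
        self.elasticity = elasticity  # load reduction per unit price

    def step(self, price):
        # price-responsive load: higher price -> lower demand (floored at 0)
        return max(0.0, self.base_load - self.elasticity * price)

class BulkSystem:
    def clear(self, total_load):
        # stand-in market clearing: price proportional to total system load
        return 0.05 * total_load

def cosimulate(feeders, bulk, steps=10):
    price = 0.0
    history = []
    for _ in range(steps):
        loads = [f.step(price) for f in feeders]   # distribution side
        price = bulk.clear(sum(loads))             # transmission side
        history.append((price, sum(loads)))
    return history

feeders = [Feeder(100.0, 5.0), Feeder(80.0, 5.0)]
hist = cosimulate(feeders, BulkSystem())
print(hist[-1])
```

With these numbers the exchange converges toward the fixed point price = 6, total load = 120, illustrating why iteration (or tighter coupling) is needed when the two domains feed back on each other.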
Novel procedure for characterizing nonlinear systems with memory: 2017 update
NASA Astrophysics Data System (ADS)
Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.
2017-05-01
The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. In this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD), in order to realize practical nonlinear solutions of scientific and engineering interest.
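The least-squares kernel fit between a measured excitation x(t) and response z(t) can be sketched with a toy discrete-time Volterra expansion. This is a naive stand-in for the kernel structure the NWV algorithm estimates far more efficiently: with only two memory taps and second order the full design matrix is still small, which is exactly the regime where the curse of dimensionality has not yet bitten.

```python
import numpy as np

def volterra_design(x, memory=2):
    """Columns: constant, lagged x, and pairwise products of lags
    (a discrete 2nd-order Volterra expansion; illustrative only)."""
    n = len(x)
    lags = np.zeros((n, memory))
    for k in range(memory):
        lags[k:, k] = x[:n - k]
    cols = [np.ones(n)] + [lags[:, k] for k in range(memory)]
    for i in range(memory):
        for j in range(i, memory):
            cols.append(lags[:, i] * lags[:, j])
    return np.column_stack(cols)

def fit_kernels(x, z, memory=2):
    """Least-squares fit of 0th/1st/2nd-order kernel coefficients so the
    modeled response y(t) = H @ theta best matches the measured z(t)."""
    H = volterra_design(x, memory)
    theta, *_ = np.linalg.lstsq(H, z, rcond=None)
    return theta, H @ theta

rng = np.random.default_rng(1)
x = rng.standard_normal(500)                      # excitation waveform
x_prev = np.r_[0, x[:-1]]                         # one-sample memory
z = 0.5 + 1.2 * x - 0.4 * x_prev + 0.8 * x * x_prev   # synthetic "channel"
theta, y = fit_kernels(x, z)
print(np.round(theta, 3))
```

Because the synthetic channel lies exactly in the span of the design columns, the fit recovers the coefficients 0.5, 1.2, -0.4, 0, 0.8, 0 to machine precision.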
Modulation transfer function cascade model for a sampled IR imaging system.
de Luca, L; Cardone, G
1991-05-01
The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications where high spatial frequencies in the signal that describes the thermal image are present. The need to characterize more deeply the system spatial resolution has led to the formulation of a cascade model for the evaluation of the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line scan imaging system, which is the case of a typical IRSR, a rule of thumb that states whether the combined sampling-imaging system is either imaging-dependent or sampling-dependent is proposed. The model is tested by comparing it with other noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.
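The multiplicative cascade at the heart of such a model can be sketched as follows. The component MTFs here (a Gaussian stand-in for the scanner optics and a sinc for a rectangular detector footprint) and all parameter values are illustrative assumptions; the paper's model additionally yields the aliasing band introduced by sampling, which this sketch omits.

```python
import numpy as np

def mtf_optics(f, f0):
    # Gaussian blur stand-in for the scanner optics (illustrative choice)
    return np.exp(-(f / f0) ** 2)

def mtf_detector(f, w):
    # rectangular detector footprint of width w -> |sinc| response
    return np.abs(np.sinc(w * f))

def mtf_system(f, f0, w):
    # cascade model: MTFs of linear, shift-invariant stages multiply
    return mtf_optics(f, f0) * mtf_detector(f, w)

pitch = 0.05                    # sample spacing (mm) -> Nyquist = 10 cyc/mm
f_nyq = 1.0 / (2 * pitch)
f = np.linspace(0, 2 * f_nyq, 201)
sys = mtf_system(f, f0=15.0, w=0.04)
print(float(sys[0]), float(mtf_system(np.array([f_nyq]), 15.0, 0.04)[0]))
```

Comparing the system response at the Nyquist frequency against the response beyond it is what decides, in the paper's terms, whether the combined system is imaging-dependent or sampling-dependent.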
NASA Astrophysics Data System (ADS)
Shirwaiker, Rohan A.
There have been growing concerns in the global healthcare system about the eradication of pathogens in hospitals and other health-critical environments. The problem has been aggravated by the overuse of antibiotics and antimicrobial agents leading to the emergence of antibiotic-resistant superbugs such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE) which are difficult to kill. Lower immunity of sick patients coupled with the escalating concurrent problem of antibiotic-resistant pathogens has resulted in increasing incidences of hospital acquired (nosocomial) infections. There is an immediate need to control the transmission of such infections, primarily in healthcare environments, by creating touch-contact and work surfaces (e.g., door knobs, push plates, countertops) that utilize alternative antibacterial materials like the heavy metal, silver. Recent research has shown that it is silver in its ionic (Ag+) and not elemental form that is antibacterial. Thus, silver-based antibacterial surfaces have to release silver ions directly into the pathogenic environment (generally, an aqueous medium) in order to be effective. This dissertation presents the study and analysis of a new silver-based surface system that utilizes low intensity direct electric current (LIDC) for generation of silver ions to primarily inhibit indirect contact transmission of infections. The broader objective of this research is to understand the design and characterization of the electrically activated silver ion-based antibacterial surface system. The specific objectives of this dissertation include: (1) Developing a comprehensive system design, and identifying and studying its critical design parameters and functional mechanisms. (2) Evaluating effects of the critical design parameters on the antibacterial efficacy of the proposed surface system. (3) Developing a response surface model for the surface system performance. 
These objectives are achieved by formulating the system design, fabricating prototypes with appropriate design parameters, evaluating the prototypes using various physical and electrical characterization techniques, and characterizing the antibacterial efficacy of the prototypes using statistical experiments. The major contributions of this dissertation include: (1) Design of a systems focused approach that quantifies the potential effectiveness of silver ions under various configurations of the surface system design. (2) Development of meso and micro-scale fabrication methodologies for prototype fabrication. (3) Development of microbiological testing protocols utilizing variance reduction techniques to test the antibacterial efficacy of system prototypes. (4) Development of empirical models for the surface system using factorial design of experiments (DOE). Basic results from the research demonstrate significant antibacterial efficacy of the surface system against four dangerous bacteria including Staph aureus, Escherichia coli, Pseudomonas aeruginosa, and Enterococcus faecalis which are together responsible for more than 80% of nosocomial infections. Results of the DOE characterization study indicate the statistically significant contributions of three system parameters -- size of features, electric current, and type of bacteria -- to the antibacterial performance of the system. This dissertation synergistically utilizes knowledge and principles from three broader areas of research -- industrial engineering, materials science and microbiology -- to model, design, fabricate and characterize an electrically activated silver-ion based antibacterial surface system with practical applications in improving human health and healthcare systems. The research is aimed at promoting novel integrative research and development of technologies utilizing antibacterial properties of silver and other heavy metals.
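The factorial DOE analysis mentioned above can be sketched with a 2^3 full factorial in coded units. The three factors stand in for feature size, electric current, and bacteria type, and the response values are invented for illustration, not measured data from the dissertation.

```python
import numpy as np
from itertools import product

# 2^3 full factorial design matrix in coded units (-1/+1).
# Factor columns (illustrative stand-ins): feature size, current, bacteria.
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# Invented log-reduction responses for the 8 runs (not measured data)
y = np.array([1.1, 1.9, 2.8, 3.9, 1.0, 2.1, 3.1, 4.0])

def main_effects(X, y):
    # main effect of factor j = mean response at +1 minus mean at -1
    return np.array([y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
                     for j in range(X.shape[1])])

print(np.round(main_effects(X, y), 3))
```

With these invented numbers the second and third factors dominate, which mirrors the kind of conclusion a significance test on factorial effects supports (the dissertation found feature size, current, and bacteria type all statistically significant).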
On the modeling of epidemics under the influence of risk perception
NASA Astrophysics Data System (ADS)
de Lillo, S.; Fioriti, G.; Prioriello, M. L.
An epidemic spreading model is presented in the framework of the kinetic theory of active particles. The model is characterized by the influence of risk perception which can reduce the diffusion of infection. The evolution of the system is modeled through nonlinear interactions, whose output is described by stochastic games. The results of numerical simulations are discussed for different initial conditions.
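A drastically simplified way to see how risk perception damps infection spread is a compartmental toy model in which the contact rate decays with prevalence. The exponential damping term and all parameter values below are illustrative assumptions; they are not the kinetic active-particle formulation with stochastic games used in the paper.

```python
import numpy as np

def sir_risk(beta=0.4, gamma=0.1, k=10.0, i0=0.01, days=200, dt=0.1):
    """SIR toy model where perceived risk (growing with prevalence i)
    damps the contact rate: beta_eff = beta * exp(-k * i).
    k = 0 recovers the standard risk-blind SIR model."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        beta_eff = beta * np.exp(-k * i)   # risk perception reduces contacts
        new_inf = beta_eff * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

peak_aware, final_aware = sir_risk(k=10.0)   # population perceives risk
peak_naive, final_naive = sir_risk(k=0.0)    # no risk perception
print(peak_aware, peak_naive)
```

Comparing the two runs shows the qualitative effect reported in such models: risk perception lowers the epidemic peak relative to the risk-blind baseline.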
NASA Astrophysics Data System (ADS)
Sivapalan, M.; Elshafei, Y.; Srinivasan, V.
2014-12-01
A challenging puzzle in research on sustainable water management in the Anthropocene is why some societies successfully recover from "ecological destruction" to transition to "successful adaptation" over decadal timescales, while others fail. We present a conceptual modeling framework to understand and characterize these transitions. In this way, we aim to capture the potential drivers of the desired shift towards achieving sustainability of socio-hydrological systems. This is done through a synthesis of detailed socio-hydrological analyses of four river basins on three continents, carried out using different quantitative socio-hydrologic models: the Murrumbidgee River Basin in eastern Australia, the Lake Toolibin Catchment in Western Australia, the Tarim River Basin in western China, and the Kissimmee River Basin in the southeastern United States. The case studies are analysed using either place-based models designed specifically to mimic observed long-term socio-hydrologic trends, or generic conceptual models with foundations in diverse strands of literature including sustainability science and resilience theory. A comparative analysis of the four case studies reveals a commonality in the building blocks employed to model these socio-hydrologic systems, including water balance, economic, environmental, and human-feedback components. Each model reveals varying interpretations of a common organising principle that could explain the shift between productive (socio-economic) and restorative (environmental) forces that was evident in each of these systems observed over a long time frame. The emergent principle is related to the essential drivers of the human feedback component and rests with a general formulation of human well-being, as reflected by both economic and environmental well-being. 
It is envisaged that the understanding of the system drivers gained from such a comparative study would enable more targeted water management strategies that can be administered in developing basins to achieve overall sustainability.
ERIC Educational Resources Information Center
Klishas, Andrey A.
2016-01-01
The paper explores the impact of the continental system exerted on the constitutional and political evolution of both the United States and individual states and tries to characterize the development of constitutional review phenomenon within the framework of the continental legal system and the Anglo-Saxon legal system. The research stands on the…
Submicron Systems Architecture Project
1981-11-01
This project is concerned with the architecture, design, and testing of VLSI Systems. The principal activities in this report period include: The Tree Machine; COPE, The Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock Free Resource Contention; Concurrency Algebra; Language Design and Logic for Program Verification.
Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu Zhengcheng; Wen Xiaogang
2009-10-15
We study the renormalization group flow of the Lagrangian for statistical and quantum systems by representing their path integral in terms of a tensor network. Using a tensor-entanglement-filtering renormalization approach that removes local entanglement and produces a coarse-grained lattice, we show that the resulting renormalization flow of the tensors in the tensor network has a nice fixed-point structure. The isolated fixed-point tensors T_inv plus the symmetry group G_sym of the tensors (i.e., the symmetry group of the Lagrangian) characterize various phases of the system. Such a characterization can describe both the symmetry breaking phases and topological phases, as illustrated by the two-dimensional (2D) statistical Ising model, the 2D statistical loop-gas model, and 1+1D quantum spin-1/2 and spin-1 models. In particular, using such a (G_sym, T_inv) characterization, we show that the Haldane phase for a spin-1 chain is a phase protected by the time-reversal, parity, and translation symmetries. Thus the Haldane phase is a symmetry-protected topological phase. The (G_sym, T_inv) characterization is more general than the characterizations based on the boundary spins and string order parameters. The tensor renormalization approach also allows us to study continuous phase transitions between symmetry breaking phases and/or topological phases. The scaling dimensions and the central charges for the critical points that describe those continuous phase transitions can be calculated from the fixed-point tensors at those critical points.
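The starting point of such an approach, representing a partition function as a tensor network, can be sketched for the 2D Ising model. The sketch below builds the standard site tensor by splitting each bond Boltzmann weight, contracts a 2x2 periodic patch exactly with einsum, and checks the result against brute-force spin enumeration; the entanglement-filtering coarse-graining step itself is beyond this illustration.

```python
import numpy as np
from itertools import product

beta = 0.4

# Bond Boltzmann weight W[s, s'] = exp(beta * s * s'), spins in {+1, -1}
W = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

# Factor W = M @ M.T (W is positive definite for beta > 0), so each bond
# weight can be split between the two site tensors it connects
evals, evecs = np.linalg.eigh(W)
M = evecs @ np.diag(np.sqrt(evals))

# Site tensor T[l, r, u, d] = sum_s M[s, l] M[s, r] M[s, u] M[s, d]
T = np.einsum('sl,sr,su,sd->lrud', M, M, M, M)

# Contract a 2x2 patch with periodic boundaries (8 bonds, each pair of
# neighboring sites connected twice on the small torus)
Z_tn = np.einsum('yxnm,xyts,qpmn,pqst->', T, T, T, T)

# Brute force over the 16 spin configurations of the same lattice
Z_bf = 0.0
for spins in product([1, -1], repeat=4):
    s = np.array(spins).reshape(2, 2)
    E = sum(s[i, j] * (s[i, (j + 1) % 2] + s[(i + 1) % 2, j])
            for i in range(2) for j in range(2))
    Z_bf += np.exp(beta * E)
print(Z_tn, Z_bf)
```

The two numbers agree to machine precision, confirming that contracting the tensor network reproduces the partition function; renormalization schemes then coarse-grain T instead of enumerating spins.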
NASA Astrophysics Data System (ADS)
Vardhan, Shreya; De Tomasi, Giuseppe; Heyl, Markus; Heller, Eric J.; Pollmann, Frank
2017-07-01
We study the effects of local perturbations on the dynamics of disordered fermionic systems in order to characterize time irreversibility. We focus on three different systems: the noninteracting Anderson and Aubry-André-Harper (AAH) models and the interacting spinless disordered t-V chain. First, we consider the effect on the full many-body wave functions by measuring the Loschmidt echo (LE). We show that in the extended or ergodic phase the LE decays exponentially fast with time, while in the localized phase the decay is algebraic. We demonstrate that the exponent of the decay of the LE in the localized phase diverges proportionally to the single-particle localization length as we approach the metal-insulator transition in the AAH model. Second, we probe different phases of disordered systems by studying the time expectation value of local observables evolved with two Hamiltonians that differ by a spatially local perturbation. Remarkably, we find that many-body localized systems could lose memory of the initial state in the long-time limit, in contrast to the noninteracting localized phase where some memory is always preserved.
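The Loschmidt echo for a locally perturbed Anderson chain can be sketched at the single-particle level: evolve the same initial state under the unperturbed and the perturbed Hamiltonian and measure their overlap. All parameter values are illustrative, and this single-particle sketch is far simpler than the many-body echoes and the interacting t-V chain studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

L, W = 40, 3.0                      # chain length, disorder strength
diag = W * (2 * rng.random(L) - 1)  # Anderson on-site disorder
H0 = np.diag(diag) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)

H1 = H0.copy()
H1[L // 2, L // 2] += 1.0           # spatially local perturbation

def evolve(H, psi, t):
    # exact time evolution via eigendecomposition of the (Hermitian) H
    e, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * e * t) * (v.conj().T @ psi))

psi0 = np.zeros(L, dtype=complex)
psi0[L // 2] = 1.0                  # particle launched at the chain center

def loschmidt_echo(t):
    # squared overlap of evolution under H1 with evolution under H0
    return abs(np.vdot(evolve(H1, psi0, t), evolve(H0, psi0, t))) ** 2

print(loschmidt_echo(0.0), loschmidt_echo(5.0))
```

By construction the echo starts at 1 and stays in [0, 1]; how fast it decays with t is the diagnostic the paper uses to separate extended from localized phases.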
Metrology applied to ultrasound characterization of trabecular bones using the AIB parameter
NASA Astrophysics Data System (ADS)
Braz, D. S.; Silva, C. E.; Alvarenga, A. V.; Junior, D. S.; Costa-Félix, R. P. B.
2016-07-01
Apparent Integrated Backscatter (AIB) relates the Apparent Backscatter Transfer Function to the transducer bandwidth. Replicas of trabecular bones (cubes of 20 mm side length) created by a 3D printing technique were characterized using AIB with a 2.25 MHz center-frequency transducer. A mechanical scanning system was used to acquire multiple backscatter signals. An uncertainty model for the measurement was proposed based on the Guide to the Expression of Uncertainty in Measurement. Initial AIB results are not metrologically reliable, presenting high measurement uncertainties (sample: 5_0.2032/AIB: -15.1 dB ± 13.9 dB). It is noteworthy that the proposed uncertainty model provides an unprecedented means for metrological assessment of trabecular bone characterization using AIB.
Hur, Pilwon; Shorter, K Alex; Mehta, Prashant G; Hsiao-Wecksler, Elizabeth T
2012-04-01
In this paper, a novel analysis technique, invariant density analysis (IDA), is introduced. IDA quantifies steady-state behavior of the postural control system using center of pressure (COP) data collected during quiet standing. IDA relies on the analysis of a reduced-order finite Markov model to characterize stochastic behavior observed during postural sway. Five IDA parameters characterize the model and offer physiological insight into the long-term dynamical behavior of the postural control system. Two studies were performed to demonstrate the efficacy of IDA. Study 1 showed that multiple short trials can be concatenated to create a dataset suitable for IDA. Study 2 demonstrated that IDA was effective at distinguishing age-related differences in postural control behavior between young, middle-aged, and older adults. These results suggest that the postural control system of young adults converges more quickly to their steady-state behavior while maintaining COP nearer an overall centroid than either the middle-aged or older adults. Additionally, larger entropy values for older adults indicate that their COP follows a more stochastic path, while smaller entropy values for young adults indicate a more deterministic path. These results illustrate the potential of IDA as a quantitative tool for the assessment of the quiet-standing postural control system.
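The reduced-order Markov chain at the core of IDA can be sketched as follows: bin a COP time series into discrete states, count transitions, and take the leading eigenvector of the transition matrix as the invariant density, whose entropy then summarizes how stochastic the sway path is. The AR(1) surrogate signal and the 20-state discretization are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(7)

# Surrogate COP trace: an AR(1) process standing in for quiet-standing sway
n = 20000
cop = np.zeros(n)
for t in range(1, n):
    cop[t] = 0.98 * cop[t - 1] + 0.1 * rng.standard_normal()

def invariant_density(series, n_states=20):
    """Reduced-order finite Markov model: bin the COP, count transitions,
    normalize rows, and return the stationary (invariant) distribution as
    the leading left eigenvector of the transition matrix (a sketch; rows
    with no observed transitions are simply left as zeros)."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)  # row-stochastic
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return np.abs(pi) / np.abs(pi).sum()

pi = invariant_density(cop)
entropy = -np.sum(pi[pi > 0] * np.log(pi[pi > 0]))
print(entropy)
```

A larger entropy of the invariant density indicates a more stochastic COP path, which is the direction of the age-related difference the paper reports.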
Modeling and measurement of tissue elastic moduli using optical coherence elastography
NASA Astrophysics Data System (ADS)
Liang, Xing; Oldenburg, Amy L.; Crecea, Vasilica; Kalyanam, Sureshkumar; Insana, Michael F.; Boppart, Stephen A.
2008-02-01
Mechanical forces play crucial roles in tissue growth, patterning, and development. To understand the role of mechanical stimuli, biomechanical properties are of great importance, as is our ability to measure the biomechanical properties of developing and engineered tissues. To enable these measurements, a novel non-invasive, micron-scale, high-speed Optical Coherence Elastography (OCE) system has been developed utilizing a titanium:sapphire-based spectral-domain Optical Coherence Tomography (OCT) system and a mechanical wave driver. This system provides an axial resolution of 3 microns, a transverse resolution of 13 microns, and an acquisition rate as high as 25,000 lines per second. External low-frequency vibrations are applied to the samples in the system. Step and sinusoidal steady-state responses are obtained to first characterize the OCE system and then characterize samples. Experimental results of M-mode OCE on silicone phantoms and human breast tissues are obtained, which correspond to biomechanical models developed for this analysis. Quantified results from the OCE system correspond directly with results from an indentation method using a commercial instrument. With micron-scale resolution and a high-speed acquisition rate, our OCE system also has the potential to rapidly measure dynamic 3-D tissue biomechanical properties.
Developing Xenopus Laevis as a Model to Screen Drugs for Fragile X Syndrome
2014-06-01
demonstrated the capacity to rescue the decreased FMRP expression by gene delivery. We characterized an innate visually-guided avoidance behavior in tadpoles ... tadpole is a unique model system that allows easy access to the nervous system at early stages of development, is amenable to in vivo gene...established quantitative in vivo imaging methods to knockdown and assay synthesis of FMRP in Xenopus tadpole brains. We also established 2 behavioral
Building a Foreign Military Sales Construction Delivery Strategy Decision Support System
1991-09-01
DSS, formulates it into a computer model and produces solutions using information and expert heuristics. Using the Expert System Process to Build a DSS... computer model. There are five stages in the development of an expert system. They are: 1) Identify and characterize the important aspects of the problem... and Steven A. Hildreth. U.S. Security Assistance: The Political Process. Massachusetts: Heath and Company, 1985. 19. Guirguis, Amir A., Program
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
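The Hockney-style cost model that the first approach generalizes can be written down directly: the time to move an n-word message is t(n) = t0 + n / r_inf, with start-up latency t0 and asymptotic bandwidth r_inf. The parameter values below are invented examples, not figures from the paper.

```python
# Hockney's communication-cost model: t(n) = t0 + n / r_inf, where t0 is
# the start-up latency and r_inf the asymptotic bandwidth. The message
# size n_half = t0 * r_inf achieves half the asymptotic transfer rate.
# Parameter values are invented examples (10 us latency, 1e8 words/s).

def transfer_time(n, t0=1e-5, r_inf=1e8):
    return t0 + n / r_inf

def effective_rate(n, t0=1e-5, r_inf=1e8):
    # achieved words/s: latency dominates small messages,
    # bandwidth dominates large ones
    return n / transfer_time(n, t0, r_inf)

n_half = 1e-5 * 1e8   # = 1000 words: latency and transmission time balance
print(effective_rate(n_half) / 1e8)   # -> 0.5 of asymptotic bandwidth
```

The crossover at n_half is why algorithms that batch many small messages into fewer large ones can dominate on latency-bound machines, which is the kind of trade-off both cost models in the paper are built to expose.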
On the impact of communication complexity on the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D. B.; Van Rosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
Equicontrollability and its application to model-following and decoupling.
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
Discussion of 'model following,' a term describing a class of problems characterized by two dynamic systems, generically known as the 'plant' and the 'model,' in which a controller must be found to attach to the plant so as to make the resultant compensated system behave, in an input/output sense, in the same way as the model. The approach presented takes a structural point of view. The result is a complex but informative definition which solves the problem as posed. The application of both the algorithm and its basis, equicontrollability, to the decoupling problem is considered.
NASA Astrophysics Data System (ADS)
Vasquez, D. A.; Swift, J. N.; Tan, S.; Darrah, T. H.
2013-12-01
The integration of precise geochemical analyses with quantitative engineering modeling into an interactive GIS system allows for a sophisticated and efficient method of reservoir engineering and characterization. Geographic Information Systems (GIS) is utilized as an advanced technique for oil field reservoir analysis by combining field engineering and geological/geochemical spatial datasets with the available systematic modeling and mapping methods to integrate the information into a spatially correlated first-hand approach to defining surface and subsurface characteristics. Three key methods of analysis include: 1) Geostatistical modeling to create a static and volumetric 3-dimensional representation of the geological body, 2) Numerical modeling to develop a dynamic and interactive 2-dimensional model of fluid flow across the reservoir and 3) Noble gas geochemistry to further define the physical conditions, components and history of the geologic system. Results thus far include the use of engineering algorithms for interpolating electrical well log properties (spontaneous potential, resistivity) across the field, yielding a highly accurate, high-resolution 3D model of rock properties, and the use of numerical finite-difference methods (Crank-Nicolson) to solve the equations describing the distribution of pressure across the field, yielding a 2D simulation model of fluid flow across the reservoir. Ongoing noble gas geochemistry work will also determine the source, thermal maturity and the extent/style of fluid migration (connectivity, continuity and directionality).
Future work will include developing an inverse engineering algorithm to model permeability, porosity and water saturation. This combination of new and efficient technological and analytical capabilities is geared to provide a better understanding of the field geology and hydrocarbon dynamics, with applications to determining the presence of hydrocarbon pay zones (or other reserves) and improving oil field management (e.g. perforating, drilling, EOR and reserves estimation).
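The finite-difference step mentioned above can be illustrated with a minimal 1D Crank-Nicolson solver for the pressure-diffusion equation. The function name, constant diffusivity, and fixed-pressure boundaries are illustrative assumptions, not details of the authors' reservoir model:

```python
import numpy as np

def crank_nicolson_pressure(p0, D, dx, dt, steps):
    """Crank-Nicolson time-stepping of the 1D pressure-diffusion equation
    dp/dt = D * d2p/dx2 with fixed-pressure (Dirichlet) boundaries."""
    n = len(p0)
    r = D * dt / (2.0 * dx**2)
    # (I - r*L) p_new = (I + r*L) p_old, with L the discrete Laplacian
    A = (1 + 2*r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    B = (1 - 2*r) * np.eye(n) + r * np.eye(n, k=1) + r * np.eye(n, k=-1)
    for M in (A, B):                 # hold the boundary nodes fixed
        M[0, :] = 0.0
        M[-1, :] = 0.0
        M[0, 0] = M[-1, -1] = 1.0
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(steps):
        p = np.linalg.solve(A, B @ p)
    return p
```

Each step solves a tridiagonal system; a production code would use a banded solver (and a 2D stencil) rather than dense `np.linalg.solve`.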
The primary goal was to assess Hg cycling within a small coastal plain watershed (McTier Creek) using multiple watershed models with distinct mathematical frameworks that emphasize different system dynamics; a secondary goal was to identify current needs in watershed-scale Hg mode...
Engineered cell and tissue models of pulmonary fibrosis.
Sundarakrishnan, Aswin; Chen, Ying; Black, Lauren D; Aldridge, Bree B; Kaplan, David L
2018-04-01
Pulmonary fibrosis includes several lung disorders characterized by scar formation; Idiopathic Pulmonary Fibrosis (IPF) is a particularly severe form of unknown etiology, with a mean life expectancy of 3 years post-diagnosis. Treatments for IPF are limited to two FDA-approved drugs, pirfenidone and nintedanib. Most lead candidate drugs that are identified in pre-clinical animal studies fail in human clinical trials. Thus, there is a need for advanced humanized in vitro models of the lung to improve candidate treatments prior to moving to human clinical trials. The development of 3D tissue models has created systems capable of emulating human lung structure, function, and cell and matrix interactions. The models reviewed here accomplish these features, and preliminary studies conducted using some of these systems have shown potential for in vitro anti-fibrotic drug testing. Further characterization and improvements will enable these tissue models to extend their utility for in vitro drug testing, to help identify signaling pathways and mechanisms for new drug targets, and potentially to reduce reliance on animal models as standard pre-clinical models of study. In the current review, we contrast different in vitro models based on increasing dimensionality (2D, 2.5D and 3D), with added focus on contemporary 3D pulmonary models of fibrosis. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Lindgren, Eric A.
2018-04-01
This paper expands on the objective and motivation for NDE-based characterization and includes a discussion of the current approach using model-assisted inversion being pursued within the Air Force Research Laboratory (AFRL). This includes a discussion of the multiple model-based methods that can be used, including physics-based models, deep machine learning, and heuristic approaches. The benefits and drawbacks of each method are reviewed and the potential to integrate multiple methods is discussed. Initial successes are included to highlight the ability to obtain quantitative values of damage. Additional steps remaining to realize this capability with statistical metrics of accuracy are discussed, and the use of these results to enable probabilistic life management is addressed. The outcome of this initiative will realize the long-term desired capability of NDE methods to provide quantitative characterization to accelerate certification of new materials and enhance life management of engineered systems.
Characterizing Space Environments with Long-Term Space Plasma Archive Resources
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Miller, J. Scott; Diekmann, Anne M.; Parker, Linda N.
2009-01-01
A significant scientific benefit of establishing and maintaining long-term space plasma data archives is the ready access the archives afford to resources required for characterizing spacecraft design environments. Space systems must be capable of operating in the mean environments driven by climatology as well as the extremes that occur during individual space weather events. Long-term time series are necessary to obtain quantitative information on environment variability and extremes that characterize the mean and worst case environments that may be encountered during a mission. In addition, analysis of large data sets is important to scientific studies of flux limiting processes that provide a basis for establishing upper limits to environment specifications used in radiation or charging analyses. We present applications using data from existing archives and highlight their contributions to space environment models developed at Marshall Space Flight Center including the Chandra Radiation Model, ionospheric plasma variability models, and plasma models of the L2 space environment.
Characterization and isolation of highly purified porcine satellite cells
Ding, Shijie; Wang, Fei; Liu, Yan; Li, Sheng; Zhou, Guanghong; Hu, Ping
2017-01-01
Pig is an important food source and an excellent system to model human diseases. Careful characterization of the swine skeletal muscle stem cells (satellite cells) will shed light on generation of swine skeletal muscle disease models and efficient production of porcine meat for the food industry. Paired box protein 7 (Pax7) is a highly conserved transcription factor shared by satellite cells from various species. However, the sequence of Pax7 has not been characterized in pig. The lack of a method to isolate highly purified satellite cells hinders the thorough characterization of the swine satellite cells. Here we found molecular markers for swine satellite cells and revealed that the porcine satellite cells were heterogeneous across various skeletal muscles. We further developed a method to isolate highly purified satellite cells directly from porcine muscles using fluorescence-activated cell sorting. We next characterized the proliferation and differentiation abilities of isolated satellite cells in vitro, and found that long-term culturing of satellite cells in vitro led to stemness loss. PMID:28417015
Model Predictive Optimal Control of a Time-Delay Distributed-Parameter System
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2006-01-01
This paper presents an optimal control method for a class of distributed-parameter systems governed by first order, quasilinear hyperbolic partial differential equations that arise in many physical systems. Such systems are characterized by time delays since information is transported from one state to another by wave propagation. A general closed-loop hyperbolic transport model is controlled by a boundary control embedded in a periodic boundary condition. The boundary control is subject to a nonlinear differential equation constraint that models actuator dynamics of the system. The hyperbolic equation is thus coupled with the ordinary differential equation via the boundary condition. Optimality of this coupled system is investigated using variational principles to seek an adjoint formulation of the optimal control problem. The results are then applied to implement a model predictive control design for a wind tunnel to eliminate a transport delay effect that causes a poor Mach number regulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.
2001-05-31
This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.
Duality in Power-Law Localization in Disordered One-Dimensional Systems
NASA Astrophysics Data System (ADS)
Deng, X.; Kravtsov, V. E.; Shlyapnikov, G. V.; Santos, L.
2018-03-01
The transport of excitations between pinned particles in many physical systems may be mapped to single-particle models with power-law hopping, 1/r^a. For randomly spaced particles, these models present a peculiar effective disorder that leads to surprising localization properties. We show that in one-dimensional systems almost all eigenstates (except for a few states close to the ground state) are power-law localized for any value of a > 0. Moreover, we show that our model is an example of a new universality class of models with power-law hopping, characterized by a duality between systems with long-range hops (a < 1) and short-range hops (a > 1), in which the wave function amplitude falls off algebraically with the same power γ from the localization center.
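A toy numerical version of such a power-law hopping model is easy to set up: pin particles at random positions, build the 1/r^a hopping matrix, and diagonalize. The inverse participation ratio used below is a standard localization diagnostic; the system size and parameter values are illustrative, not the paper's:

```python
import numpy as np

def power_law_hamiltonian(positions, a):
    """Single-particle hopping matrix J_ij = 1/|x_i - x_j|^a for particles
    pinned at the given positions (zero on-site energies, no self-hopping)."""
    x = np.asarray(positions, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(d, np.inf)   # 1/inf^a = 0 on the diagonal
    return 1.0 / d**a

def ipr(state):
    """Inverse participation ratio: ~1/N for extended states, O(1) for localized."""
    p = np.abs(state)**2
    p = p / p.sum()
    return float(np.sum(p**2))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 100.0, size=200))  # randomly spaced 1D chain
H = power_law_hamiltonian(x, a=2.0)             # short-range side, a > 1
vals, vecs = np.linalg.eigh(H)
iprs = [ipr(vecs[:, k]) for k in range(len(vals))]
```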
Solvable Hydrodynamics of Quantum Integrable Systems
NASA Astrophysics Data System (ADS)
Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2017-12-01
The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra, O., E-mail: osierra@sgc.gov.co; Parrado, G., E-mail: gparrado@sgc.gov.co; Cañón, Y.
This paper presents the progress made by the Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey (SGC in its Spanish acronym), towards the characterization of its gamma spectrometric systems for Instrumental Neutron Activation Analysis (INAA), with the aim of introducing corrections to the measurements for variations in sample geometry. Characterization includes the empirical determination of the interaction point of gamma radiation inside the Germanium crystal, through the application of a linear model and the use of fast Monte Carlo N-Particle (MCNP) software to estimate correction factors for differences in counting efficiency that arise from variations in density between samples and standards.
Systems Biology of the Vervet Monkey
Jasinska, Anna J.; Schmitt, Christopher A.; Service, Susan K.; Cantor, Rita M.; Dewar, Ken; Jentsch, James D.; Kaplan, Jay R.; Turner, Trudy R.; Warren, Wesley C.; Weinstock, George M.; Woods, Roger P.; Freimer, Nelson B.
2013-01-01
Nonhuman primates (NHP) provide crucial biomedical model systems intermediate between rodents and humans. The vervet monkey (also called the African green monkey) is a widely used NHP model that has unique value for genetic and genomic investigations of traits relevant to human diseases. This article describes the phylogeny and population history of the vervet monkey and summarizes the use of both captive and wild vervet monkeys in biomedical research. It also discusses the effort of an international collaboration to develop the vervet monkey as the most comprehensively phenotypically and genomically characterized NHP, a process that will enable the scientific community to employ this model for systems biology investigations. PMID:24174437
An instrumental electrode model for solving EIT forward problems.
Zhang, Weida; Li, David
2014-10-01
An instrumental electrode model (IEM) capable of describing the performance of electrical impedance tomography (EIT) systems in the MHz frequency range has been proposed. Compared with the commonly used Complete Electrode Model (CEM), which assumes ideal front-end interfaces, the proposed model considers the effects of non-ideal components in the front-end circuits. This introduces an extra boundary condition in the forward model and offers more accurate modelling of EIT systems. We have demonstrated its performance using simple geometry structures and compared the results with the CEM and full Maxwell methods. The IEM can provide a significantly more accurate approximation than the CEM in the MHz frequency range, where the full Maxwell methods are favoured over the quasi-static approximation. The improved electrode model will facilitate the future characterization and front-end design of real-world EIT systems.
NASA Technical Reports Server (NTRS)
Haldemann, Albert F. C.; Johnson, Jerome B.; Elphic, Richard C.; Boynton, William V.; Wetzel, John
2006-01-01
CRUX is a modular suite of geophysical and borehole instruments combined with display and decision support system (MapperDSS) tools to characterize regolith resources, surface conditions, and geotechnical properties. CRUX is a NASA-funded Technology Maturation Program effort to provide enabling technology for Lunar and Planetary Surface Operations (LPSO). The MapperDSS uses data fusion methods with CRUX instruments, and other available data and models, to provide regolith properties information needed for LPSO that cannot be determined otherwise. We demonstrate the data fusion method by showing how it might be applied to characterize the distribution and form of hydrogen using a selection of CRUX instruments: Borehole Neutron Probe and Thermal Evolved Gas Analyzer data as a function of depth help interpret Surface Neutron Probe data to generate 3D information. Secondary information from other instruments along with physical models improves the hydrogen distribution characterization, enabling information products for operational decision-making.
The Role of Central Nervous System Plasticity in Tinnitus
ERIC Educational Resources Information Center
Saunders, James C.
2007-01-01
Tinnitus is a vexing disorder of hearing characterized by sound sensations originating in the head without any external stimulation. The specific etiology of these sensations is uncertain but frequently associated with hearing loss. The "neurophysiogical" model of tinnitus has enhanced appreciation of central nervous system (CNS) contributions.…
USDA-ARS?s Scientific Manuscript database
Critical to the use of modeling tools for the hydraulic analysis of surface irrigation systems is characterizing the infiltration and hydraulic resistance process. Since those processes are still not well understood, various formulations are currently used to represent them. A software component h...
Mercury transport through stream ecosystems is driven by a complicated set of transport and transformation reactions operating on a variety of scales in the atmosphere, landscape, surface water, and biota. Riverine systems typically have short residence times and can experience l...
Characterizing the evolution of climate networks
NASA Astrophysics Data System (ADS)
Tupikina, L.; Rehfeld, K.; Molkenthin, N.; Stolbova, V.; Marwan, N.; Kurths, J.
2014-06-01
Complex network theory has been successfully applied to understand the structural and functional topology of many dynamical systems from nature, society and technology. Many properties of these systems change over time, and, consequently, networks reconstructed from them will, too. However, although static and temporally changing networks have been studied extensively, methods to quantify their robustness as they evolve in time are lacking. In this paper we develop a theory to investigate how networks change over time, based on the quantitative analysis of dissimilarities in the network structure. Our main result is the common component evolution function (CCEF), which characterizes network development over time. To test our approach we apply it to several model systems: Erdős-Rényi networks, analytically derived flow-based networks, and transient simulations from the START model for which we control the change of single parameters over time. Then we construct annual climate networks from NCEP/NCAR reanalysis data for the Asian monsoon domain for the time period of 1970-2011 CE and use the CCEF to characterize the temporal evolution in this region. While this real-world CCEF displays a high degree of network persistence over large time lags, there are distinct time periods when common links break down. The phasing of these events coincides with years of strong El Niño/Southern Oscillation phenomena, confirming previous studies. The proposed method can be applied for any type of evolving network where the link but not the node set is changing, and may be particularly useful to characterize nonstationary evolving systems using complex networks.
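The CCEF itself is defined precisely in the paper; as a rough sketch of the underlying idea — measuring how many links two snapshots separated by a given lag have in common — a Jaccard-style proxy over edge sets can be written as follows (the function names are ours, not the authors'):

```python
def common_component(edges_a, edges_b):
    """Fraction of links shared by two network snapshots over the same
    node set (a Jaccard-style proxy for the paper's CCEF)."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def ccef(snapshots, lag):
    """Average common component between all snapshot pairs separated by `lag`."""
    pairs = list(zip(snapshots, snapshots[lag:]))
    return sum(common_component(e1, e2) for e1, e2 in pairs) / len(pairs)
```

For annual climate networks, each snapshot would be the edge set of one year's reconstructed network; a persistent system yields values near 1 even at large lags.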
Models, Measurements, and Local Decisions: Assessing and ...
This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey with consideration of how this information could be used to inform decision making to reduce risk of health impacts. Decisions could include either exposure or emissions reduction, and a host of stakeholders, including residents, academics, NGOs, local and federal agencies. This presentation includes results from the C-PORT modeling system, and from a citizen science project from the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio
1997-01-01
In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data was collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.
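C4.5 grows its classification tree by repeatedly picking the split with the best information measure over the predictor metrics. A minimal sketch of that core step — plain information gain over one numeric product metric, rather than C4.5's gain ratio — might look like this (the data shape is hypothetical, not the GSS library measurements):

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def best_split(values, labels):
    """Threshold on one numeric metric that maximizes information gain --
    the step a C4.5-style tree builder repeats recursively."""
    base = entropy(labels)
    best_t, best_gain = None, -1.0
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        gain = (base
                - (len(left) / len(labels)) * entropy(left)
                - (len(right) / len(labels)) * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```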
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques such as descent optimization methods can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and with a shorter memory lag than the QT interval.
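The trick of folding the physiological restrictions into the cost function, so that plain unconstrained descent methods apply, can be sketched generically; the quadratic-penalty form and weight below are illustrative assumptions, not the paper's exact formulation:

```python
def penalized_cost(residuals, params, bounds, weight=1e3):
    """Sum-of-squares fit cost plus a quadratic penalty for any parameter
    that leaves its physiological bounds, so an unconstrained descent
    method can be used for the estimation."""
    cost = sum(r * r for r in residuals)
    for p, (lo, hi) in zip(params, bounds):
        if p < lo:
            cost += weight * (lo - p) ** 2
        elif p > hi:
            cost += weight * (p - hi) ** 2
    return cost
```

Inside the bounds the penalty vanishes, so the minimizer of the penalized cost coincides with the constrained fit whenever the optimum is interior.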
Phytoplankton succession in recurrently fluctuating environments.
Roelke, Daniel L; Spatharis, Sofie
2015-01-01
Coastal marine systems are affected by seasonal variations in biogeochemical and physical processes, sometimes leading to alternating periods of reproductive growth limitation within an annual cycle. Transitions between these periods can be sudden or gradual. Human activities, such as reservoir construction and interbasin water transfers, influence these processes and can affect the type of transition between resource loading conditions. How such human activities might influence phytoplankton succession is largely unknown. Here, we employ a multispecies, multi-nutrient model to explore how the nutrient loading switching mode might affect phytoplankton succession. The model is based on the Monod relationship, predicting an instantaneous reproductive growth rate from ambient inorganic nutrient concentrations, while the limiting nutrient at any given time is determined by Liebig's Law of the Minimum. When these relationships are combined with population loss factors, such as hydraulic displacement of cells associated with inflows, a characterization of a species' niche can be achieved through application of the R* conceptual model, thus enabling an ecological interpretation of modeling results. We found that the mode of reversal in resource supply concentrations had a profound effect. When resource supply reversals were sudden, as expected in systems influenced by pulsed inflows or wind-driven mixing events, phytoplankton were characterized by alternating succession dynamics, a phenomenon documented in inland water bodies of temperate latitudes. When resource supply reversals were gradual, as expected in systems influenced by seasonally developing wet and dry seasons, or annually occurring periods of upwelling, phytoplankton dynamics were characterized by mirror-image succession patterns. This phenomenon has not been reported previously in plankton systems but has been observed in some terrestrial plant systems.
These findings suggest that a transition from alternating to "mirror-image" succession patterns might arise with continued coastal zone development, with crucial implications for ecosystems dependent on time-sensitive processes, e.g., spawning events and migration patterns.
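The niche machinery described above combines two textbook relations. A minimal sketch (with hypothetical parameter values) of the Monod growth rate under Liebig's Law of the Minimum, and of the net rate whose zero crossing defines a species' R*, is:

```python
def monod_growth(mu_max, nutrients, half_sats):
    """Instantaneous reproductive growth rate: Monod kinetics with the
    limiting nutrient picked by Liebig's Law of the Minimum."""
    limitation = min(S / (K + S) for S, K in zip(nutrients, half_sats))
    return mu_max * limitation

def net_rate(mu_max, nutrients, half_sats, dilution):
    """Net per-capita rate: Monod growth minus hydraulic displacement.
    The nutrient level at which this crosses zero is the species' R*."""
    return monod_growth(mu_max, nutrients, half_sats) - dilution
```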
Linking genes to ecosystem trace gas fluxes in a large-scale model system
NASA Astrophysics Data System (ADS)
Meredith, L. K.; Cueva, A.; Volkmann, T. H. M.; Sengupta, A.; Troch, P. A.
2017-12-01
Soil microorganisms mediate biogeochemical cycles through biosphere-atmosphere gas exchange with significant impact on atmospheric trace gas composition. Improving process-based understanding of these microbial populations and linking their genomic potential to the ecosystem-scale is a challenge, particularly in soil systems, which are heterogeneous in biodiversity, chemistry, and structure. In oligotrophic systems, such as the Landscape Evolution Observatory (LEO) at Biosphere 2, atmospheric trace gas scavenging may supply critical metabolic needs to microbial communities, thereby promoting tight linkages between microbial genomics and trace gas utilization. This large-scale model system of three initially homogenous and highly instrumented hillslopes facilitates high temporal resolution characterization of subsurface trace gas fluxes at hundreds of sampling points, making LEO an ideal location to study microbe-mediated trace gas fluxes from the gene to ecosystem scales. Specifically, we focus on the metabolism of ubiquitous atmospheric reduced trace gases hydrogen (H2), carbon monoxide (CO), and methane (CH4), which may have wide-reaching impacts on microbial community establishment, survival, and function. Additionally, microbial activity on LEO may facilitate weathering of the basalt matrix, which can be studied with trace gas measurements of carbonyl sulfide (COS/OCS) and carbon dioxide (O-isotopes in CO2), and presents an additional opportunity for gene to ecosystem study. This work will present initial measurements of this suite of trace gases to characterize soil microbial metabolic activity, as well as links between spatial and temporal variability of microbe-mediated trace gas fluxes in LEO and their relation to genomic-based characterization of microbial community structure (phylogenetic amplicons) and genetic potential (metagenomics). 
Results from the LEO model system will help build understanding of the importance of atmospheric inputs to microorganisms pioneering fresh mineral matrix. Additionally, the measurement and modeling techniques that will be developed at LEO will be relevant for other investigators linking microbial genomics to ecosystem function in more well-developed soils with greater complexity.
Characterization of Used Nuclear Fuel with Multivariate Analysis for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayman, Kenneth J.; Coble, Jamie B.; Orton, Christopher R.
2014-01-01
The Multi-Isotope Process (MIP) Monitor combines gamma spectroscopy and multivariate analysis to detect anomalies in various process streams in a nuclear fuel reprocessing system. Measured spectra are compared to models of nominal behavior at each measurement location to detect unexpected changes in system behavior. In order to improve the accuracy and specificity of process monitoring, fuel characterization may be used to more accurately train subsequent models in a full analysis scheme. This paper presents initial development of a reactor-type classifier that is used to select a reactor-specific partial least squares model to predict fuel burnup. Nuclide activities for prototypic used fuel samples were generated in ORIGEN-ARP and used to investigate techniques to characterize used nuclear fuel in terms of reactor type (pressurized or boiling water reactor) and burnup. A variety of reactor type classification algorithms, including k-nearest neighbors, linear and quadratic discriminant analyses, and support vector machines, were evaluated to differentiate used fuel from pressurized and boiling water reactors. Then, reactor type-specific partial least squares models were developed to predict the burnup of the fuel. Using these reactor type-specific models instead of a model trained for all light water reactors improved the accuracy of burnup predictions. The developed classification and prediction models were combined and applied to a large dataset that included eight fuel assembly designs, two of which were not used in training the models, and spanned the range of the initial 235U enrichment, cooling time, and burnup values expected of future commercial used fuel for reprocessing. Error rates were consistent across the range of considered enrichment, cooling time, and burnup values. Average absolute relative errors in burnup predictions for validation data both within and outside the training space were 0.0574% and 0.0597%, respectively.
The errors seen in this work are artificially low, because the models were trained, optimized, and tested on simulated, noise-free data. However, these results indicate that the developed models may generalize well to new data and that the proposed approach constitutes a viable first step in developing a fuel characterization algorithm based on gamma spectra.
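As a toy stand-in for the classification stage (the paper compares k-nearest neighbors, discriminant analyses, and support vector machines on ORIGEN-ARP nuclide activities), a majority-vote k-NN over activity feature vectors can be sketched; the feature values below are synthetic, not simulated fuel data:

```python
import numpy as np

def knn_reactor_type(train_X, train_y, x, k=3):
    """Majority-vote k-nearest-neighbor classification of reactor type
    (PWR vs. BWR) from a nuclide-activity feature vector."""
    d = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic, well-separated activity vectors for two reactor types
train_X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
train_y = ['PWR', 'PWR', 'PWR', 'BWR', 'BWR', 'BWR']
```

In the paper's scheme, the predicted class would then route the spectrum to the matching reactor-type-specific partial least squares burnup model.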
High Gain Antenna System Deployment Mechanism Integration, Characterization, and Lessons Learned
NASA Technical Reports Server (NTRS)
Parong, Fil; Russell, Blair; Garcen, Walter; Rose, Chris; Johnson, Chris; Huber, Craig
2014-01-01
The integration and deployment testing of the High Gain Antenna System (HGAS) for the Global Precipitation Measurement mission is summarized. The HGAS deployment mechanism is described. The gravity-negation system configuration and its influence on vertical, ground-based deployment tests are presented with test data and model predictions. A focus is made on the late discovery and resolution of a potentially mission-degrading deployment interference condition. The interaction of the flight deployment mechanism, gravity-negation mechanism, and use of dynamic modeling is described, and lessons learned are presented.
Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)
NASA Astrophysics Data System (ADS)
Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.
2017-12-01
The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing their uncertainty, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux tower measurements of CO2 fluxes under production of major agronomic crops. To date, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in RMSE of NEE predictions of 3.78% to 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.
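The calibration skill reported above is a percent reduction in RMSE. A minimal sketch of that metric, using made-up NEE-like numbers rather than CCMS data:

```python
import numpy as np

def rmse(obs, pred):
    """Root-mean-square error between observations and predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def rmse_improvement_pct(obs, pred_before, pred_after):
    """Percent reduction in RMSE after calibration (negative = got worse)."""
    before, after = rmse(obs, pred_before), rmse(obs, pred_after)
    return 100.0 * (before - after) / before

# Illustrative NEE-like series (arbitrary units, not CCMS data)
obs        = [1.0, 2.0, 3.0, 4.0]
uncal      = [1.5, 2.6, 2.2, 4.9]
calibrated = [1.2, 2.3, 2.8, 4.3]
print(f"RMSE improvement: {rmse_improvement_pct(obs, uncal, calibrated):.2f}%")
```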
NASA Astrophysics Data System (ADS)
De Domenico, Manlio
2018-03-01
Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted that simple networks - obtained by aggregating or neglecting the temporal or categorical description of biological data - cannot account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue, or parts of an organism.
Stability margin of linear systems with parameters described by fuzzy numbers.
Husek, Petr
2011-10-01
This paper deals with linear systems with uncertain parameters described by fuzzy numbers. The problem of determining the stability margin of such systems with linear affine dependence of the coefficients of the characteristic polynomial on the system parameters is studied. The fuzzy numbers describing the system parameters are allowed to have arbitrary nonsymmetric membership functions. An elegant solution, graphical in nature, based on a generalization of the Tsypkin-Polyak plot is presented. The advantage of the presented approach over the classical robust concept is demonstrated on control of the Fiat Dedra engine model and control of a quarter-car suspension model.
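The paper's graphical Tsypkin-Polyak construction handles affine coefficient dependence and nonsymmetric membership functions. As a much simpler illustration of the underlying idea, the sketch below assumes independent coefficients with symmetric triangular fuzzy numbers, so each alpha-cut yields an interval polynomial that can be tested with the four Kharitonov vertex polynomials:

```python
import numpy as np

def alpha_cut(center, spread, alpha):
    """Interval of a symmetric triangular fuzzy number at membership level alpha."""
    w = (1.0 - alpha) * spread
    return center - w, center + w

def kharitonov_polys(lo, hi):
    """Four Kharitonov vertex polynomials (ascending-power coefficients)."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    return [[lo[i] if pat[i % 4] == 0 else hi[i] for i in range(len(lo))]
            for pat in patterns]

def is_hurwitz(asc_coeffs):
    """True if all roots lie in the open left half-plane."""
    return bool(np.all(np.roots(asc_coeffs[::-1]).real < 0))

def critical_alpha(centers, spreads, steps=101):
    """Smallest membership level at which the alpha-cut interval family is
    robustly stable (0.0 means stable over the entire fuzzy support)."""
    for a in np.linspace(0.0, 1.0, steps):
        cuts = [alpha_cut(c, s, a) for c, s in zip(centers, spreads)]
        lo, hi = [c[0] for c in cuts], [c[1] for c in cuts]
        if all(is_hurwitz(p) for p in kharitonov_polys(lo, hi)):
            return float(a)
    return None  # unstable even at the nominal (alpha = 1) polynomial

# s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3), with a fuzzy spread on each coefficient
print(critical_alpha(centers=[6, 11, 6, 1], spreads=[1, 1, 1, 0]))  # → 0.0
```

Because alpha-cuts are nested, stability at a given level implies stability at all higher levels, so the smallest stable alpha is a meaningful margin-like quantity under these simplified assumptions.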
Evaluation of FNS control systems: software development and sensor characterization.
Riess, J; Abbas, J J
1997-01-01
Functional Neuromuscular Stimulation (FNS) systems activate paralyzed limbs by electrically stimulating motor neurons. These systems have been used to restore functions such as standing and stepping in people with thoracic-level spinal cord injury. Research in our laboratory is directed at the design and evaluation of control algorithms for generating posture and movement. This paper describes software developed for implementing FNS control systems and the characterization of a sensor system used to implement and evaluate controllers in the laboratory. In order to assess FNS control algorithms, we have developed a versatile software package using LabVIEW (National Instruments Corp.). This package provides the ability to interface with sensor systems via serial port or A/D board, implement data processing and real-time control algorithms, and interface with neuromuscular stimulation devices. In our laboratory, we use the Flock of Birds (Ascension Technology Corp.) motion tracking sensor system to monitor limb segment position and orientation (6 degrees of freedom). Errors in the sensor system have been characterized, and nonlinear polynomial models have been developed to account for these errors. With this compensation, the error in the distance measurement is reduced by 90%, so that the maximum error is less than 1 cm.
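The polynomial compensation idea can be sketched as follows. The error curve, noise level, and cubic degree below are assumptions for illustration; the actual Flock of Birds error model from the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth distances (cm) and a sensor with a smooth
# nonlinear bias plus noise -- a stand-in for the real characterization data.
true_d = np.linspace(20, 120, 80)
measured = true_d + 0.002 * (true_d - 70) ** 2 - 1.5 + rng.normal(0, 0.1, true_d.size)

# Fit a cubic correction mapping measured -> true distance
# (the polynomial degree is an arbitrary choice here).
coeffs = np.polyfit(measured, true_d, deg=3)
corrected = np.polyval(coeffs, measured)

raw_err = np.max(np.abs(measured - true_d))
cal_err = np.max(np.abs(corrected - true_d))
print(f"max error: raw {raw_err:.2f} cm -> corrected {cal_err:.2f} cm")
```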
Method and system for SCR optimization
Lefebvre, Wesley Curt [Boston, MA; Kohn, Daniel W [Cambridge, MA
2009-03-10
Methods and systems are provided for controlling SCR performance in a boiler. The boiler includes one or more generally cross sectional areas. Each cross sectional area can be characterized by one or more profiles of one or more conditions affecting SCR performance and be associated with one or more adjustable desired profiles of the one or more conditions during the operation of the boiler. The performance of the boiler can be characterized by boiler performance parameters. A system in accordance with one or more embodiments of the invention can include a controller input for receiving a performance goal for the boiler corresponding to at least one of the boiler performance parameters and for receiving data values corresponding to boiler control variables and to the boiler performance parameters. The boiler control variables include one or more current profiles of the one or more conditions. The system also includes a system model that relates one or more profiles of the one or more conditions in the boiler to the boiler performance parameters. The system also includes an indirect controller that determines one or more desired profiles of the one or more conditions to satisfy the performance goal for the boiler. The indirect controller uses the system model, the received data values and the received performance goal to determine the one or more desired profiles of the one or more conditions. The system model also includes a controller output that outputs the one or more desired profiles of the one or more conditions.
NASA Technical Reports Server (NTRS)
Ahmad, Anees
1990-01-01
The development of an in-house integrated optical performance modeling capability at MSFC is described. This performance model will take into account the effects of structural and thermal distortions, as well as metrology errors in optical surfaces, to predict the performance of large and complex optical systems such as the Advanced X-Ray Astrophysics Facility. The necessary hardware and software were identified to implement an integrated optical performance model. A number of design, development, and testing tasks were supported, including identification of a debonded mirror pad and rebuilding of the Technology Mirror Assembly. Over 300 samples of Zerodur were prepared in different sizes and shapes for acid etching, coating, and polishing experiments to characterize the subsurface damage and stresses produced by the grinding and polishing operations.
Characterization of the Mysteriously Cool Brown Dwarf HD 4113
NASA Astrophysics Data System (ADS)
Ednie, Michaela; Follette, Katherine; Ward-Duong, Kimberly
2018-01-01
Characterizing the physical properties of brown dwarfs is necessary to expand and improve our understanding of low-mass companions, including exoplanets. Systems with both close radial velocity companions and distant directly imaged companions are particularly powerful for understanding planet formation mechanisms. Early in 2017, members of the SPHERE team discovered a companion brown dwarf in the HD 4113 system, which also contains a known RV planet. Atmospheric model fits to the Y- and J-band spectra and H2/H3 photometry of the brown dwarf suggested it is unusually cool. We obtained new Magellan data in the Z and K’ bands in mid-2017. These data will help us to complete a more detailed atmospheric and astrometric characterization of this unusually cool companion. Broader wavelength coverage will aid accurate spectral typing and estimates of luminosity, temperature, surface gravity, radius, and composition. Additionally, a second astrometric epoch will help constrain the architecture of the system.
2011-01-01
Background The genus Silene is widely used as a model system for addressing ecological and evolutionary questions in plants, but advances in using the genus as a model system are impeded by the lack of available resources for studying its genome. Massively parallel sequencing of cDNA has recently developed into an efficient method for characterizing the transcriptomes of non-model organisms, generating massive amounts of data that enable the study of multiple species in a comparative framework. The sequences generated provide an excellent resource for identifying expressed genes, characterizing functional variation, and developing molecular markers, thereby laying the foundations for future studies on gene sequence and gene expression divergence. Here, we report the results of a comparative transcriptome sequencing study of eight individuals representing four Silene species and one Dianthus species as an outgroup. All sequences and annotations have been deposited in a newly developed and publicly available database called SiESTa, the Silene EST annotation database. Results A total of 1,041,122 EST reads were generated in two runs on a Roche GS-FLX 454 pyrosequencing platform. EST reads were analyzed separately for all eight individuals sequenced and were assembled into contigs using TGICL. These were annotated with results from BLASTX searches and Gene Ontology (GO) terms, and thousands of single-nucleotide polymorphisms (SNPs) were characterized. Unassembled reads were kept as singletons and, together with the contigs, contributed to the unigenes characterized in each individual. The high quality of the unigenes is evidenced by the proportion (49%) that have significant hits in similarity searches against the A. thaliana proteome. The SiESTa database is accessible at http://www.siesta.ethz.ch.
Conclusion The sequence collections established in the present study provide an important genomic resource for four Silene and one Dianthus species and will help to further develop Silene as a plant model system. The genes characterized will be useful for future research not only in the species included in the present study, but also in related species for which no genomic resources are yet available. Our results demonstrate the efficiency of massively parallel transcriptome sequencing in a comparative framework as an approach for developing genomic resources in diverse groups of non-model organisms. PMID:21791039
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres
2016-05-16
Since 2012, the U.S. Department of Energy’s X-Stack program has been developing solutions including runtime systems, programming models, languages, compilers, and tools for Exascale system software to address crucial performance and power requirements. Fine-grain programming models and runtime systems show great potential to efficiently utilize the underlying hardware, and thus they are essential to many X-Stack efforts. An abundance of small tasks can better utilize the vast parallelism available on current and future machines. Moreover, finer tasks can recover faster and adapt better, due to a decrease in state and control. Nevertheless, current applications have been written to exploit old paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing). To fully utilize the advantages of these new systems, applications need to be adapted to the new paradigms. As part of the applications’ porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand the application performance bottlenecks and how to resolve them. This paper presents a characterization study for a novel high-performance runtime system, the Open Community Runtime, using key HPC kernels as its vehicle. This study has the following contributions: one of the first high-performance, fine-grain, distributed-memory runtime systems implementing the OCR standard (version 0.99a); and a characterization study of key HPC kernels in terms of runtime primitives running in both intra- and inter-node environments. Running on a general-purpose cluster, we have found up to a 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each, and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.
Characterization of individual mouse cerebrospinal fluid proteomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jeffrey S.; Angel, Thomas E.; Chavkin, Charles
2014-03-20
Analysis of cerebrospinal fluid (CSF) offers key insight into the status of the central nervous system. Characterization of murine CSF proteomes can provide a valuable resource for studying central nervous system injury and disease in animal models. However, the small volume of CSF in mice has thus far limited individual mouse proteome characterization. Through non-terminal CSF extractions in C57Bl/6 mice and high-resolution liquid chromatography-mass spectrometry analysis of individual murine samples, we report the most comprehensive proteome characterization of individual murine CSF to date. Utilizing stringent protein inclusion criteria that required the identification of at least two unique peptides (1% false discovery rate at the peptide level), we identified a total of 566 unique proteins, including 128 proteins from three individual CSF samples that have been previously identified in brain tissue. Our methods and analysis provide a mechanism for individual murine CSF proteome analysis.
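The two-unique-peptides inclusion rule described above amounts to a simple filter over peptide-level identifications. A sketch, with made-up accessions and peptide sequences (not data from the study):

```python
from collections import defaultdict

def proteins_passing(psm_table, min_unique_peptides=2):
    """Apply the 'at least two unique peptides per protein' inclusion rule.
    psm_table: iterable of (protein_id, peptide_sequence) identifications
    assumed to have already passed the 1% peptide-level FDR filter."""
    peptides = defaultdict(set)
    for protein, peptide in psm_table:
        peptides[protein].add(peptide)
    return {p for p, seqs in peptides.items() if len(seqs) >= min_unique_peptides}

# Hypothetical identifications (accessions and sequences are invented)
psms = [("P001", "AVLK"), ("P001", "GGSTR"), ("P001", "AVLK"),
        ("P002", "LLNK"),                      # one unique peptide -> excluded
        ("P003", "TTAR"), ("P003", "MQELK")]
print(sorted(proteins_passing(psms)))  # → ['P001', 'P003']
```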
NASA Astrophysics Data System (ADS)
Boussouga, Y. A.; Lhassani, A.
2017-03-01
Nanofiltration and reverse osmosis are the most common techniques for the desalination of water contaminated by an excess of salts. In the present study, we were interested in the characterization of commercial composite and asymmetric nanofiltration membranes (NF90, NF270) and a low-pressure reverse osmosis membrane (BW30LE). We opted for two types of characterization: (i) characterization of electrical properties, in terms of the surface charge of the various membranes, studied by measurement of the streaming potential; (ii) hydrodynamic characterization, in terms of hydraulic permeability with pure water, mass transfer, and phenomenological parameters for each membrane/salt system, using hydrodynamic approaches. Irreversible thermodynamics allowed us to model the observed retention Robs of salts (NaCl and Na2SO4) for the different membranes studied, and to understand and predict filtration performance. A study was conducted on the type of mass transfer for each membrane/salt system: convection and diffusion. The results showed that all tested membranes are negatively charged in solutions at neutral pH, which is explained by their material composition. The results also showed competitiveness between the different types of membranes; NF remains effective in terms of selective retention, with less energy consumption than LPRO.
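The abstract does not name its irreversible-thermodynamics formulation; the standard choice for modeling salt retention versus volume flux is the Spiegler-Kedem model, sketched below with illustrative parameters (not the NF90/NF270/BW30LE measurements):

```python
import numpy as np

def spiegler_kedem_retention(jv, sigma, ps):
    """Spiegler-Kedem salt retention vs. volume flux jv (m/s).
    sigma: reflection coefficient; ps: solute permeability (m/s)."""
    f = np.exp(-jv * (1.0 - sigma) / ps)
    return sigma * (1.0 - f) / (1.0 - sigma * f)

# Synthetic retention curve for an NF-like membrane (sigma=0.9, Ps=2e-6 m/s);
# the numbers are invented for illustration.
jv = np.linspace(2e-6, 3e-5, 10)
robs = spiegler_kedem_retention(jv, 0.9, 2e-6)

# Crude grid-search fit of (sigma, Ps) to the "measured" curve.
sigmas = np.linspace(0.5, 0.99, 50)
pss = np.geomspace(1e-7, 1e-5, 50)
best = min(((np.sum((spiegler_kedem_retention(jv, s, p) - robs) ** 2), s, p)
            for s in sigmas for p in pss))
print(f"fitted sigma={best[1]:.3f}, Ps={best[2]:.2e} m/s")
```

Retention rises with flux toward the plateau value sigma, which is the qualitative behavior the fitted phenomenological parameters capture.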
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
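Fitting a Pareto distribution to inter-arrival times has a closed-form maximum-likelihood solution. A sketch on synthetic data (the Spider traces themselves are not public here, so the parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_pareto(samples):
    """Maximum-likelihood Pareto (Type I) fit: returns (x_min, shape alpha)."""
    x = np.asarray(samples, float)
    xm = x.min()
    alpha = x.size / np.sum(np.log(x / xm))
    return xm, alpha

# Synthetic request inter-arrival times drawn from a Pareto distribution
# (numpy's pareto() is Lomax; 1 + pareto gives classical Pareto with xm = 1)
true_alpha, xm = 1.8, 0.001  # shape, scale in seconds (illustrative)
inter_arrivals = xm * (1.0 + rng.pareto(true_alpha, 50_000))
fitted_xm, fitted_alpha = fit_pareto(inter_arrivals)
print(f"fitted shape alpha = {fitted_alpha:.2f}")  # close to 1.8
```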
Assessment of Hybrid Coordinate Model Velocity Fields During Agulhas Return Current 2012 Cruise
2013-06-01
GDEM: Generalized Digital Environmental Model; GPS: Global Positioning System; HYCOM: HYbrid Coordinate Ocean Model; MICOM: Miami Isopycnal… …speed profiles was climatology from the Generalized Digital Environmental Model (GDEM; Teague et al. 1990). Made operational in 1999, the Modular… …GDEM was the only tool a naval oceanographer had at his or her disposal to characterize ocean conditions where in-situ observations could not be…
Characterization of a Hyperspectral Chromotomographic Imaging Ground System
2012-03-22
…developed by the Air Force Institute of Technology (AFIT). The optical model is constructed using Zemax and MATLAB. The model provides the mechanism required… …can also be used to incorporate interferometric measurements of optical components and model them in Zemax. The model uses a Zernike Phase Surface to…
Toward Multi-Model Frameworks Addressing Multi-Sector Dynamics, Risks, and Resiliency
NASA Astrophysics Data System (ADS)
Moss, R. H.; Fisher-Vanden, K.; Barrett, C.; Kraucunas, I.; Rice, J.; Sue Wing, I.; Bhaduri, B. L.; Reed, P. M.
2016-12-01
This presentation will report on the findings of recent modeling studies and a series of workshops and other efforts convened under the auspices of the US Global Change Research Program (USGCRP) to improve the integration of critical infrastructure, natural resources, integrated assessment, and human systems modeling. The focus is on issues related to drought and increased variability of water supply at the energy-water-land nexus. One motivation for the effort is the potential for impact cascades across coupled built, natural, and socioeconomic systems stressed by social and environmental change. The design is for an adaptable modeling framework that will include a repository of independently developed modeling tools of varying complexity, from coarser-grid, longer-time-horizon models to higher-resolution, shorter-term models of socioeconomic systems, infrastructure, and natural resources. The models draw from three interlocking research communities: Earth system; impacts, adaptation, and vulnerability; and integrated assessment. A key lesson will be explored, namely the importance of defining a clear use perspective to limit dimensionality, focus modeling, and facilitate uncertainty characterization and communication.
Discovery and characterization of 3000+ main-sequence binaries from APOGEE spectra
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Ting, Yuan-Sen; Rix, Hans-Walter; Quataert, Eliot; Weisz, Daniel R.; Cargile, Phillip; Conroy, Charlie; Hogg, David W.; Bergemann, Maria; Liu, Chao
2018-05-01
We develop a data-driven spectral model for identifying and characterizing spatially unresolved multiple-star systems and apply it to APOGEE DR13 spectra of main-sequence stars. Binaries and triples are identified as targets whose spectra can be significantly better fit by a superposition of two or three model spectra, drawn from the same isochrone, than any single-star model. From an initial sample of ˜20 000 main-sequence targets, we identify ˜2500 binaries in which both the primary and secondary stars contribute detectably to the spectrum, simultaneously fitting for the velocities and stellar parameters of both components. We additionally identify and fit ˜200 triple systems, as well as ˜700 velocity-variable systems in which the secondary does not contribute detectably to the spectrum. Our model simplifies the process of simultaneously fitting single- or multi-epoch spectra with composite models and does not depend on a velocity offset between the two components of a binary, making it sensitive to traditionally undetectable systems with periods of hundreds or thousands of years. In agreement with conventional expectations, almost all the spectrally identified binaries with measured parallaxes fall above the main sequence in the colour-magnitude diagram. We find excellent agreement between spectrally and dynamically inferred mass ratios for the ˜600 binaries in which a dynamical mass ratio can be measured from multi-epoch radial velocities. We obtain full orbital solutions for 64 systems, including 14 close binaries within hierarchical triples. We make available catalogues of stellar parameters, abundances, mass ratios, and orbital parameters.
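The core detection idea, comparing a best single-star fit against a best two-spectrum superposition, can be sketched on toy Gaussian-line "spectra". This is a drastic simplification of the data-driven APOGEE model (fixed 50/50 flux weights, no velocity shifts, an invented template grid):

```python
import numpy as np

rng = np.random.default_rng(2)
wave = np.linspace(-5, 5, 400)

def template(depth, width):
    """Toy single-star spectrum: flat continuum with one Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * (wave / width) ** 2)

grid = [(d, w) for d in np.linspace(0.2, 0.8, 7) for w in np.linspace(0.5, 2.0, 7)]
singles = [template(d, w) for d, w in grid]
# Superpositions: 50/50 flux-weighted sums of two grid spectra (for brevity)
pairs = [0.5 * (s1 + s2) for i, s1 in enumerate(singles) for s2 in singles[i:]]

def best_chi2(obs, models):
    return min(np.sum((obs - m) ** 2) for m in models)

# "Observed" spectrum: an unresolved 50/50 binary of two different line profiles
obs = 0.5 * (template(0.8, 0.5) + template(0.2, 2.0)) + rng.normal(0, 0.005, wave.size)

chi2_single, chi2_pair = best_chi2(obs, singles), best_chi2(obs, pairs)
is_binary = chi2_pair < 0.8 * chi2_single  # significance threshold is ad hoc
print(is_binary)  # → True
```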
Kumar, Sameer
2011-01-01
It is increasingly recognized that hospital operation is an intricate system with limited resources and many interacting sources of both positive and negative feedback. The purpose of this study is to design a surgical delivery process in a county hospital in the U.S. where patient flow through a surgical ward is optimized. System simulation modeling is used to address questions of capacity planning, throughput management, and interacting resources, which constitute the constantly changing complexity that characterizes designing a contemporary surgical delivery process in a hospital. The steps in building a system simulation model are demonstrated using the example of a county hospital in a small U.S. city, which illustrates modular system simulation modeling of patient surgery process flows. The model development will show planners and designers how to build overall efficiencies into a healthcare facility through optimal bed capacity for peak patient flow of emergency and routine patients.
The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields
Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl
2008-01-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences. 
PMID:18769680
NASA Astrophysics Data System (ADS)
Jamali, M. S.; Ismail, K. A.; Taha, Z.; Aiman, M. F.
2017-10-01
In designing suitable isolators to reduce unwanted vibration in vehicles, the response from a mathematical model that characterizes the transmissibility ratio between the input and output of the vehicle is required. In this study, a Matlab Simulink model is developed to study the dynamic behaviour of a passive suspension system for a lightweight electric vehicle. The Simulink model is based on the two-degrees-of-freedom quarter-car model. The model is compared to theoretical plots of the transmissibility ratios between the amplitudes of the displacements and accelerations of the sprung and unsprung masses and the amplitudes of the ground, against frequency at different damping values. It was found that the frequency responses obtained from the theoretical calculations and from the Simulink simulation are comparable. Hence, the model may be extended to a full vehicle model.
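A minimal frequency-domain version of the two-degrees-of-freedom quarter-car transmissibility can be computed directly, without Simulink, by solving the harmonic equations of motion at each frequency. The parameter values below are illustrative, not the paper's:

```python
import numpy as np

# Quarter-car parameters (illustrative lightweight-EV values)
ms, mu = 250.0, 40.0           # sprung / unsprung mass (kg)
ks, kt = 15_000.0, 150_000.0   # suspension / tyre stiffness (N/m)
cs = 1_200.0                   # suspension damping (N s/m)

def transmissibility(freq_hz):
    """|x_sprung / y_road| of the 2-DOF quarter-car model at each frequency."""
    out = []
    for wi in 2.0 * np.pi * np.asarray(freq_hz, float):
        M = np.array([[ms, 0.0], [0.0, mu]])
        C = np.array([[cs, -cs], [-cs, cs]])
        K = np.array([[ks, -ks], [-ks, ks + kt]])
        A = -wi**2 * M + 1j * wi * C + K      # harmonic dynamic stiffness
        x = np.linalg.solve(A, np.array([0.0, kt]))  # road input enters via tyre
        out.append(abs(x[0]))
    return np.array(out)

freqs = np.linspace(0.1, 25.0, 500)
tr = transmissibility(freqs)
print(f"body-bounce peak near {freqs[np.argmax(tr)]:.1f} Hz")
```

With these numbers the sprung-mass transmissibility peaks a little above 1 Hz (body bounce) and rolls off at higher frequencies, the classic quarter-car shape.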
Pavon, Lorena Favaro; Sibov, Tatiana Tais; Caminada de Toledo, Silvia Regina; Mara de Oliveira, Daniela; Cabral, Francisco Romero; Gabriel de Souza, Jean; Boufleur, Pamela; Marti, Luciana C; Malheiros, Jackeline Moraes; Ferreira da Cruz, Edgar; Paiva, Fernando F; Malheiros, Suzana M F; de Paiva Neto, Manoel A; Tannús, Alberto; Mascarenhas de Oliveira, Sérgio; Silva, Nasjla Saba; Cappellano, Andrea Maria; Petrilli, Antonio Sérgio; Chudzinski-Tavassi, Ana Marisa; Cavalheiro, Sérgio
2018-04-24
Ependymoma (EPN), the third most common pediatric brain tumor, is a central nervous system (CNS) malignancy originating from the walls of the ventricular system. Surgical resection followed by radiation therapy has been the primary treatment for most pediatric intracranial EPNs. Despite numerous studies into the prognostic value of histological classification, the extent of surgical resection, and adjuvant radiotherapy, there have been relatively few studies into the molecular and cellular biology of EPNs. We elucidated the ultrastructure of cultured EPN cells and characterized their profile of immunophenotypic pluripotency markers (CD133, CD90, SSEA-3, CXCR4). We established an experimental EPN model by the intracerebroventricular infusion of EPN cells labeled with multimodal iron oxide nanoparticles (MION), thereby generating a tumor and providing a clinically relevant animal model. MRI analysis was shown to be a valuable tool when combined with effective MION labeling techniques to follow EPN growth. We demonstrated that GFAP/CD133+CD90+/CD44+ EPN cells maintained key histopathological and growth characteristics of the original patient tumor. The characterization of EPN cells and the experimental model could facilitate biological studies and preclinical drug screening for pediatric EPNs. In this work, we established notoriously challenging primary cell cultures of anaplastic EPNs (WHO grade III) localized in the posterior fossa (PF), using EPNs obtained from patients aged 1 to 10 years (n = 7), and then characterized their immunophenotype and ultrastructure to finally develop a xenograft model.
Insights into mortality patterns and causes of death through a process point of view model.
Anderson, James J; Li, Ting; Sharrow, David J
2017-02-01
Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health, and cause-of-death patterns. The model characterizes the twentieth-century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high-magnitude disease challenges on individuals at all vitality levels to low-magnitude stress challenges on low-vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age, are discussed.
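The challenge-vitality framing can be made concrete with a toy Strehler-Mildvan-style simulation: vitality declines linearly, and death occurs when a random challenge exceeds remaining vitality. All rates below are invented for illustration, not the Swedish-data parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy process point-of-view model (illustrative parameters)
v0, r = 1.0, 0.0125    # initial vitality, dissipation rate (per year)
lam, beta = 2.0, 0.12  # challenge rate (per year), mean challenge magnitude

def simulate_lifespan():
    """Age at the first Poisson challenge whose magnitude exceeds vitality."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)                # wait for next challenge
        if rng.exponential(beta) > max(v0 - r * t, 0.0):
            return t                                   # challenge exceeds vitality

ages = np.array([simulate_lifespan() for _ in range(20_000)])
print(f"median simulated lifespan: {np.median(ages):.1f} years")
# The implied hazard is Gompertz-like: h(t) = lam * exp(-(v0 - r*t) / beta)
```

Exponential challenge magnitudes make the hazard grow exponentially with age, which is how the Strehler-Mildvan construction recovers Gompertz mortality.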
Insights into mortality patterns and causes of death through a process point of view model
Anderson, James J.; Li, Ting; Sharrow, David J.
2016-01-01
Process point of view models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process point of view, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health, and cause-of-death patterns. The model characterizes the 20th century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high-magnitude disease challenges on individuals at all vitality levels to low-magnitude stress challenges on low-vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age, are discussed. PMID:27885527
NASA Astrophysics Data System (ADS)
Reimus, P. W.
2010-12-01
A process-oriented modeling approach is implemented to examine the importance of parameter variances, correlation lengths, and especially cross-correlations in contaminant transport predictions over large scales. It is shown that the most important consideration is the correlation between flow rates and retardation processes (e.g., sorption, matrix diffusion) in the system. If flow rates are negatively correlated with retardation factors in systems containing multiple flow pathways, then characterizing these negative correlation(s) may have more impact on reactive transport modeling than microscale information. Such negative correlations are expected in porous-media systems where permeability is negatively correlated with clay content and rock alteration (which are usually associated with increased sorption). Likewise, negative correlations are expected in fractured rocks where permeability is positively correlated with fracture apertures, which in turn are negatively correlated with sorption and matrix diffusion. Parameter variances and correlation lengths are also shown to have important effects on reactive transport predictions, but they are less important than parameter cross-correlations. Microscale information pertaining to contaminant transport has become more readily available as characterization methods and spectroscopic instrumentation have achieved lower detection limits, greater resolution, and better precision. Obtaining detailed mechanistic insights into contaminant-rock-water interactions is becoming a routine practice in characterizing reactive transport processes in groundwater systems (almost necessary for high-profile publications). Unfortunately, a quantitative link between microscale information and flow and transport parameter distributions or cross-correlations has not yet been established. 
One reason for this is that quantitative microscale information is difficult to obtain in complex, heterogeneous systems, so simple systems that lack the complexity and heterogeneity of real aquifer materials are often studied. Another is that instrumentation used to obtain microscale information often probes only one variable or family of variables at a time, so linkages to other variables must be inferred by indirect means from other lines of evidence. Despite these limitations, microscale information can be useful in the development and validation of reactive transport models. For example, knowledge of mineral phases that have strong affinities for contaminants can help in the development of cross-correlations between flow and sorption parameters via characterization of permeability and mineral distributions in aquifers. Likewise, microscale information on pore structures in low-permeability zones and contaminant penetration distances into these zones from higher-permeability zones (e.g., fractures) can provide valuable constraints on the representation of diffusive mass transfer processes between flowing porosity and secondary porosity. The prioritization of obtaining microscale information in any groundwater system can be informed by modeling exercises such as those conducted for this study.
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Oberweis, Andreas
Process-oriented information systems (POIS) are a promising realization form of computerized business information systems, aimed at increasing the flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations so as to ensure customer satisfaction and high quality of products and services. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets, as a basic methodology for business process modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors
Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S.; Raimondi, Manuela T.; Gottardi, Riccardo
2016-01-01
Next generation bioreactors are being developed to generate multiple human cell-based tissue analogs within the same fluidic system, to better recapitulate the complexity and interconnection of human physiology [1, 2]. The effective development of these devices requires a solid understanding of their interconnected fluidics, to predict the transport of nutrients and waste through the constructs and improve the design accordingly. In this work, we focus on a specific model of bioreactor, with multiple inputs/outputs, aimed at generating osteochondral constructs, i.e., a biphasic construct in which one side is cartilaginous in nature, while the other is osseous. We next develop a general computational approach to model the microfluidics of a multi-chamber, interconnected system that may be applied to human-on-chip devices. This objective requires overcoming several challenges at the level of computational modeling. The main one consists of addressing the multi-physics nature of the problem that combines free flow in channels with hindered flow in porous media. Fluid dynamics is also coupled with advection-diffusion-reaction equations that model the transport of biomolecules throughout the system and their interaction with living tissues and constructs. Ultimately, we aim at providing a predictive approach useful for the general organ-on-chip community. To this end, we have developed a lumped parameter approach that allows us to analyze the behavior of multi-unit bioreactor systems with modest computational effort, provided that the behavior of a single unit can be fully characterized. PMID:27669413
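The lumped parameter idea above can be sketched as a hydraulic-resistance network, directly analogous to an electrical circuit. The following minimal Python sketch, with illustrative resistance values rather than characterized bioreactor parameters, computes the flow split between a free-flow channel and a porous construct in parallel:

```python
# Lumped (hydraulic-resistance) sketch of a single bioreactor unit:
# flow splits between a channel and a porous construct in parallel,
# in series with an outlet channel. All values are illustrative.
R_channel, R_construct, R_outlet = 2.0, 8.0, 1.0   # e.g. mmHg*min/mL

# parallel branches combine like resistors in an electrical circuit
R_parallel = 1.0 / (1.0 / R_channel + 1.0 / R_construct)
R_total = R_parallel + R_outlet

dP = 10.0                      # hypothetical driving pressure difference
Q_total = dP / R_total         # total flow through the unit
# flow through the construct follows the current-divider rule
Q_construct = Q_total * R_channel / (R_channel + R_construct)
```

Because pressure drop plays the role of voltage and flow rate the role of current, chaining such units gives the kind of multi-unit system behavior that the lumped parameter approach makes cheap to analyze.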
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors.
Iannetti, Laura; D'Urso, Giovanna; Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S; Raimondi, Manuela T; Gottardi, Riccardo; Zunino, Paolo
NASA Astrophysics Data System (ADS)
Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto
2017-04-01
We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km². Analysis of available sedimentological information allows identifying a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two alternative conceptual schemes. The first is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed of distinct (five, in our case) lithological units. Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
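Variance-based first-order Sobol indices of the kind used above can be estimated by Monte Carlo with a pick-and-freeze scheme (a Saltelli-style estimator). The sketch below substitutes a deliberately simple linear toy model for the MODFLOW-2005 simulations; for this model the analytic first-order indices are 0.8 and 0.2:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=1):
    """Monte Carlo estimate of first-order Sobol indices (pick-and-freeze)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))          # two independent sample matrices
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    s = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # freeze all inputs except column i
        s[i] = np.mean(yB * (model(AB) - yA)) / var
    return s

# toy surrogate for "head at a well": linear in two uncertain inputs,
# so S1 = 4/5 and S2 = 1/5 analytically
model = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]
S = sobol_first_order(model, d=2)
```

In practice the expensive groundwater model is replaced by a polynomial chaos surrogate, as in the study, precisely so that estimators like this become affordable.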
Iwata, Michio; Miyawaki-Kuwakado, Atsuko; Yoshida, Erika; Komori, Soichiro; Shiraishi, Fumihide
2018-02-02
In a mathematical model, estimation of parameters from time-series data of metabolite concentrations in cells is a challenging task, and a broadly reliable approach to such estimation has not yet been established. Biochemical Systems Theory (BST) is a powerful methodology for constructing a power-law type model of a given metabolic reaction system and then characterizing it efficiently. In this paper, we discuss the use of an S-system root-finding method (S-system method) to estimate parameters from time-series data of metabolite concentrations. We demonstrate that the S-system method is superior to the Newton-Raphson method in terms of convergence region and iteration number. We also investigate the usefulness of a translocation technique and a complex-step differentiation method toward the practical application of the S-system method. The results indicate that the S-system method is useful for constructing mathematical models of a variety of metabolic reaction networks.
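To make the setting concrete, here is a minimal one-variable illustration (not the authors' implementation): a power-law S-system rate law, production minus degradation, whose steady state is found by Newton-Raphson root finding with a numerical derivative. The parameter values are illustrative.

```python
def s_system_rate(x, alpha=2.0, g=0.5, beta=1.0, h=1.5):
    """dX/dt of a one-pool S-system: power-law production minus degradation."""
    return alpha * x**g - beta * x**h

def newton_steady_state(f, x0, tol=1e-10, max_iter=100):
    """Newton-Raphson root finding with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        dx = 1e-6 * max(abs(x), 1.0)
        slope = (f(x + dx) - f(x - dx)) / (2.0 * dx)
        step = f(x) / slope
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# analytic steady state: alpha*x**g = beta*x**h -> x = (alpha/beta)**(1/(h-g)) = 2
x_ss = newton_steady_state(s_system_rate, x0=1.0)
```

Because power laws are linear in log space, S-system steady states also admit the direct algebraic treatment that the S-system method exploits to enlarge the convergence region.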
Pump-to-Wheels Methane Emissions from the Heavy-Duty Transportation Sector.
Clark, Nigel N; McKain, David L; Johnson, Derek R; Wayne, W Scott; Li, Hailin; Akkerman, Vyacheslav; Sandoval, Cesar; Covington, April N; Mongold, Ronald A; Hailer, John T; Ugarte, Orlando J
2017-01-17
Pump-to-wheels (PTW) methane emissions from the heavy-duty (HD) transportation sector, which have climate change implications, are poorly documented. In this study, methane emissions from HD natural gas fueled vehicles and the compressed natural gas (CNG) and liquefied natural gas (LNG) fueling stations that serve them were characterized. A novel measurement system was developed to quantify methane leaks and losses. Engine related emissions were characterized from twenty-two natural gas fueled transit buses, refuse trucks, and over-the-road (OTR) tractors. Losses from six LNG and eight CNG stations were characterized during compression, fuel delivery, storage, and from leaks. Cryogenic boil-off pressure rise and pressure control venting from LNG storage tanks were characterized using theoretical and empirical modeling. Field and laboratory observations of LNG storage tanks were used for model development and evaluation. PTW emissions were combined with a specific scenario to view emissions as a percent of throughput. Vehicle tailpipe and crankcase emissions were the highest sources of methane. Data from this research are being applied by the authors to develop models to forecast methane emissions from the future HD transportation sector.
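Expressing pump-to-wheels losses as a percent of throughput is a simple ratio over the measured loss categories; the sketch below uses entirely hypothetical numbers (not the study's measurements) to show the bookkeeping:

```python
# Hypothetical illustration of pump-to-wheels loss accounting.
# All quantities are made up for demonstration, not study results.
throughput_kg = 1000.0            # methane dispensed over the scenario
losses_kg = {
    "tailpipe": 8.0,              # engine-related emissions
    "crankcase": 4.0,
    "station_venting": 2.5,       # e.g. LNG boil-off pressure control
    "leaks": 1.5,
}

total_loss_kg = sum(losses_kg.values())
loss_pct = 100.0 * total_loss_kg / throughput_kg
```

The study's finding that tailpipe and crankcase emissions dominate corresponds to those categories carrying the largest weights in such a breakdown.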
A Coupled Probabilistic Wake Vortex and Aircraft Response Prediction Model
NASA Technical Reports Server (NTRS)
Gloudemans, Thijs; Van Lochem, Sander; Ras, Eelco; Malissa, Joel; Ahmad, Nashat N.; Lewis, Timothy A.
2016-01-01
Wake vortex spacing standards, along with weather and runway occupancy time, restrict terminal area throughput and impose major constraints on the overall capacity and efficiency of the National Airspace System (NAS). For more than two decades, the National Aeronautics and Space Administration (NASA) has been conducting research on characterizing wake vortex behavior in order to develop fast-time wake transport and decay prediction models. It is expected that the models can be used in the systems-level design of advanced air traffic management (ATM) concepts that safely increase the capacity of the NAS. It is also envisioned that at a later stage of maturity, these models could potentially be used operationally, in ground-based spacing and scheduling systems as well as on the flight deck.
Bayesian dynamical systems modelling in the social sciences.
Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T
2014-01-01
Data arising from social systems are often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factors to decide how many interaction terms should be included in the model. This method penalizes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
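A common computable surrogate for the Bayes factor is the Bayesian Information Criterion (BIC), which likewise penalizes overly complicated models. The Python sketch below (the authors' actual toolbox is in R, and the data here are synthetic) selects among nested candidate right-hand sides for dy/dt by least squares plus BIC:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "panel" data: the rate of change of y truly depends on y
# and on a non-linear interaction x*y (all values illustrative)
n = 200
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(0.0, 1.0, n)
dydt = 0.5 + 0.1 * y + 0.3 * x * y + rng.normal(0.0, 0.01, n)

def bic(X, target):
    """BIC of a least-squares fit; lower is better (complexity-penalized)."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = float(np.sum((target - X @ beta) ** 2))
    nobs, k = X.shape
    return nobs * np.log(rss / nobs) + k * np.log(nobs)

ones = np.ones(n)
candidates = {
    "const":       np.column_stack([ones]),
    "const+y":     np.column_stack([ones, y]),
    "const+y+x*y": np.column_stack([ones, y, x * y]),
}
scores = {name: bic(X, dydt) for name, X in candidates.items()}
best = min(scores, key=scores.get)   # model with the best penalized fit
```

Because the interaction term genuinely drives the synthetic data, the penalized criterion selects the full model rather than the simpler nested ones.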
Thermospheric dynamics - A system theory approach
NASA Technical Reports Server (NTRS)
Codrescu, M.; Forbes, J. M.; Roble, R. G.
1990-01-01
A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.
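The unit-sample-response construction can be illustrated on a small discrete-time example: for a linear(ized) system, convolving the unit-sample response with any input reproduces the direct simulation. The sketch below uses a generic first-order system standing in for one TGCM output variable, not the TGCM itself:

```python
import numpy as np

a, b = 0.9, 0.5                     # discrete first-order system: y[k+1] = a*y[k] + b*u[k]
n = 50
u = np.sin(0.3 * np.arange(n))      # arbitrary forcing (e.g., a geophysical input proxy)

# direct simulation of the system
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a * y[k] + b * u[k]

# unit sample response: the system's output for input [1, 0, 0, ...]
imp = np.zeros(n)
imp[0] = 1.0
h = np.zeros(n)
for k in range(n - 1):
    h[k + 1] = a * h[k] + b * imp[k]

# linearized surrogate: convolve the unit sample response with the input
y_surrogate = np.convolve(h, u)[:n]
```

For a truly linear time-invariant system the surrogate is exact; the paper's contribution is a linearization that preserves nonlinear features of the TGCM well enough for this cheap convolution-style evaluation to run on a personal computer.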
Geng, J.; Nlebedim, I. C.; Besser, M. F.; ...
2016-04-15
A bulk combinatorial approach for synthesizing alloy libraries using laser engineered net shaping (LENS; i.e., 3D printing) was utilized to rapidly assess material systems for magnetic applications. The LENS system feeds powders in different ratios into a melt pool created by a laser to synthesize samples with bulk (millimeters) dimensions. By analyzing these libraries with autosampler differential scanning calorimeter/thermal gravimetric analysis and vibrating sample magnetometry, we are able to rapidly characterize the thermodynamic and magnetic properties of the libraries. Furthermore, the Fe-Co binary alloy was used as a model system and the results were compared with data in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean
2016-08-02
This Integrated Research Project (IRP) was established to characterize key limiting phenomena related to the performance of used nuclear fuel (UNF) storage systems. This was an applied engineering project with a specific application in view (i.e., UNF dry storage). The completed tasks made use of a mixture of basic science and engineering methods. The overall objective was to create, or enable the creation of, predictive tools in the form of observation methods, phenomenological models, and databases that will enable the design, installation, and licensing of dry UNF storage systems that will be capable of containing UNF for extended periods of time.
Nonlinear Dynamics of a Foil Bearing Supported Rotor System: Simulation and Analysis
NASA Technical Reports Server (NTRS)
Li, Feng; Flowers, George T.
1996-01-01
Foil bearings provide noncontacting rotor support through a number of thin metal strips attached around the circumference of a stator and separated from the rotor by a fluid film. The resulting support stiffness is dominated by the characteristics of the foils and is a nonlinear function of the rotor deflection. The present study is concerned with characterizing this nonlinear effect and investigating its influence on rotordynamical behavior. A finite element model is developed for an existing bearing, the force versus deflection relation characterized, and the dynamics of a sample rotor system are studied. Some conclusions are discussed with regard to appropriate ranges of operation for such a system.
A simulation framework for the CMS Track Trigger electronics
NASA Astrophysics Data System (ADS)
Amstutz, C.; Magazzù, G.; Weber, M.; Palla, F.
2015-03-01
A simulation framework has been developed to test and characterize algorithms, architectures and hardware implementations of the vastly complex CMS Track Trigger for the high luminosity upgrade of the CMS experiment at the Large Hadron Collider in Geneva. High-level SystemC models of all system components have been developed to simulate a portion of the track trigger. The simulation of the system components together with input data from physics simulations allows evaluating figures of merit, like delays or bandwidths, under realistic conditions. The use of SystemC for high-level modelling allows co-simulation with models developed in Hardware Description Languages, e.g. VHDL or Verilog. Therefore, the simulation framework can also be used as a test bench for digital modules developed for the final system.
Fostering synergy between cell biology and systems biology.
Eddy, James A; Funk, Cory C; Price, Nathan D
2015-08-01
In the shared pursuit of elucidating detailed mechanisms of cell function, systems biology presents a natural complement to ongoing efforts in cell biology. Systems biology aims to characterize biological systems through integrated and quantitative modeling of cellular information. The process of model building and analysis provides value through synthesizing and cataloging information about cells and molecules, predicting mechanisms and identifying generalizable themes, generating hypotheses and guiding experimental design, and highlighting knowledge gaps and refining understanding. In turn, incorporating domain expertise and experimental data is crucial for building towards whole cell models. An iterative cycle of interaction between cell and systems biologists advances the goals of both fields and establishes a framework for mechanistic understanding of the genome-to-phenome relationship.
Glisson, Charles; Green, Philip; Williams, Nathaniel J
2012-09-01
The study: (1) provides the first assessment of the a priori measurement model and psychometric properties of the Organizational Social Context (OSC) measurement system in a US nationwide probability sample of child welfare systems; (2) illustrates the use of the OSC in constructing norm-based organizational culture and climate profiles for child welfare systems; and (3) estimates the association of child welfare system-level organizational culture and climate profiles with individual caseworker-level job satisfaction and organizational commitment. The study applies confirmatory factor analysis (CFA) and hierarchical linear models (HLM) analysis to a US nationwide sample of 1,740 caseworkers from 81 child welfare systems participating in the second National Survey of Child and Adolescent Wellbeing (NSCAW II). The participating child welfare systems were selected using a national probability procedure reflecting the number of children served by child welfare systems nationwide. The a priori OSC measurement model is confirmed in this nationwide sample of child welfare systems. In addition, caseworker responses to the OSC scales generate acceptable to high scale reliabilities, moderate to high within-system agreement, and significant between-system differences. Caseworkers in the child welfare systems with the best organizational culture and climate profiles report higher levels of job satisfaction and organizational commitment. Organizational climates characterized by high engagement and functionality, and organizational cultures characterized by low rigidity are associated with the most positive work attitudes. The OSC is the first valid and reliable measure of organizational culture and climate with US national norms for child welfare systems. The OSC provides a useful measure of Organizational Social Context for child welfare service improvement and implementation research efforts which include a focus on child welfare system culture and climate. 
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer
NASA Astrophysics Data System (ADS)
Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.
2016-12-01
Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations in favor of model calibratability. As more complexities are modeled, e.g., by adding more layers and/or zones, or introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than conventional deterministic layer/zone-based conceptual representations.
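At its core, transition probability geostatistics describes how likely one material class is to follow another over a given lag. A one-dimensional (vertical) analog is a Markov chain over facies classes; the sketch below uses hypothetical transition probabilities, not values conditioned on the Michigan borehole data:

```python
import numpy as np

facies = ["AQ", "MAQ", "PCM", "CM"]
# hypothetical cell-to-cell vertical transition probabilities (rows sum to 1)
P = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.15, 0.70, 0.10, 0.05],
    [0.05, 0.10, 0.70, 0.15],
    [0.05, 0.05, 0.15, 0.75],
])

def simulate_column(P, n_cells=2000, start=0, seed=0):
    """Simulate one borehole-like column of facies indices."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_cells - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

col = simulate_column(P)   # indices into the facies list
```

High diagonal entries produce thick, persistent beds of each material; conditioning such chains on observed borehole sequences is what turns this toy into a TP geostatistical model.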
Archetypes for Organisational Safety
NASA Technical Reports Server (NTRS)
Marais, Karen; Leveson, Nancy G.
2003-01-01
We propose a framework using system dynamics to model the dynamic behavior of organizations in accident analysis. Most current accident analysis techniques are event-based and do not adequately capture the dynamic complexity and non-linear interactions that characterize accidents in complex systems. In this paper we propose a set of system safety archetypes that model common safety culture flaws in organizations, i.e., the dynamic behavior of organizations that often leads to accidents. As accident analysis and investigation tools, the archetypes can be used to develop dynamic models that describe the systemic and organizational factors contributing to an accident. The archetypes help clarify why safety-related decisions do not always result in the desired behavior, and how independent decisions in different parts of the organization can combine to impact safety.
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, and (iii) we show the excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations.
All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we predict the scalability of the Spike algorithm on up to 65,536 cores with our model. This paper extends the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
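The parameterization step can be sketched as fitting a simple analytical cost model, for example T(p) = a + b*n/p + c*log2(p) for serial, compute, and communication terms, to observed runtimes and then extrapolating to 65,536 cores. The model form, coefficients, and timings below are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# synthetic strong-scaling runtimes for a fixed problem size n
n = 1_000_000
p = np.array([64, 128, 256, 512, 1024])
a_true, b_true, c_true = 0.5, 2e-5, 0.03   # hypothetical serial/compute/comm terms
T = a_true + b_true * n / p + c_true * np.log2(p)

# parameterize the analytical model from observed runtimes (least squares)
X = np.column_stack([np.ones_like(p, float), n / p, np.log2(p)])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)

# extrapolate the fitted model to 65,536 cores
T_pred = coef @ [1.0, n / 65_536, np.log2(65_536)]
```

The fitted coefficients also localize bottlenecks: a dominant c term flags communication as the scalability limiter, which is the kind of insight the paper's pseudo-analytical model is designed to expose.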
Matrix Characterization and Development for the Vacuum Assisted Resin Transfer Molding Process
NASA Technical Reports Server (NTRS)
Grimsley, B. W.; Hubert, P.; Hou, T. H.; Cano, R. J.; Loos, A. C.; Pipes, R. B.
2001-01-01
The curing kinetics and viscosity of an epoxy resin system, SI-ZG-5A, have been characterized for application in the vacuum assisted resin transfer molding (VARTM) process. Impregnation of a typical carbon fiber preform provided the test bed for the characterization. Process simulations were carried out using the process model COMPRO to examine heat transfer and curing kinetics for a fully impregnated panel, neglecting resin flow. The predicted viscosity profile and final degree of cure were found to be in good agreement with experimental observations.
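Cure-kinetics characterization of this kind typically yields an Arrhenius-type autocatalytic rate law for the degree of cure. The sketch below integrates a generic Kamal-type model with forward Euler; all parameter values are illustrative assumptions, not the characterized SI-ZG-5A values:

```python
import math

# generic autocatalytic (Kamal-type) cure model: parameters are illustrative
A_pre, E_act = 1.0e5, 6.0e4     # pre-exponential (1/s) and activation energy (J/mol)
R_gas = 8.314                   # J/(mol*K)
m_exp, n_exp = 0.5, 1.5         # reaction-order exponents

def cure_rate(alpha, T_kelvin):
    """d(alpha)/dt: Arrhenius rate times the autocatalytic shape factor."""
    k = A_pre * math.exp(-E_act / (R_gas * T_kelvin))
    return k * alpha**m_exp * (1.0 - alpha)**n_exp

# forward-Euler integration of a hypothetical isothermal cure at 450 K
alpha, dt = 1e-3, 0.1           # small initial conversion, 0.1 s steps
history = [alpha]
for _ in range(20_000):         # 2000 s of simulated cure
    alpha = min(alpha + dt * cure_rate(alpha, 450.0), 1.0 - 1e-12)
    history.append(alpha)
```

In a full process model such as COMPRO, a cure-dependent viscosity law is layered on top of this kinetics so the simulation can track when the resin gels.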
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Spatial Distribution of Fate and Transport Parameters Using Cxtfit in a Karstified Limestone Model
NASA Astrophysics Data System (ADS)
Toro, J.; Padilla, I. Y.
2017-12-01
Karst environments have a high capacity to transport and store large amounts of water. This makes karst aquifers a productive resource for human consumption and ecological integrity, but also makes them vulnerable to potential contamination by hazardous chemical substances. The high heterogeneity and anisotropy of karst aquifer properties make them very difficult to characterize for accurate prediction of contaminant mobility and persistence in groundwater. Current technologies to characterize and quantify flow and transport processes at the field scale are limited by the low resolution of spatiotemporal data. To enhance this resolution and provide essential knowledge of karst groundwater systems, studies at the laboratory scale can be conducted. This work uses an intermediate karstified lab-scale physical model (IKLPM) to study fate and transport processes and assess viable tools to characterize heterogeneities in karst systems. Transport experiments are conducted in the IKLPM using step injections of calcium chloride, uranine, and Rhodamine WT tracers. Temporal concentration distributions (TCDs) obtained from the experiments are analyzed using the method of moments and CXTFIT to quantify fate and transport parameters in the system at various flow rates. The spatial distribution of the estimated fate and transport parameters for the tracers revealed high variability related to preferential-flow heterogeneities and scale dependence. Results are integrated to define spatially variable transport regions within the system and assess their fate and transport characteristics.
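The method of temporal moments reduces a breakthrough curve to its mass, mean arrival time, and spread, from which velocity and dispersion can be inferred. A minimal sketch on a synthetic Gaussian TCD (illustrative data, not the IKLPM measurements):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule integral (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def temporal_moments(t, c):
    """Zeroth moment (mass proxy), mean arrival time, and variance of a TCD."""
    m0 = trapezoid(c, t)
    mean = trapezoid(t * c, t) / m0
    var = trapezoid((t - mean) ** 2 * c, t) / m0
    return m0, mean, var

# synthetic breakthrough curve: Gaussian pulse centered at t = 40 (sigma = 5)
t = np.linspace(0.0, 100.0, 2001)
c = np.exp(-0.5 * ((t - 40.0) / 5.0) ** 2)
m0, mean_arrival, variance = temporal_moments(t, c)
```

Repeating this at each sampling port yields the spatial distribution of transport parameters; strong tailing or multi-peaked TCDs, typical of preferential flow in karst, then show up as inflated variances.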
Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.
2018-02-01
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
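For context, the ensemble-based analysis step that such model-error characterization feeds into can be sketched as a perturbed-observation ensemble Kalman filter update. This is the generic DA algorithm, not the paper's SDMU method; all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y_obs, H, obs_var):
    """Perturbed-observation EnKF analysis step.
    ensemble: (n_state, n_members) forecast ensemble
    y_obs: observation vector, H: linear observation operator."""
    n = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    Pxy = X @ HXp.T / (n - 1)                     # state-obs covariance
    Pyy = HXp @ HXp.T / (n - 1) + obs_var * np.eye(H.shape[0])
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), (H.shape[0], n))
    return ensemble + K @ (y_pert - HX)

# Two hidden states, only the first is observed (partial observation,
# as in the paper's setting)
ens = rng.normal([[0.0], [0.0]], 1.0, (2, 500))
H = np.array([[1.0, 0.0]])
post = enkf_update(ens, np.array([[1.5]]), H, obs_var=0.25)
print(post[0].mean())   # pulled from 0 toward the observation 1.5
```

Mischaracterized model error enters this update through the spread of the forecast ensemble; if that spread is wrong, the gain K over- or under-weights the observations, which is the failure mode the SDMU approach targets.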
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
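Hyper-dual numbers yield exact first and second derivatives with no truncation error, which is what makes them attractive for parameterization. A minimal sketch (not the report's implementation):

```python
class HyperDual:
    """Hyper-dual number a + b*e1 + c*e2 + d*e1*e2, where
    e1**2 = e2**2 = 0 but e1*e2 != 0. Evaluating f(HyperDual(x, 1, 1, 0))
    gives f(x) in .re, f'(x) in .e1 (or .e2), and f''(x) in .e1e2."""

    def __init__(self, re, e1=0.0, e2=0.0, e1e2=0.0):
        self.re, self.e1, self.e2, self.e1e2 = re, e1, e2, e1e2

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.re + o.re, self.e1 + o.e1,
                         self.e2 + o.e2, self.e1e2 + o.e1e2)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(
            self.re * o.re,
            self.re * o.e1 + self.e1 * o.re,
            self.re * o.e2 + self.e2 * o.re,
            self.re * o.e1e2 + self.e1 * o.e2 + self.e2 * o.e1 + self.e1e2 * o.re)
    __rmul__ = __mul__

x = HyperDual(2.0, 1.0, 1.0, 0.0)
f = x * x * x                  # f(x) = x**3
print(f.re, f.e1, f.e1e2)      # 8.0 12.0 12.0 -> f(2), f'(2), f''(2)
```

For f(x) = x³ at x = 2 the exact values are f = 8, f' = 3x² = 12, f'' = 6x = 12, which the hyper-dual evaluation reproduces to machine precision.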
A causal framework for integrating contemporary and Vedic holism.
Kineman, John J
2017-12-01
Whereas the last century of science was characterized by epistemological uncertainty, the current century will likely be characterized by ontological complexity (Gorban and Yablonsky, 2013). Advances in Systems Theory by mathematical biologist Robert Rosen suggest an elegant way forward (Rosen, 2013). "R-theory" (Kineman, 2012) is a synthesis of Rosen's theories explaining complexity and life in terms of a meta-model for 'whole' systems (and their fractions) in terms of "5th-order holons". Such holons are Rosen "modeling relations" relating system-dependent processes with their formative contexts via closed cycles of four archetypal (Aristotelian) causes. This approach has post-predicted the three most basic taxa of life, plus a quasi-organismic form that may describe proto, component, and ecosystemic life. R-theory thus suggests a fundamentally complex ontology of existence inverting the current view that complexity arises from simple mechanisms. This model of cyclical causality corresponds to the ancient meta-model described in the Vedas and Upanishads of India. Part I of this discussion (Kineman, 2016a) presented a case for associating Vedic philosophy with Harappan civilization, allowing interpretation of ancient concepts of "cosmic order" (Rta) in the Rig Veda, nonduality (advaita), seven-fold beingness (saptanna) and other forms of holism appearing later in the Upanishads. By deciphering the model of wholeness that was applied and tested in ancient times, it is possible to compare, test, and confirm the holon model as a mathematical definition of life, systemic wholeness, and sustainability that may be applied today in modern terms, even as a foundation for holistic science. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling of Wildlife-Associated Zoonoses: Applications and Caveats
Lewis, Bryan L.; Marathe, Madhav; Eubank, Stephen; Blackburn, Jason K.
2012-01-01
Wildlife species are identified as an important source of emerging zoonotic disease. Accordingly, public health programs have attempted to expand in scope to include a greater focus on wildlife and its role in zoonotic disease outbreaks. Zoonotic disease transmission dynamics involving wildlife are complex and nonlinear, presenting a number of challenges. First, empirical characterization of wildlife host species and pathogen systems is often lacking, and insight into one system may have little application to another involving the same host species and pathogen. Pathogen transmission characterization is difficult due to the changing nature of population size and density associated with wildlife hosts. Infectious disease itself may influence wildlife population demographics through compensatory responses that may evolve, such as decreased age to reproduction. Furthermore, wildlife reservoir dynamics can be complex, involving various host species and populations that may vary in their contribution to pathogen transmission and persistence over space and time. Mathematical models can provide an important tool to engage these complex systems, and there is an urgent need for increased computational focus on the coupled dynamics that underlie pathogen spillover at the human–wildlife interface. Often, however, scientists conducting empirical studies on emerging zoonotic disease do not have the necessary skill base to choose, develop, and apply models to evaluate these complex systems. How do modeling frameworks differ and what considerations are important when applying modeling tools to the study of zoonotic disease? Using zoonotic disease examples, we provide an overview of several common approaches and general considerations important in the modeling of wildlife-associated zoonoses. PMID:23199265
NASA Astrophysics Data System (ADS)
Bernard, F.; Casset, F.; Danel, J. S.; Chappaz, C.; Basrour, S.
2016-08-01
This paper presents for the first time the characterization of a smartphone-size haptic rendering system based on the friction modulation effect. According to previous work and finite element modeling, the homogeneous flexural modes are needed to obtain the haptic feedback effect. The device studied consists of thin-film AlN transducers deposited on a 110 × 65 mm2 glass substrate. The placement of the transducers on the glass plate leaves a transparent central area of 90 × 49 mm2. Electrical and mechanical parameters of the system are extracted from measurement. From this extraction, electrical impedance matching reduced the applied voltage to 17.5 V AC and the power consumption to 1.53 W at the resonance frequency of the vibrating system, meeting the haptic rendering specification. Transient characterization of the actuation shows a response delay below the threshold of dynamic tactile detection. Characterization of the AlN transducers used as sensors, including noise rejection, delay, and output charge amplitude, allows high-accuracy detection of any variation due to external influences. These specifications are the first step toward a low-power-consumption feedback-looped system.
Crystal plasticity finite element analysis of deformation behaviour in SAC305 solder joint
NASA Astrophysics Data System (ADS)
Darbandi, Payam
Due to the awareness of the potential health hazards associated with the toxicity of lead (Pb), actions have been taken to eliminate or reduce the use of Pb in consumer products. Among those, tin (Sn) solders have been used for the assembly of electronic systems. Anisotropy is of significant importance in all structural metals, but this characteristic is unusually strong in Sn, making Sn-based solder joints one of the best examples of the influence of anisotropy. The effect of anisotropy arising from the crystal structure of tin and large-grain microstructure on the microstructure and the evolution of constitutive responses of microscale SAC305 solder joints is investigated. Insights into the effects of key microstructural features and dominant plastic deformation mechanisms influencing the measured relative activity of slip systems in SAC305 are obtained from a combination of optical microscopy, orientation imaging microscopy (OIM), slip plane trace analysis, and crystal plasticity finite element (CPFE) modeling. Package-level SAC305 specimens were subjected to shear deformation in sequential steps and characterized using optical microscopy and OIM to identify the activity of slip systems. X-ray micro Laue diffraction and a high-energy monochromatic X-ray beam were employed to characterize the joint-scale tensile samples, providing the information needed to compare against and validate the CPFE model. A CPFE model was developed that can account for the relative ease of activating slip systems in SAC305 solder, based upon a statistical estimate of the correlation between the critical resolved shear stress and the probability of activating various slip systems.
Simulation results show that the CPFE model developed using this statistical analysis of slip-system activity not only satisfies the kinematic requirements of plastic deformation in the crystal coordinate system (slip-system activity) and the global coordinate system (shape changes), but also predicts the evolution of stress in the joint-level SAC305 sample.
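The central quantity in a crystal plasticity model of this kind is the resolved shear stress on each slip system, which is compared against the critical resolved shear stress to decide slip activity. A generic Schmid-law sketch follows; the slip system and stress state are illustrative, not SAC305 (body-centered tetragonal Sn) data:

```python
import numpy as np

def resolved_shear_stress(sigma, slip_dir, slip_normal):
    """Schmid resolved shear stress tau = s . (sigma @ n) for a slip
    system with slip direction s and slip-plane normal n (both
    normalized here)."""
    s = np.asarray(slip_dir, float)
    s /= np.linalg.norm(s)
    n = np.asarray(slip_normal, float)
    n /= np.linalg.norm(n)
    return s @ (np.asarray(sigma, float) @ n)

# Uniaxial tension of 100 MPa along z, hypothetical slip system
sigma = np.diag([0.0, 0.0, 100.0])
tau = resolved_shear_stress(sigma, [1, 0, 1], [-1, 0, 1])
print(tau)   # Schmid factor 0.5 -> 50 MPa
```

In a CPFE framework this calculation is performed for every candidate slip system at every integration point; the statistical treatment described in the abstract would then weight the ease of activation across those systems.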
Modeling the system dynamics for nutrient removal in an innovative septic tank media filter.
Xuan, Zhemin; Chang, Ni-Bin; Wanielista, Martin
2012-05-01
A next-generation septic tank media filter to replace or enhance current on-site wastewater treatment drainfields was proposed in this study. Unit operations with known treatment efficiencies, flow pattern identification, and system dynamics modeling were cohesively combined in order to prove the concept of a newly developed media filter. A multicompartmental model addressing system dynamics and feedbacks, based on our assumed microbiological processes accounting for aerobic, anoxic, and anaerobic conditions in the media filter, was constructed and calibrated with the aid of in situ measurements and an understanding of the flow patterns. The calibrated system dynamics model was then applied in a sensitivity analysis under changing inflow conditions based on the rates of nitrification and denitrification characterized through field-scale testing. This advancement may contribute to the design of such drainfield media filters in household septic tank systems in the future.
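As an illustration of the kind of compartmental nitrogen dynamics such a model encodes, here is a minimal two-process sketch with assumed first-order rate constants and initial concentrations (not the paper's calibrated values): nitrification converts NH4 to NO3 in the aerobic zone, and denitrification removes NO3 in the anoxic zone.

```python
# Minimal nitrogen-budget sketch for one filter compartment
k_nit, k_den = 0.8, 0.5          # first-order rates, 1/day (assumed)
dt, days = 0.01, 20.0            # forward-Euler step and horizon
nh4, no3 = 40.0, 1.0             # initial concentrations, mg-N/L (assumed)

for _ in range(int(days / dt)):
    d_nh4 = -k_nit * nh4                  # nitrification sink
    d_no3 = k_nit * nh4 - k_den * no3     # nitrification source, denit sink
    nh4 += d_nh4 * dt
    no3 += d_no3 * dt

print(round(nh4, 4), round(no3, 4))  # both pools nearly exhausted
```

A calibrated multicompartmental model would couple several such compartments with measured flows and feedbacks, but the core mass-balance bookkeeping is the same.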
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the VHSIC Hardware Description Language with its Analog and Mixed-Signal extensions).
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures
A SYSTEMS APPROACH TO CHARACTERIZING AND PREDICTING THYROID TOXICITY USING AN AMPHIBIAN MODEL
The EPA was recently mandated to evaluate the potential effects of chemicals on endocrine function and has identified Xenopus as a model organism to use as the basis for a thyroid disruption screening assay. The main objective of this work is to develop a hypothalamic-pituitary-t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Bjornstad, Bruce N.; Christensen, John N.
2010-02-01
The Integrated Field-Scale Subsurface Research Challenge (IFRC) at the Hanford Site 300 Area uranium (U) plume addresses multi-scale mass transfer processes in a complex hydrogeologic setting where groundwater and riverwater interact. A series of forefront science questions on mass transfer are posed for research which relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements and approaches needed to characterize and model a mass-transfer dominated system. The project was initiated in February 2007, with CY 2007 and CY 2008 progress summarized in preceding reports. The site has 35 instrumented wells, and an extensive monitoring system. It includes a deep borehole for microbiologic and biogeochemical research that sampled the entire thickness of the unconfined 300 A aquifer. Significant, impactful progress has been made in CY 2009 with completion of extensive laboratory measurements on field sediments, field hydrologic and geophysical characterization, four field experiments, and modeling. The laboratory characterization results are being subjected to geostatistical analyses to develop spatial heterogeneity models of U concentration and chemical, physical, and hydrologic properties needed for reactive transport modeling. The field experiments focused on: (1) physical characterization of the groundwater flow field during a period of stable hydrologic conditions in early spring, (2) comprehensive groundwater monitoring during spring to characterize the release of U(VI) from the lower vadose zone to the aquifer during water table rise and fall, (3) dynamic geophysical monitoring of salt-plume migration during summer, and (4) a U reactive tracer experiment (desorption) during the fall.
Geophysical characterization of the well field was completed using the down-well Electrical Resistance Tomography (ERT) array, with results subjected to robust, geostatistically constrained inversion analyses. These measurements along with hydrologic characterization have yielded 3D distributions of hydraulic properties that have been incorporated into an updated and increasingly robust hydrologic model. Based on significant findings from the microbiologic characterization of deep borehole sediments in CY 2008, down-hole biogeochemistry studies were initiated where colonization substrates and spatially discrete water and gas samplers were deployed to select wells. The increasingly comprehensive field experimental results, along with the field and laboratory characterization, are leading to a new conceptual model of U(VI) flow and transport in the IFRC footprint and the 300 Area in general, and insights on the microbiological community and associated biogeochemical processes. A significant issue related to vertical flow in the IFRC wells was identified and evaluated during the spring and fall field experimental campaigns. Both upward and downward flows were observed in response to dynamic Columbia River stage. The vertical flows are caused by the interaction of pressure gradients with our heterogeneous hydraulic conductivity field. These impacts are being evaluated with additional modeling and field activities to facilitate interpretation and mitigation. The project moves into CY 2010 with ambitious plans for drilling additional wells for the IFRC well field, additional experiments, and modeling. This research is part of the ERSP Hanford IFRC at Pacific Northwest National Laboratory.
Multiple memory systems as substrates for multiple decision systems
Doll, Bradley B.; Shohamy, Daphna; Daw, Nathaniel D.
2014-01-01
It has recently become widely appreciated that value-based decision making is supported by multiple computational strategies. In particular, animal and human behavior in learning tasks appears to include habitual responses described by prominent model-free reinforcement learning (RL) theories, but also more deliberative or goal-directed actions that can be characterized by a different class of theories, model-based RL. The latter theories evaluate actions by using a representation of the contingencies of the task (as with a learned map of a spatial maze), called an “internal model.” Given the evidence of behavioral and neural dissociations between these approaches, they are often characterized as dissociable learning systems, though they likely interact and share common mechanisms. In many respects, this division parallels a longstanding dissociation in cognitive neuroscience between multiple memory systems, describing, at the broadest level, separate systems for declarative and procedural learning. Procedural learning has notable parallels with model-free RL: both involve learning of habits and both are known to depend on parts of the striatum. Declarative memory, by contrast, supports memory for single events or episodes and depends on the hippocampus. The hippocampus is thought to support declarative memory by encoding temporal and spatial relations among stimuli and thus is often referred to as a relational memory system. Such relational encoding is likely to play an important role in learning an internal model, the representation that is central to model-based RL. Thus, insofar as the memory systems represent more general-purpose cognitive mechanisms that might subserve performance on many sorts of tasks including decision making, these parallels raise the question whether the multiple decision systems are served by multiple memory systems, such that one dissociation is grounded in the other. 
Here we investigated the relationship between model-based RL and relational memory by comparing individual differences across behavioral tasks designed to measure either capacity. Human subjects performed two tasks, a learning and generalization task (acquired equivalence) which involves relational encoding and depends on the hippocampus; and a sequential RL task that could be solved by either a model-based or model-free strategy. We assessed the correlation between subjects’ use of flexible, relational memory, as measured by generalization in the acquired equivalence task, and their differential reliance on either RL strategy in the decision task. We observed a significant positive relationship between generalization and model-based, but not model-free, choice strategies. These results are consistent with the hypothesis that model-based RL, like acquired equivalence, relies on a more general-purpose relational memory system. PMID:24846190
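The distinction between the two RL strategies can be made concrete with a toy two-stage task. This is a generic illustration of the model-free versus model-based update rules, not the acquired-equivalence or sequential task used in the study; the transitions, rewards, and learning rate are assumed.

```python
import numpy as np

# Toy deterministic task: action 0 leads to state 1, action 1 to
# state 2; reward is delivered only in the second-stage states.
T = {0: 1, 1: 2}             # transition model
R = {1: 0.0, 2: 1.0}         # terminal rewards
alpha = 0.1                  # learning rate (assumed)

# Model-free: cached action values updated by reward prediction error
Q_mf = np.zeros(2)

# Model-based: values recomputed on demand from the internal model (T, R)
def q_mb(a):
    return R[T[a]]

for _ in range(200):
    a = 1 if Q_mf[1] >= Q_mf[0] else 0    # greedy model-free choice
    r = R[T[a]]
    Q_mf[a] += alpha * (r - Q_mf[a])      # temporal-difference update

print(Q_mf)               # model-free values accrue gradually from experience
print(q_mb(0), q_mb(1))   # model-based values are available immediately: 0.0 1.0
```

The key contrast: the model-free agent must repeatedly experience outcomes to update its cached values, while the model-based agent can evaluate actions by prospectively traversing its internal model, which is why the latter is hypothesized to rely on relational (hippocampal) memory.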
Transdisciplinary application of the cross-scale resilience model
Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.
2014-01-01
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.
Photophysical characterization of a cytidine-guanosine tethered phthalocyanine-fullerene dyad.
Torres, Tomas; Gouloumis, Andreas; Sanchez-Garcia, David; Jayawickramarajah, Janarthanan; Seitz, Wolfgang; Guldi, Dirk M; Sessler, Jonathan L
2007-01-21
A new non-covalent electron transfer model system, based on the use of cytidine-guanosine hydrogen bonding interactions, is described that incorporates a phthalocyanine photodonor and a C60 fullerene acceptor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
Effects of model layer simplification using composite hydraulic properties
Kuniansky, Eve L.; Sepúlveda, Nicasio; Elango, Lakshmanan
2011-01-01
Groundwater provides much of the fresh drinking water to more than 1.5 billion people in the world (Clarke et al., 1996), and in the United States more than 50 percent of citizens rely on groundwater for drinking water (Solley et al., 1998). As aquifer systems are developed for water supply, the hydrologic system is changed. Water pumped from the aquifer system initially can come from some combination of inducing more recharge, water permanently removed from storage, and decreased groundwater discharge. Once a new equilibrium is achieved, all of the pumpage must come from induced recharge and decreased discharge (Alley et al., 1999). Further development of groundwater resources may result in reductions of surface water runoff and base flows. Competing demands for groundwater resources require good management. Adequate data to characterize the aquifers and confining units of the system, like hydrologic boundaries, groundwater levels, streamflow, and groundwater pumping and climatic data for recharge estimation, are to be collected in order to quantify the effects of groundwater withdrawals on wetlands, streams, and lakes. Once collected, three-dimensional (3D) groundwater flow models can be developed and calibrated and used as a tool for groundwater management. The main hydraulic parameters that comprise a regional or subregional model of an aquifer system are the hydraulic conductivity and storage properties of the aquifers and confining units (hydrogeologic units) that make up the system. Many 3D groundwater flow models used to help assess groundwater/surface-water interactions require calculating "effective" or composite hydraulic properties of multilayered lithologic units within a hydrogeologic unit. The calculation of composite hydraulic properties stems from the need to characterize groundwater flow using coarse model layering in order to reduce simulation times while still representing the flow through the system accurately.
The accuracy of flow models with simplified layering and hydraulic properties will depend on the effectiveness of the methods used to determine composite hydraulic properties from a number of lithologic units.
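The composite-property calculation itself follows the standard layered-aquifer formulas: a thickness-weighted arithmetic mean for flow parallel to the layering and a harmonic mean for flow across it. A minimal sketch with hypothetical layer thicknesses and conductivities:

```python
def composite_conductivity(thickness, K):
    """Composite hydraulic conductivity of stacked lithologic units
    collapsed into one model layer: thickness-weighted arithmetic
    mean for horizontal flow (Kh) and harmonic mean for vertical
    flow (Kv)."""
    b_total = sum(thickness)
    Kh = sum(b * k for b, k in zip(thickness, K)) / b_total
    Kv = b_total / sum(b / k for b, k in zip(thickness, K))
    return Kh, Kv

# Hypothetical stack: sand (10 m, 30 m/d) over clay (2 m, 0.001 m/d)
# over sand (8 m, 20 m/d)
Kh, Kv = composite_conductivity([10, 2, 8], [30.0, 0.001, 20.0])
print(Kh, Kv)   # Kh dominated by the sands; Kv controlled by the clay
```

The example shows why layer simplification is delicate: a thin clay barely moves the horizontal composite but collapses the vertical one by three orders of magnitude, so the choice of which units to lump determines how faithfully the coarse model reproduces cross-layer flow.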
C++, object-oriented programming, and astronomical data models
NASA Technical Reports Server (NTRS)
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
NASA Technical Reports Server (NTRS)
Lam, N.; Qiu, H.-I.; Quattrochi, Dale A.; Zhao, Wei
1997-01-01
With the rapid increase in spatial data, especially in the NASA-EOS (Earth Observing System) era, it is necessary to develop efficient and innovative tools to handle and analyze these data so that environmental conditions can be assessed and monitored. A main difficulty facing geographers and environmental scientists in environmental assessment and measurement is that spatial analytical tools are not easily accessible. We have recently developed a remote sensing/GIS software module called Image Characterization and Modeling System (ICAMS) to provide specialized spatial analytical tools for the measurement and characterization of satellite and other forms of spatial data. ICAMS runs on both the Intergraph-MGE and Arc/info UNIX and Windows-NT platforms. The main techniques in ICAMS include fractal measurement methods, variogram analysis, spatial autocorrelation statistics, textural measures, aggregation techniques, normalized difference vegetation index (NDVI), and delineation of land/water and vegetated/non-vegetated boundaries. In this paper, we demonstrate the main applications of ICAMS on the Intergraph-MGE platform using Landsat Thematic Mapper images from the city of Lake Charles, Louisiana. While the utilities of ICAMS' spatial measurement methods (e.g., fractal indices) in assessing environmental conditions remain to be researched, making the software available to a wider scientific community can permit the techniques in ICAMS to be evaluated and used for a diversity of applications. The findings from these various studies should lead to improved algorithms and more reliable models for environmental assessment and monitoring.
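Of the indices listed for ICAMS, NDVI is the simplest to make concrete: NDVI = (NIR − Red) / (NIR + Red), computed per pixel from the red and near-infrared bands (for Landsat TM, bands 3 and 4). A minimal sketch with made-up reflectance values:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index per pixel.
    red, nir: reflectance arrays of equal shape."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    out = np.zeros_like(red)
    mask = (nir + red) != 0            # avoid division by zero
    out[mask] = (nir[mask] - red[mask]) / (nir[mask] + red[mask])
    return out

# Hypothetical 2x2 reflectance patches
red = np.array([[0.10, 0.30], [0.05, 0.25]])
nir = np.array([[0.50, 0.35], [0.45, 0.20]])
print(ndvi(red, nir))   # high values indicate dense vegetation
```

Values near +1 indicate vegetated pixels and values near or below 0 indicate water, bare soil, or built surfaces, which is how NDVI supports the vegetated/non-vegetated boundary delineation mentioned above.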
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output, which was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as the engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János
2014-01-01
Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large in size so they will address the same basic problems associated with the design fabrication, assembly and test as the full-scale systems which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterizations and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
DebriSat: The New Hypervelocity Impact Test for Satellite Breakup Fragment Characterization
NASA Technical Reports Server (NTRS)
Cowardin, Heather
2015-01-01
DebriSat is intended to be representative of modern LEO satellites, with the goal of replicating a hypervelocity fragmentation event using modern-day spacecraft materials and construction techniques to improve the existing DoD and NASA breakup models. Major design decisions were reviewed and approved by Aerospace subject matter experts from different disciplines. DebriSat includes seven major subsystems: attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. A key laboratory-based test supporting the development of the DoD and NASA satellite breakup models, the Satellite Orbital debris Characterization Impact Test (SOCIT), was conducted at AEDC in 1992. Breakup models based on SOCIT have supported many applications and matched on-orbit events reasonably well over the years.
Dynamics Of Human Motion The Case Study of an Examination Hall
NASA Astrophysics Data System (ADS)
Ogunjo, Samuel; Ajayi, Oluwaseyi; Fuwape, Ibiyinka; Dansu, Emmanuel
Human behaviour is difficult to characterize and generalize due to its complex nature. Advances in mathematical modelling have enabled human systems such as love interaction, alcohol abuse, and the admission problem to be described using models. This study investigates one such problem: the dynamics of human motion in an examination hall with limited computer systems, such that students write their examination in batches. The examination is characterized by the time (t) allocated to each student and the difficulty level (dl) associated with the examination. A stochastic model based on the difficulty level of the examination was developed for the prediction of students' motion around the examination hall. A good agreement was obtained between theoretical predictions and numerical simulation. The results obtained will help in better planning of examination sessions to maximize available resources. Furthermore, the results can be extended to other settings, such as banking halls and customer service points, where available resources are shared amongst many users.
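As a hedged illustration of how batch service and exam difficulty interact, the sketch below simulates a hall where each batch ends when its slowest student finishes. The uniform finishing-time model and all parameter values are assumptions for illustration, not the stochastic model developed in the study.

```python
import random

def mean_session_time(n_students, n_computers, t_alloc, dl, rng, trials=2000):
    """Monte Carlo estimate of total session length when students sit
    the exam in batches of at most n_computers.  A student's finishing
    time is modelled (toy assumption) as uniform on [dl*t_alloc, t_alloc]:
    harder exams (dl -> 1) push students toward the full allocated time."""
    total = 0.0
    for _ in range(trials):
        remaining, t = n_students, 0.0
        while remaining > 0:
            batch = min(remaining, n_computers)
            # a batch ends when its slowest student finishes
            t += max(t_alloc * rng.uniform(dl, 1.0) for _ in range(batch))
            remaining -= batch
        total += t
    return total / trials

rng = random.Random(0)
easy = mean_session_time(60, 20, 60.0, dl=0.3, rng=rng)
hard = mean_session_time(60, 20, 60.0, dl=0.9, rng=rng)
print(easy, hard)  # harder exams lengthen the session
```

Sweeping dl and the number of computers in such a simulation is one way to plan a session against fixed resources, in the spirit of the study's goal.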
Correlated receptor transport processes buffer single-cell heterogeneity
Kallenberger, Stefan M.; Unger, Anne L.; Legewie, Stefan; Lymperopoulos, Konstantinos; Eils, Roland
2017-01-01
Cells typically vary in their response to extracellular ligands. Receptor transport processes modulate ligand-receptor induced signal transduction and impact the variability in cellular responses. Here, we quantitatively characterized cellular variability in erythropoietin receptor (EpoR) trafficking at the single-cell level based on live-cell imaging and mathematical modeling. Using ensembles of single-cell mathematical models reduced parameter uncertainties and showed that rapid EpoR turnover, transport of internalized EpoR back to the plasma membrane, and degradation of Epo-EpoR complexes were essential for receptor trafficking. EpoR trafficking dynamics in adherent H838 lung cancer cells closely resembled the dynamics previously characterized by mathematical modeling in suspension cells, indicating that dynamic properties of the EpoR system are widely conserved. Receptor transport processes differed by one order of magnitude between individual cells. However, the concentration of activated Epo-EpoR complexes was less variable due to the correlated kinetics of opposing transport processes acting as a buffering system. PMID:28945754
Fabrication and Characterization of SMA Hybrid Composites
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Lach, Cynthia L.; Cano, Robert J.
2001-01-01
Results from an effort to fabricate shape memory alloy hybrid composite (SMAHC) test specimens and characterize the material system are presented in this study. The SMAHC specimens are conventional composite structures with an embedded SMA constituent. The fabrication and characterization work was undertaken to better understand the mechanics of the material system, address fabrication issues cited in the literature, and provide specimens for experimental validation of a recently developed thermomechanical model for SMAHC structures. Processes and hardware developed for fabrication of the SMAHC specimens are described. Fabrication of a SMAHC laminate with quasi-isotropic lamination and ribbon-type Nitinol actuators embedded in the 0° layers is presented. Beam specimens are machined from the laminate and are the focus of recent work, but the processes and hardware are readily extensible to more practical structures. Results of thermomechanical property testing on the composite matrix and Nitinol ribbon are presented. Test results from the Nitinol include stress-strain behavior, modulus versus temperature, and constrained recovery stress versus temperature and thermal cycle. Complex thermomechanical behaviors of the Nitinol and composite matrix are demonstrated, which have significant implications for modeling of SMAHC structures.
Analytical model and figures of merit for filtered Microwave Photonic Links.
Gasulla, Ivana; Capmany, José
2011-09-26
The concept of filtered Microwave Photonic Links is proposed in order to provide the most general and versatile description of complex analog photonic systems. We develop a field propagation model where a global optical filter, characterized by its optical transfer function, embraces all the intermediate optical components in a linear link. We assume a non-monochromatic light source characterized by an arbitrary spectral distribution with a finite linewidth and consider both intensity modulation and phase modulation with balanced and single detection. Expressions leading to the computation of the main figures of merit concerning the link gain, noise and intermodulation distortion are provided which, to our knowledge, are not available in the literature. The usefulness of this derivation resides in the capability to directly provide performance criteria for complex links just by substituting the numerical or measured optical transfer function characterizing the link into the overall closed-form formulas. This theory is thus presented as a potential tool for a wide range of relevant microwave photonic application cases and is extendable to multiport radio over fiber systems. © 2011 Optical Society of America
Z-Pinch Pulsed Plasma Propulsion Technology Development
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Adams, Robert B.; Fabisinski, Leo; Fincher, Sharon; Maples, C. Dauphne; Miernik, Janie; Percy, Tom; Statham, Geoff; Turner, Matt; Cassibry, Jason;
2010-01-01
Fusion-based propulsion can enable fast interplanetary transportation. Magneto-inertial fusion (MIF) is an approach which has been shown to potentially lead to a low-cost, small reactor for fusion breakeven. The Z-Pinch/dense plasma focus method is an MIF concept in which a column of gas is compressed to thermonuclear conditions by an axial current (I ≈ 100 MA). Recent advancements in experiments and the theoretical understanding of this concept suggest favorable scaling of fusion power output as I^4. This document presents a conceptual design of a Z-Pinch fusion propulsion system and a vehicle for human exploration. The purpose of this study is to apply Z-Pinch fusion principles to the design of a propulsion system for an interplanetary spacecraft. This study took four steps in service of that objective. 1. Z-Pinch Modeling and Analysis: There is a wealth of literature characterizing Z-Pinch physics and existing Z-Pinch physics models. In order to be useful in engineering analysis, simplified Z-Pinch fusion thermodynamic models are required to give propulsion engineers the quantity of plasma, plasma temperature, rate of expansion, etc. The study team developed these models in this study. 2. Propulsion Modeling and Analysis: While the Z-Pinch models characterize the fusion process itself, propulsion models calculate the parameters that characterize the propulsion system (thrust, specific impulse, etc.). The study team developed a Z-Pinch propulsion model and used it to determine the best values for pulse rate, amount of propellant per pulse, and mixture ratio of the D-T and liner materials, as well as the resulting thrust and specific impulse of the system. 3. Mission Analysis: Several potential missions were studied. Trajectory analysis using data from the propulsion model was used to determine the duration of the propulsion burns and the amount of propellant expended to complete each mission considered. 4. Vehicle Design: To understand the applicability of Z-Pinch propulsion to interplanetary travel, it is necessary to design a concept vehicle that uses it; the propulsion system significantly impacts the design of the electrical, thermal control, avionics, and structural subsystems of a vehicle. The study team developed a conceptual design of an interplanetary vehicle that transports crew and cargo to Mars and back and can be reused for other missions. Several aspects of this vehicle are based on a previous crewed fusion vehicle study, the Human Outer Planet Exploration (HOPE) Magnetized Target Fusion (MTF) vehicle. Portions of the vehicle design were used outright and others were modified from the MTF design in order to maintain comparability.
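The basic relations behind the propulsion-modeling step can be sketched in a few lines: for a pulsed thruster, the time-averaged thrust follows from the propellant mass per pulse, the effective exhaust velocity, and the pulse rate. These are textbook rocket relations only; the study's propulsion model resolves the plasma expansion in far more detail, and the numbers below are hypothetical.

```python
G0 = 9.80665  # standard gravity [m/s^2]

def pulsed_performance(m_pulse, v_e, pulse_rate):
    """Time-averaged thrust [N] and specific impulse [s] of a pulsed
    thruster.  m_pulse: propellant mass per pulse [kg]; v_e: effective
    exhaust velocity [m/s]; pulse_rate: pulses per second [Hz]."""
    mdot = m_pulse * pulse_rate   # time-averaged mass flow rate
    thrust = mdot * v_e
    isp = v_e / G0
    return thrust, isp

# hypothetical numbers, not the study's: 20 mg/pulse at 10 pulses/s,
# 100 km/s effective exhaust velocity
F, isp = pulsed_performance(20e-6, 1.0e5, 10.0)
print(F, isp)  # 20 N of averaged thrust at roughly 10,000 s of Isp
```

A trade study like step 2 amounts to sweeping m_pulse, v_e (set by the D-T/liner mixture), and pulse_rate through such relations and comparing the resulting thrust and Isp.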
Service Modeling Language Applied to Critical Infrastructure
NASA Astrophysics Data System (ADS)
Baldini, Gianmarco; Fovino, Igor Nai
The modeling of dependencies in complex infrastructure systems is still a very difficult task. Many methodologies have been proposed, but a number of challenges remain, including the definition of the right level of abstraction, the presence of different views on the same critical infrastructure, and how to adequately represent the temporal evolution of systems. We propose a modeling methodology where dependencies are described in terms of the service offered by the critical infrastructure and its components. The model provides a clear separation between services and the underlying organizational and technical elements, which may change in time. The model uses the Service Modeling Language proposed by the W3C to describe critical infrastructure in terms of interdependent service nodes, including constraints, behavior, information flows, relations, rules, and other features. Each service node is characterized by its technological, organizational and process components. The model is then applied to a real case of an ICT system for user authentication.
System-level modeling of acetone-butanol-ethanol fermentation.
Liao, Chen; Seo, Seung-Oh; Lu, Ting
2016-05-01
Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Bayesian analysis of HAT-P-7b using the EXONEST algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Placek, Ben; Knuth, Kevin H.
2015-01-13
The study of exoplanets (planets orbiting other stars) is revolutionizing the way we view our universe. High-precision photometric data provided by the Kepler Space Telescope (Kepler) enables not only the detection of such planets, but also their characterization. This presents a unique opportunity to apply Bayesian methods to better characterize the multitude of previously confirmed exoplanets. This paper focuses on applying the EXONEST algorithm to characterize the transiting short-period hot Jupiter HAT-P-7b (also referred to as Kepler-2b). EXONEST evaluates a suite of exoplanet photometric models by applying Bayesian model selection, implemented with the MultiNest algorithm. These models take into account planetary effects, such as reflected light and thermal emissions, as well as effects of the planetary motion on the host star, such as Doppler beaming (boosting) of light from the reflex motion of the host star and photometric variations due to the planet-induced ellipsoidal shape of the host star. By calculating model evidences, one can determine which model best describes the observed data, thus identifying which effects dominate the planetary system. Presented are parameter estimates and model evidences for HAT-P-7b.
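Bayesian model selection of the kind EXONEST performs can be miniaturized to a one-parameter example: compute the evidence Z = ∫ L(θ)π(θ) dθ for two toy light-curve models and compare. The grid integration below stands in for MultiNest's nested sampling, and the models, priors, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
phase = np.linspace(0.0, 2*np.pi, 50)
sigma = 0.1
# synthetic "light curve" with a phase modulation (invented data)
data = 0.5*np.sin(phase) + rng.normal(0.0, sigma, phase.size)

def log_like(model):
    r = (data - model) / sigma
    return -0.5*np.sum(r**2) - data.size*np.log(sigma*np.sqrt(2*np.pi))

def log_evidence(grid, loglikes, prior_density):
    """log of the trapezoid-rule integral of L(theta)*prior over the grid,
    computed in log space for numerical stability."""
    m = loglikes.max()
    w = np.exp(loglikes - m)
    integral = np.sum(0.5*(w[1:] + w[:-1]) * np.diff(grid))
    return m + np.log(integral * prior_density)

# Model 1: phase-curve modulation F = A*sin(phase), prior A ~ Uniform(0, 1)
A = np.linspace(0.0, 1.0, 401)
logZ1 = log_evidence(A, np.array([log_like(a*np.sin(phase)) for a in A]), 1.0)

# Model 2: constant flux F = c, prior c ~ Uniform(-1, 1), density 1/2
c = np.linspace(-1.0, 1.0, 401)
logZ2 = log_evidence(c, np.array([log_like(np.full(phase.size, ci)) for ci in c]), 0.5)

print(logZ1 - logZ2)  # positive log Bayes factor favours the modulation model
```

The log Bayes factor logZ1 - logZ2 plays the same role as EXONEST's evidence comparison: it identifies which photometric effects the data actually support.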
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
Characterization of urban air quality using GIS as a management system.
Puliafito, E; Guevara, M; Puliafito, C
2003-01-01
Keeping the air quality acceptable has become an important task for decision makers as well as for non-governmental organizations. Particulate and gaseous emissions of pollutants from industries and auto exhausts are responsible for rising discomfort, increasing airway diseases, decreasing productivity, and the deterioration of the artistic and cultural patrimony in urban centers. A model to determine the air quality in urban areas using a geographical information system is presented here. This system permits the integration, handling, analysis, and simulation of spatial and temporal data on the ambient concentrations of the main pollutants. It allows users to characterize and recognize areas with a potential increase or improvement in their air pollution situation. It is also possible to compute past or present conditions by changing basic input information such as traffic flow or stack emission rates. Additionally, the model may be used to test compliance with local air quality standards, to study the environmental impact of new industries, or to determine the changes in conditions when vehicle circulation is increased.
Energy Cascade in Fermi-Pasta-Ulam Models
NASA Astrophysics Data System (ADS)
Ponno, A.; Bambusi, D.
We show that, for long-wavelength initial conditions, the FPU dynamics is described, up to a certain time, by two KdV-like equations, which represent the resonant Hamiltonian normal form of the system. The energy cascade taking place in the system is then quantitatively characterized by arguments of dimensional analysis based on such equations.
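A minimal numerical illustration of the slow cascade (an FPU-α chain rather than the paper's KdV normal-form analysis; lattice size, coupling, and integration time are arbitrary choices): excite the lowest harmonic mode, integrate with velocity Verlet, and check how much energy has left the lowest modes.

```python
import numpy as np

N, alpha = 32, 0.25       # chain length and FPU-alpha coupling (illustrative)
dt, steps = 0.05, 2000    # timestep and number of steps (t_final = 100)

i = np.arange(1, N + 1)
q = np.sin(np.pi * i / (N + 1))   # long-wavelength initial condition: mode 1
p = np.zeros(N)

def accel(q):
    qq = np.concatenate(([0.0], q, [0.0]))   # fixed ends
    d = np.diff(qq)                          # bond stretches
    f = d + alpha * d**2                     # FPU-alpha bond tension
    return f[1:] - f[:-1]                    # net force on each particle

def mode_energies(q, p):
    """Harmonic energy in each normal mode of the linear chain."""
    k = np.arange(1, N + 1)
    S = np.sqrt(2.0/(N + 1)) * np.sin(np.outer(i, k) * np.pi/(N + 1))
    Q, P = q @ S, p @ S
    w = 2.0*np.sin(k*np.pi/(2*(N + 1)))
    return 0.5*(P**2 + (w*Q)**2)

E0 = mode_energies(q, p).sum()
a = accel(q)
for _ in range(steps):                       # velocity Verlet integration
    p += 0.5*dt*a
    q += dt*p
    a = accel(q)
    p += 0.5*dt*a

E = mode_energies(q, p)
print(E[:4].sum() / E.sum())  # energy still concentrated in the lowest modes
```

Over this short time the energy stays in the first few modes, consistent with the slow, KdV-governed cascade the abstract describes; much longer runs would show the gradual transfer (and the famous near-recurrences).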
Smaller Satellite Operations Near Geostationary Orbit
2007-09-01
At the time, this was considered a very difficult task, due to the complexity involved with creating computer code to autonomously perform... computer systems and even permanently damage equipment. Depending on the solar cycle, solar weather will be properly characterized and modeled to...
Model transformations for state-space self-tuning control of multivariable stochastic systems
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Bao, Yuan L.; Coleman, Norman P.
1988-01-01
The design of self-tuning controllers for multivariable stochastic systems is considered analytically. A long-division technique for finding the similarity transformation matrix and transforming the estimated left MFD to the right MFD is developed; the derivation is given in detail, and the procedures involved are briefly characterized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
A model for characterizing residential ground current and magnetic field fluctuations.
Mader, D L; Peralta, S B; Sherar, M D
1994-01-01
The current through the residential grounding circuit is an important source for magnetic fields; field variations near the grounding circuit accurately track fluctuations in this ground current. In this paper, a model is presented which permits calculation of the range of these fluctuations. A discrete network model is used to simulate a local distribution system for a single street, and a statistical model to simulate unbalanced currents in the system. Simulations of three-house and ten-house networks show that random appliance operation leads to ground current fluctuations which can be quite large, on the order of 600%. This is consistent with measured fluctuations in an actual house.
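The flavor of the statistical model can be reproduced with a toy Monte Carlo (appliance counts, current ranges, and the sign convention are invented for illustration): each house contributes a random unbalanced current, and the net ground current fluctuates far more than its mean.

```python
import random

rng = random.Random(7)

def ground_current(n_houses):
    """Net unbalanced current returning through the grounding circuit [A].
    Toy model: each house has a random number of appliances on, each
    drawing a random current on a randomly-signed phase."""
    total = 0.0
    for _ in range(n_houses):
        for _ in range(rng.randint(0, 5)):          # appliances switched on
            total += rng.choice([-1, 1]) * rng.uniform(0.1, 2.0)
    return total

samples = [ground_current(10) for _ in range(5000)]
mean_abs = sum(abs(s) for s in samples) / len(samples)
peak = max(abs(s) for s in samples)
print(mean_abs, peak)  # peak is several times the mean magnitude
```

Even this crude model shows peak-to-mean fluctuation ratios of several hundred percent, the same qualitative behavior as the paper's three-house and ten-house network simulations.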
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CRAX (Cassandra Exoskeleton) reliability analysis software.
MacLeod, Miles; Nersessian, Nancy J
2015-02-01
In this paper we draw upon rich ethnographic data of two systems biology labs to explore the roles of explanation and understanding in large-scale systems modeling. We illustrate practices that depart from the goal of dynamic mechanistic explanation for the sake of more limited modeling goals. These processes use abstract mathematical formulations of bio-molecular interactions and data fitting techniques which we call top-down abstraction to trade away accurate mechanistic accounts of large-scale systems for specific information about aspects of those systems. We characterize these practices as pragmatic responses to the constraints many modelers of large-scale systems face, which in turn generate more limited pragmatic non-mechanistic forms of understanding of systems. These forms aim at knowledge of how to predict system responses in order to manipulate and control some aspects of them. We propose that this analysis of understanding provides a way to interpret what many systems biologists are aiming for in practice when they talk about the objective of a "systems-level understanding." Copyright © 2014 Elsevier Ltd. All rights reserved.
Sulfation of ceria-zirconia model automotive emissions control catalysts
NASA Astrophysics Data System (ADS)
Nelson, Alan Edwin
Cerium-zirconium mixed metal oxides are used in automotive emissions control catalysts to regulate the partial pressure of oxygen near the catalyst surface. The near-surface oxygen partial pressure is regulated through transfer of atomic oxygen from the ceria-zirconia solid matrix to the platinum group metals to form metal oxides capable of oxidizing carbon monoxide and unburned hydrocarbons. Although the addition of zirconium to the cubic lattice of ceria increases the oxygen storage capacity and thermal stability of the ceria matrix, the cerium-zirconium oxide system remains particularly susceptible to deactivation by sulfur compounds. While the overall effect of sulfur on these systems is understood (partially irreversible deactivation), the fundamental, molecular-level interaction of sulfur with ceria-zirconia remains a challenging problem. Ceria-zirconia metal oxide solid solutions were prepared through co-precipitation from nitrate precursors. The prepared powders were calcined, subsequently formed into planar wafers, and characterized for chemical and physical attributes. The prepared samples were then exposed to a sulfur dioxide based environment and characterized with spectroscopic techniques to determine the extent of sulfation and the nature of surface sulfur species. The extent of sulfation of the model ceria-zirconia systems was characterized with Auger electron spectroscopy (AES) prior to and after treatment in a microreactor. Strong dependencies were observed between the atomic ratio of ceria to zirconia and the extent of sulfation. In addition, the partial pressure of sulfur dioxide during treatments also correlated with the extent of sulfation, while temperature only slightly affected the extent of sulfation.
The AES data suggest that gas-phase sulfur dioxide preferentially chemisorbs on surface ceria atoms and that the extent of sulfation is heavily dependent on sulfur dioxide concentration and only slightly dependent on catalyst temperature, as confirmed by temperature-programmed desorption (TPD). While hydrogen exposure indicated slight sulfur removal, exposure to a redox environment or atmosphere nearly eliminated the quantity of chemisorbed surface sulfur. The nature of sulfur removal is attributed to the inherent redox properties of ceria-zirconia systems. The complete analysis provides mechanistic insight into sulfation dependencies and fundamental information regarding sulfur adsorption on ceria-zirconia model automotive emissions control systems.
Zr Extrusion – Direct Input for Models & Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerreta, Ellen Kathleen
As we examine differences in the high-strain-rate, high-strain tensile response of high-purity, highly textured Zr as a function of loading direction, temperature, and extrusion velocity with primarily post mortem characterization techniques, we have also developed a technique for characterizing the in-situ extrusion process. This particular measurement is useful for partitioning the energy of the system during the extrusion process: friction, kinetic energy, and temperature.
NASA Astrophysics Data System (ADS)
Benabderrahmane, A., Sr.
2017-12-01
Hydrogeological site characterization for a deep geological repository for high-level and intermediate-level long-lived radioactive waste covers the large time scales needed for safety analysis and calculation. The hydrogeological performance of a site also depends on the effects of geodynamic evolution, such as tectonic uplift, erosion/sedimentation, and climate (including glaciation), on groundwater flow and on solute and heat transfer. A Thermo-Hydro-Mechanical model of the multilayered aquifer system of the Paris Basin is developed to reproduce the present-time flow and the natural tracer (helium) concentration profiles based on the last 2 Ma of geodynamic evolution. The present-time geological conceptual model consists of 27 layers at the Paris Basin scale (Triassic-Tertiary), with refinement at the project site scale (29 layers, from Triassic to Portlandian). Target layers are the clay host formation of Callovo-Oxfordian age (160 Ma) and the surrounding aquifer layers of the Oxfordian and Dogger. Modelled processes are: groundwater flow; heat and solute (natural tracer) transport; freezing and thawing of groundwater (expansion and retreat of permafrost); deformation of the multilayered aquifer system induced by differential tectonic uplift; and the hydro-mechanical stress effect caused by erosion of the outcropping layers. The numerical simulation considers the period from 2 Ma BP up to the present. Transient boundary conditions are governed by geodynamic processes: (i) modification of the geometry of the basin, and (ii) temperatures along the topography that change according to a series of 15 identical climate cycles with multiple permafrost (glaciation) periods. The numerical model contains 71 layers and 18 million cells. The solution procedure solves three coupled systems of equations, for head, temperature, and concentrations, by the use of a finite difference method and by applying extensive parallel processing.
The major modelling results related to the processes of importance for site characterization as hydraulic head distribution, flow velocity, heat and natural tracer transport impacted by geodynamic past evolution are discussed.
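The head equation in such a model reduces, in its simplest one-dimensional steady-state form, to a finite-difference system like the sketch below. The two-layer column, conductivities, and boundary heads are illustrative inventions, not values from the Paris Basin model.

```python
import numpy as np

# 1-D steady-state groundwater flow, d/dx(K dh/dx) = 0, fixed-head ends:
# a minimal finite-difference analogue of one head equation in a layered model
n, L = 51, 1000.0                   # nodes, domain length [m]
x = np.linspace(0.0, L, n)
K = np.where(x < 400.0, 1e-5, 1e-6) # two layers with contrasting K [m/s]

# harmonic-mean conductivity on cell faces (standard for layered media)
Kf = 2.0*K[:-1]*K[1:] / (K[:-1] + K[1:])

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 10.0, 0.0             # boundary heads [m]
for j in range(1, n - 1):           # interior mass balance per node
    A[j, j-1], A[j, j+1] = Kf[j-1], Kf[j]
    A[j, j] = -(Kf[j-1] + Kf[j])

h = np.linalg.solve(A, b)
print(h[0], h[-1])                  # heads honour the boundary conditions
```

Most of the head drop concentrates across the low-conductivity layer, the same qualitative behavior that makes the Callovo-Oxfordian clay an effective barrier; the basin model solves this kind of system, coupled to temperature and concentration, over 18 million cells.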
Quintino, Luis; Manfré, Giuseppe; Wettergren, Erika Elgstrand; Namislo, Angrit; Isaksson, Christina; Lundberg, Cecilia
2013-01-01
Glial cell line–derived neurotrophic factor (GDNF) has great potential to treat Parkinson's disease (PD). However, constitutive expression of GDNF can over time lead to side effects. Therefore, it would be useful to regulate GDNF expression. Recently, a new gene inducible system using destabilizing domains (DD) from E. coli dihydrofolate reductase (DHFR) has been developed and characterized. The advantage of this novel DD is that it is regulated by trimethoprim (TMP), a well-characterized drug that crosses the blood–brain barrier and can therefore be used to regulate gene expression in the brain. We have adapted this system to regulate expression of GDNF. A C-terminal fusion of GDNF and a DD with an additional furin cleavage site was able to be efficiently regulated in vitro, properly processed and was able to bind to canonical GDNF receptors, inducing a signaling cascade response in target cells. In vivo characterization of the protein showed that it could be efficiently induced by TMP and it was only functional when gene expression was turned on. Further characterization in a rodent model of PD showed that the regulated GDNF protected neurons, improved motor behavior of animals and was efficiently regulated in a pathological setting. PMID:23881415
Multilayer Insulation Ascent Venting Model
NASA Technical Reports Server (NTRS)
Tramel, R. W.; Sutherlin, S. G.; Johnson, W. L.
2017-01-01
The thermal and venting transient experienced by tank-applied multilayer insulation (MLI) in the Earth-to-orbit environment is very dynamic and not well characterized. This new predictive code is a first principles-based engineering model which tracks the time history of the mass and temperature (internal energy) of the gas in each MLI layer. A continuum-based model is used for early portions of the trajectory while a kinetic theory-based model is used for the later portions of the trajectory, and the models are blended based on a reference mean free path. This new capability should improve understanding of the Earth-to-orbit transient and enable better insulation system designs for in-space cryogenic propellant systems.
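The blending idea can be sketched numerically: estimate the gas mean free path from pressure and temperature, form a Knudsen number against a reference length, and interpolate between continuum and kinetic predictions. The 1/(1+Kn) weight, the nitrogen hard-sphere diameter, and the use of the layer gap as the reference length are assumptions for illustration, not the rule in the NASA code.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant [J/K]
D_N2 = 3.7e-10            # approximate hard-sphere diameter of N2 [m]

def mean_free_path(p, T):
    """Hard-sphere mean free path of the interstitial gas [m]."""
    return K_B * T / (math.sqrt(2.0) * math.pi * D_N2**2 * p)

def blended_flow(p, T, gap, q_continuum, q_kinetic):
    """Blend continuum and free-molecular venting-rate predictions.
    The 1/(1+Kn) weight is an assumed blending rule for illustration."""
    kn = mean_free_path(p, T) / gap       # Knudsen number
    w = 1.0 / (1.0 + kn)                  # -> 1 continuum, -> 0 rarefied
    return w * q_continuum + (1.0 - w) * q_kinetic

# early ascent (sea level) vs late ascent (near vacuum), 1 mm layer gap
q_low  = blended_flow(101325.0, 300.0, 1e-3, q_continuum=1.0, q_kinetic=0.0)
q_high = blended_flow(1e-2, 250.0, 1e-3, q_continuum=1.0, q_kinetic=0.0)
print(q_low, q_high)
```

At sea level the continuum branch dominates (Kn << 1); near vacuum the mean free path exceeds the layer gap by orders of magnitude and the kinetic branch takes over, which is the regime change the ascent-venting code must track layer by layer.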
Static shape control for flexible structures
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying and estimating the control laws of distributed systems characterized in terms of infinite dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least squares superposition of shape functions obtained from the structural model.
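The final step, a least-squares superposition of shape functions fitted to observational data, can be sketched as follows. The basis functions, noise level, and coefficients are invented for illustration; the methodology derives its shape functions from the structural model.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)                 # normalized surface coordinate

# hypothetical shape-function basis (low-order trends plus one mode shape)
Phi = np.column_stack([np.ones_like(x), x, x**2, np.sin(np.pi*x)])
true_c = np.array([0.0, 0.1, -0.3, 0.05])      # invented "true" coefficients
obs = Phi @ true_c + rng.normal(0.0, 1e-4, x.size)  # noisy shape observations

# least-squares superposition of shape functions
c_hat, *_ = np.linalg.lstsq(Phi, obs, rcond=None)
print(np.round(c_hat, 3))
```

With Gaussian noise, this least-squares solution coincides with the maximum of the likelihood functional described above, which is why the parameter estimates reduce to a superposition of shape functions.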
Erguler, Kamil; Stumpf, Michael P H
2011-05-01
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
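A sensitivity profile of the kind described can be computed in a few lines for a toy two-parameter kinetic model (the model, parameter values, and time grid below are illustrative, not one of the 180 surveyed models): build the scaled sensitivity matrix by finite differences and look at the spread of its singular values.

```python
import numpy as np

def y(p, t):
    """Observed product of a toy two-step reaction A -> B -> C."""
    k1, k2 = p
    return 1.0 - (k1*np.exp(-k2*t) - k2*np.exp(-k1*t)) / (k1 - k2)

t = np.linspace(0.1, 10.0, 60)
p0 = np.array([1.0, 1.2])   # illustrative rate constants

# sensitivity profile: S[i, j] = p_j * dy(t_i)/dp_j via central differences
S = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = 1e-6 * p0[j]
    S[:, j] = p0[j] * (y(p0 + dp, t) - y(p0 - dp, t)) / (2.0*dp[j])

sv = np.linalg.svd(S, compute_uv=False)
print(sv[0] / sv[1])  # wide spread of singular values = "sloppy" parameters
```

Because the two rate constants are nearly interchangeable in this model, the sensitivity matrix is close to rank one: one parameter combination is well constrained by time-course data while the orthogonal combination is sloppy, which is exactly the generic behavior the comprehensive analysis reaffirms.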
NASA Astrophysics Data System (ADS)
Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong
2018-01-01
Discrete models such as the lumped parameter model and the finite element model are widely used in the solution of soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit or simulates frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of layered soil deposits based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs, and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system can exactly reproduce the natural frequencies and the hysteretic damping of soil deposits and provide more accurate results with fewer DOFs.
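As a concrete illustration of a lumped MDOF idealization, the sketch below assembles the stiffness matrix of a base-fixed mass-spring chain and extracts its natural frequencies from a symmetric eigenproblem. This is a generic shear-chain model for illustration, not the paper's mode-equivalent formulation.

```python
import numpy as np

def shear_chain_frequencies(masses, stiffnesses):
    """Natural frequencies (rad/s) of a base-fixed mass-spring chain:
    spring i connects mass i to mass i-1 (to the ground for i = 0).
    Solves the symmetric eigenproblem of M^-1/2 K M^-1/2."""
    n = len(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i, i - 1] -= k
            K[i - 1, i] -= k
    Mi = np.diag(1.0 / np.sqrt(np.asarray(masses, dtype=float)))
    w2 = np.linalg.eigvalsh(Mi @ K @ Mi)
    return np.sqrt(w2)

# Single mass on one spring: omega = sqrt(k/m)
print(shear_chain_frequencies([1.0], [4.0]))  # -> [2.]
```

For a uniform two-mass chain this reproduces the classical eigenvalues omega^2 = k(3 -/+ sqrt(5))/(2m), a quick check that the assembly is correct.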
Memory mechanisms supporting syntactic comprehension.
Caplan, David; Waters, Gloria
2013-04-01
Efforts to characterize the memory system that supports sentence comprehension have historically drawn extensively on short-term memory as a source of mechanisms that might apply to sentences. The focus of these efforts has changed significantly in the past decade. As a result of changes in models of short-term working memory (ST-WM) and developments in models of sentence comprehension, the effort to relate entire components of an ST-WM system, such as those in the model developed by Baddeley (Nature Reviews Neuroscience 4: 829-839, 2003) to sentence comprehension has largely been replaced by an effort to relate more specific mechanisms found in modern models of ST-WM to memory processes that support one aspect of sentence comprehension--the assignment of syntactic structure (parsing) and its use in determining sentence meaning (interpretation) during sentence comprehension. In this article, we present the historical background to recent studies of the memory mechanisms that support parsing and interpretation and review recent research into this relation. We argue that the results of this research do not converge on a set of mechanisms derived from ST-WM that apply to parsing and interpretation. We argue that the memory mechanisms supporting parsing and interpretation have features that characterize another memory system that has been postulated to account for skilled performance--long-term working memory. We propose a model of the relation of different aspects of parsing and interpretation to ST-WM and long-term working memory.
Implementation of a WRF-CMAQ Air Quality Modeling System in Bogotá, Colombia
NASA Astrophysics Data System (ADS)
Nedbor-Gross, R.; Henderson, B. H.; Pachon, J. E.; Davis, J. R.; Baublitz, C. B.; Rincón, A.
2014-12-01
Due to continuous economic growth, Bogotá, Colombia, has experienced air pollution issues in recent years. The local environmental authority has implemented several strategies to curb air pollution that have resulted in the decrease of PM10 concentrations since 2010. However, more activities are necessary in order to meet international air quality standards in the city. The University of Florida Air Quality and Climate group is collaborating with the Universidad de La Salle to prioritize regulatory strategies for Bogotá using air pollution simulations. To simulate pollution, we developed a modeling platform that combines the Weather Research and Forecasting Model (WRF), local emissions, and the Community Multi-scale Air Quality model (CMAQ). This platform is the first of its kind to be implemented in the megacity of Bogotá, Colombia. The presentation will discuss development and evaluation of the air quality modeling system, highlight initial results characterizing photochemical conditions in Bogotá, and characterize air pollution under proposed regulatory strategies. The WRF model has been configured and applied to Bogotá, which resides in a tropical climate with complex mountainous topography. Developing the configuration included incorporation of local topography and land-use data, a physics sensitivity analysis, review, and systematic evaluation. The performance threshold, however, was set based on a synthesis of model performance under less mountainous conditions. We will evaluate how differences in autocorrelation contribute to the non-ideal performance. Air pollution predictions are currently under way. CMAQ has been configured with WRF meteorology, global boundary conditions from GEOS-Chem, and a locally produced emission inventory. Preliminary results from simulations show promising performance of CMAQ in Bogotá.
Anticipated results include a systematic performance evaluation of ozone and PM10, characterization of photochemical sensitivity, and air quality predictions under proposed regulatory scenarios.
Advancing reservoir operation description in physically based hydrological models
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo
2016-04-01
Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response, e.g., to the fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production.
Results show to which extent the hydrological regime in the catchment is affected by different behavioural models and reservoir operating strategies.
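A target-level rule curve of the kind described above can be sketched as a simple feedback rule: release the inflow plus a correction proportional to the deviation from the seasonal target level, clipped to physical bounds. The function name, the linear feedback form, and the gain are illustrative assumptions, not the operating policy of any real reservoir.

```python
def rule_curve_release(level, target_level, inflow, max_release, gain=0.5):
    """Simple target-level rule curve: during normal operation, release
    the inflow plus a correction proportional to the deviation of the
    current reservoir level from the seasonal target level."""
    deviation = level - target_level
    release = inflow + gain * deviation   # feedback toward the target
    return min(max(release, 0.0), max_release)

# Above target -> release more than inflow; below target -> retain water
print(rule_curve_release(level=105.0, target_level=100.0,
                         inflow=20.0, max_release=50.0))  # -> 22.5
```

The rule-based behavioural models discussed in the abstract would instead infer such a mapping empirically from observed release decisions, and the optimization-based models would replace it with a utility-maximizing agent.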
Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications
NASA Astrophysics Data System (ADS)
Ravela, S.
2015-12-01
Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties. Among them, the key difficulties are the ability to deal with model errors, the efficacy of uncertainty quantification, and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures. Second, we apply them to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.
Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency
NASA Technical Reports Server (NTRS)
Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey
2012-01-01
The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. The analysis draws on a range of satellite data products and surface observations, is produced at a global 1/4-degree spatial resolution, and is generated at 3-hour intervals. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Toolkit (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.
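Verification of gridded analyses against observations typically reduces to a handful of standard summary metrics. The sketch below shows a generic metric set (mean bias, RMSE, and correlation); it is an illustration, not the actual output of MET or LVT.

```python
import numpy as np

def verification_metrics(model, obs):
    """Common benchmarking metrics for a land-surface analysis: mean
    bias, root-mean-square error, and Pearson correlation between
    collocated model output and observations."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    err = model - obs
    return {
        "bias": err.mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "corr": np.corrcoef(model, obs)[0, 1],
    }

# Hypothetical soil-moisture values at three stations
m = verification_metrics([0.22, 0.31, 0.40], [0.20, 0.30, 0.38])
```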
Year 2 Report: Protein Function Prediction Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C E
2012-04-27
Upon completion of our second year of development in a 3-year development cycle, we have completed a prototype protein structure-function annotation and function prediction system: the Protein Function Prediction (PFP) platform (v.0.5). We have met our milestones for Years 1 and 2 and are positioned to continue development in completion of our original statement of work, or a reasonable modification thereof, in service to DTRA Programs involved in diagnostics and medical countermeasures research and development. The PFP platform is a multi-scale computational modeling system for protein structure-function annotation and function prediction. As of this writing, PFP is the only existing fully automated, high-throughput, multi-scale modeling, whole-proteome annotation platform, and represents a significant advance in the field of genome annotation (Fig. 1). PFP modules perform protein functional annotations at the sequence, systems biology, protein structure, and atomistic levels of biological complexity (Fig. 2). Because these approaches provide orthogonal means of characterizing proteins and suggesting protein function, PFP processing maximizes the protein functional information that can currently be gained by computational means. Comprehensive annotation of pathogen genomes is essential for bio-defense applications in pathogen characterization, threat assessment, and medical countermeasure design and development in that it can short-cut the time and effort required to select and characterize protein biomarkers.
NASA Astrophysics Data System (ADS)
Nikoueeyan, Pourya; Naughton, Jonathan
2016-11-01
Particle Image Velocimetry is a common choice for qualitative and quantitative characterization of unsteady flows associated with moving bodies (e.g. pitching and plunging airfoils). Characterizing the separated flow behavior is of great importance in understanding the flow physics and developing predictive reduced-order models. In most studies, the model under investigation moves within a fixed camera field-of-view, and vector fields are calculated based on this fixed coordinate system. To better characterize the genesis and evolution of vortical structures in these unsteady flows, the velocity fields need to be transformed into the moving-body frame of reference. Data converted to this coordinate system allow for a more detailed analysis of the flow field using advanced statistical tools. In this work, a pitching NACA0015 airfoil has been used to demonstrate the capability of photogrammetry for such an analysis. Photogrammetry has been used first to locate the airfoil within the image and then to determine an appropriate mask for processing the PIV data. The photogrammetry results are then further used to determine the rotation matrix that transforms the velocity fields to airfoil coordinates. Examples of the important capabilities such a process enables are discussed. P. Nikoueeyan is supported by a fellowship from the University of Wyoming's Engineering Initiative.
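The coordinate transformation described above, rotating lab-frame PIV velocity components into the airfoil (body) frame using the pitch angle recovered by photogrammetry, can be sketched as follows. The sign convention (positive pitch nose-up) is an assumption for illustration; the actual rotation depends on the camera and model setup.

```python
import numpy as np

def to_body_frame(u, v, pitch_deg):
    """Rotate lab-frame velocity components (u, v) into the airfoil
    frame for a given pitch angle in degrees. Accepts scalars or
    full 2-D PIV vector fields."""
    th = np.deg2rad(pitch_deg)
    R = np.array([[np.cos(th), np.sin(th)],
                  [-np.sin(th), np.cos(th)]])   # lab -> body rotation
    ub, vb = R @ np.vstack([np.ravel(u), np.ravel(v)])
    return ub.reshape(np.shape(u)), vb.reshape(np.shape(v))
```

Applying this frame-by-frame with the instantaneous pitch angle yields velocity fields in a coordinate system fixed to the moving airfoil, in which vortex genesis and evolution can be tracked statistically.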
Characterization of Orbital Debris via Hyper-Velocity Ground-Based Tests
NASA Technical Reports Server (NTRS)
Cowardin, H.
2015-01-01
Existing DoD and NASA satellite breakup models are based on a key laboratory-based test, the Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve the breakup models and the NASA Size Estimation Model (SEM) for events involving more modern satellite designs, the NASA Orbital Debris Program Office has worked in collaboration with the University of Florida to replicate a hypervelocity impact using a satellite built with modern-day spacecraft materials and construction techniques. The spacecraft, called DebriSat, was intended to be representative of modern LEO satellites, and all major design decisions were reviewed and approved by subject matter experts at the Aerospace Corporation. DebriSat is composed of 7 major subsystems including the attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. All fragments down to 2 mm in size will be characterized by material, size, shape, and bulk density, and the associated data will be stored in a database for multiple users to access. Laboratory radar and optical measurements will be performed on a subset of fragments to provide a better understanding of the data products from orbital debris acquired from ground-based radars and telescopes. The resulting data analysis from DebriSat will be used to update breakup models and develop the first optical SEM in conjunction with updates to the current NASA SEM. The characterization of the fragmentation will be discussed in the subsequent presentation.
Position, scale, and rotation invariant holographic associative memory
NASA Astrophysics Data System (ADS)
Fielding, Kenneth H.; Rogers, Steven K.; Kabrisky, Matthew; Mills, James P.
1989-08-01
This paper describes the development and characterization of a holographic associative memory (HAM) system that is able to recall stored objects whose inputs were changed in position, scale, and rotation. The HAM is based on the single-iteration model described by Owechko et al. (1987); however, the system described here uses a self-pumped BaTiO3 phase conjugate mirror rather than the degenerate four-wave mixing proposed by Owechko and his coworkers. The HAM system can store objects in a position, scale, and rotation invariant feature space. The angularly multiplexed diffuse Fourier transform holograms of the HAM feature space serve as the memory unit; distorted input objects are correlated with the hologram, and the nonlinear phase conjugate mirror reduces cross-correlation noise and provides object discrimination. Applications of the HAM system are presented.
St-Maurice, Justin D; Burns, Catherine M
2017-07-28
Health care is a complex sociotechnical system. Patient treatment is evolving and needs to incorporate the use of technology and new patient-centered treatment paradigms. Cognitive work analysis (CWA) is an effective framework for understanding complex systems, and work domain analysis (WDA) is useful for understanding complex ecologies. Although previous applications of CWA have described patient treatment, due to their scope of work patients were previously characterized as biomedical machines, rather than patient actors involved in their own care. An abstraction hierarchy that characterizes patients as beings with complex social values and priorities is needed. This can help better understand treatment in a modern approach to care. The purpose of this study was to perform a WDA to represent the treatment of patients with medical records. The methods to develop this model included the analysis of written texts and collaboration with subject matter experts. Our WDA represents the ecology through its functional purposes, abstract functions, generalized functions, physical functions, and physical forms. Compared with other work domain models, this model is able to articulate the nuanced balance between medical treatment, patient education, and limited health care resources. Concepts in the analysis were similar to the modeling choices of other WDAs but combined them into a comprehensive, systematic, and contextual overview. The model is helpful to understand user competencies and needs. Future models could be developed to model the patient's domain and enable the exploration of the shared decision-making (SDM) paradigm. Our work domain model links treatment goals, decision-making constraints, and task workflows. This model can be used by system developers who would like to use ecological interface design (EID) to improve systems. Our hierarchy is the first in a future set that could explore new treatment paradigms. 
Future hierarchies could model the patient as a controller and could be useful for mobile app development. ©Justin D St-Maurice, Catherine M Burns. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 28.07.2017.
Bell, Catherine C; Hendriks, Delilah F G; Moro, Sabrina M L; Ellis, Ewa; Walsh, Joanne; Renblom, Anna; Fredriksson Puigvert, Lisa; Dankers, Anita C A; Jacobs, Frank; Snoeys, Jan; Sison-Young, Rowena L; Jenkins, Rosalind E; Nordling, Åsa; Mkrtchian, Souren; Park, B Kevin; Kitteringham, Neil R; Goldring, Christopher E P; Lauschke, Volker M; Ingelman-Sundberg, Magnus
2016-05-04
Liver biology and function, drug-induced liver injury (DILI) and liver diseases are difficult to study using current in vitro models such as primary human hepatocyte (PHH) monolayer cultures, as their rapid de-differentiation restricts their usefulness substantially. Thus, we have developed and extensively characterized an easily scalable 3D PHH spheroid system in chemically-defined, serum-free conditions. Using whole proteome analyses, we found that PHH spheroids cultured this way were similar to the liver in vivo and even retained their inter-individual variability. Furthermore, PHH spheroids remained phenotypically stable and retained morphology, viability, and hepatocyte-specific functions for culture periods of at least 5 weeks. We show that under chronic exposure, the sensitivity of the hepatocytes drastically increased and toxicity of a set of hepatotoxins was detected at clinically relevant concentrations. An interesting example was the chronic toxicity of fialuridine for which hepatotoxicity was mimicked after repeated-dosing in the PHH spheroid model, not possible to detect using previous in vitro systems. Additionally, we provide proof-of-principle that PHH spheroids can reflect liver pathologies such as cholestasis, steatosis and viral hepatitis. Combined, our results demonstrate that the PHH spheroid system presented here constitutes a versatile and promising in vitro system to study liver function, liver diseases, drug targets and long-term DILI.
Measurements of electrostatic double layer potentials with atomic force microscopy
NASA Astrophysics Data System (ADS)
Giamberardino, Jason
The aim of this thesis is to provide a thorough description of the development of theory and experiment pertaining to the electrostatic double layer (EDL) in aqueous electrolytic systems. The EDL is an important physical element of many systems and its behavior has been of interest to scientists for many decades. Because many areas of science and engineering are moving to test, build, and understand systems at smaller and smaller scales, this work focuses on nanoscopic experimental investigations of the EDL. In that vein, atomic force microscopy (AFM) is introduced and discussed as a tool for making high-spatial-resolution measurements of the solid-liquid interface, culminating in a description of the development of a method for completely characterizing the EDL. This thesis first explores, in a semi-historical fashion, the development of the various models and theories that are used to describe the electrostatic double layer. Later, various experimental techniques and ideas are addressed as ways to measure interesting characteristics of the EDL. Finally, a newly developed approach to measuring the EDL system with AFM is introduced. This approach relies on both the implementation of existing theoretical models with slight modifications and a unique experimental measurement scheme. The proposed model clears up previous ambiguities in the definitions of various parameters pertaining to measurements of the EDL and can be used to fully characterize the system in a way not yet demonstrated.
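In the Debye-Hückel (low-potential) limit of the classical EDL models surveyed in such work, the double-layer potential decays exponentially from the surface with a characteristic Debye screening length. The sketch below computes that length for a symmetric 1:1 aqueous electrolyte; the function names are illustrative, and the physical constants are standard CODATA values.

```python
import numpy as np

def debye_length_m(ionic_strength_molar, T=298.15, eps_r=78.5):
    """Debye screening length (m) for a symmetric 1:1 aqueous
    electrolyte: lambda_D = sqrt(eps_r eps0 kB T / (2 n e^2)),
    with n the number density of each ion species."""
    e = 1.602176634e-19        # elementary charge, C
    kB = 1.380649e-23          # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    NA = 6.02214076e23         # Avogadro constant, 1/mol
    n_total = 2 * ionic_strength_molar * 1e3 * NA  # ions/m^3, both signs
    return np.sqrt(eps_r * eps0 * kB * T / (n_total * e ** 2))

def edl_potential(x, psi0, lam):
    """Linearized (Debye-Huckel) double-layer potential:
    psi(x) = psi0 * exp(-x / lam)."""
    return psi0 * np.exp(-x / lam)

lam = debye_length_m(1e-3)  # 1 mM 1:1 electrolyte: roughly 9.6 nm
```

In AFM force spectroscopy this length sets the distance scale over which EDL forces between tip and sample decay, which is one reason dilute electrolytes are preferred for such measurements.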