Sample records for "analyze model output"

  1. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
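
    Scenario Analyzer itself is a graphical program, but the same kind of output post-processing can be scripted. As an illustration only, here is a minimal sketch using the flopy package (assumed to be installed; it is not part of the software described above) to read heads from a MODFLOW binary head file and contour one layer; the file name and indices are hypothetical placeholders.

    ```python
    # Minimal scripted MODFLOW post-processing sketch (flopy and matplotlib are
    # assumed installed; "scenario1.hds" and the layer/time choices are
    # hypothetical, not files produced by Scenario Manager itself).
    import flopy.utils as fu
    import matplotlib.pyplot as plt

    heads = fu.HeadFile("scenario1.hds")          # binary head output from a MODFLOW run
    times = heads.get_times()                     # available simulation output times
    head_grid = heads.get_data(totim=times[-1])   # (layer, row, col) array at the final time

    fig, ax = plt.subplots()
    cs = ax.contour(head_grid[0])                 # contour layer 1, like a Scenario Analyzer map
    ax.clabel(cs, inline=True, fontsize=8)
    ax.set_title("Simulated heads, layer 1 (final output time)")
    plt.savefig("heads_layer1.png")
    ```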

  2. A Statistical Analysis of the Output Signals of an Acousto-Optic Spectrum Analyzer for CW (Continuous-Wave) Signals

    DTIC Science & Technology

    1988-10-01

    A statistical analysis of the output signals of an acousto-optic spectrum analyzer (AOSA) is performed for the case when the input signal is a...processing, Electronic warfare, Radar countermeasures, Acousto-optic, Spectrum analyzer, Statistical analysis, Detection, Estimation, Canada, Modelling.

  3. Analytical approach for modeling and performance analysis of microring resonators as optical filters with multiple output bus waveguides

    NASA Astrophysics Data System (ADS)

    Lakra, Suchita; Mandal, Sanjoy

    2017-06-01

    A quadruple micro-optical ring resonator (QMORR) with multiple output bus waveguides is mathematically modeled and analyzed by making use of the delay-line signal-processing approach in the Z-domain and Mason's gain formula. The performance of a QMORR with two vertically coupled output bus waveguides is analyzed. The proposed structure is capable of providing a wider free spectral response from both output buses with appreciable cross talk. Thus, this configuration could provide increased capacity to insert a large number of communication channels. The simulated frequency response characteristic and its dispersion and group delay characteristics are graphically presented using the MATLAB environment.

  4. High-frequency output characteristics of AlGaAs/GaAs heterojunction bipolar transistors for large-signal applications

    NASA Astrophysics Data System (ADS)

    Chen, J.; Gao, G. B.; Ünlü, M. S.; Morkoç, H.

    1991-11-01

    High-frequency I_C-V_CE output characteristics of bipolar transistors, derived from calculated device cutoff frequencies, are reported. The generation of high-frequency output characteristics from device design specifications represents a novel bridge between microwave circuit design and device design: the microwave performance of simulated device structures can be analyzed, or tailored transistor device structures can be designed to fit specific circuit applications. The details of our compact transistor model are presented, highlighting the high-current base-widening (Kirk) effect. The derivation of the output characteristics from the modeled cutoff frequencies is then presented, and the computed characteristics of an AlGaAs/GaAs heterojunction bipolar transistor operating at 10 GHz are analyzed. Applying the derived output characteristics to microwave circuit design, we examine large-signal class A and class B amplification.

  5. Evaluation of input-output efficiency of oil field considering undesirable output: A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of the traditional DEA model and improves the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers the negative external effects of input redundancy, desirable-output deficiency, and undesirable output, and that there are large differences in production efficiency among the blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the desirable output.

  6. Hazard mitigation with cloud model based rainfall and convective data

    NASA Astrophysics Data System (ADS)

    Gernowo, R.; Adi, K.; Yulianto, T.; Seniyatis, S.; Yatunnisa, A. A.

    2018-05-01

    Heavy rain in Semarang on 15 January 2013 caused a flood. The event is related to the dynamics of weather parameters, especially the convection process, clouds, and rainfall data. In this case, the weather conditions are analyzed using the Weather Research and Forecasting (WRF) model. Several weather parameters show significant results; their fluctuations indicate strong convection that produced convective (Cumulonimbus) cloud. Nesting with two domains in the WRF model gives good output for representing the general weather conditions. The difference between the observed cloud cover and the model output during the first 6-12 hours is attributed to model spin-up. Satellite images from MTSAT (Multifunctional Transport Satellite) are used as verification data to confirm the WRF results. The white color in the satellite imagery corresponds to the Coldest Dark Grey (CDG) level, which indicates cloud tops. The imagery confirms that the WRF output is good enough to analyze the conditions in Semarang when the case happened.

  7. Analyzing the impacts of final demand changes on total output using input-output approach: The case of Japanese ICT sectors

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2014-03-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in the future. The study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model, to achieve this purpose. Three final demand changes are used in this study, namely (1) export, (2) import, and (3) outside-households consumption changes. The study focuses on the "pure change" condition, in which final demand changes appear only in the analyzed sectors. The results show that the export and outside-households consumption changes give a positive impact, while the opposite impact is seen for the import change.
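
    The demand-pull quantity model referred to above is the Leontief relation delta_x = (I - A)^-1 * delta_f. As a minimal illustration only (the Japanese IO coefficients are not reproduced here; the three-sector technology matrix and demand changes below are made-up numbers):

    ```python
    # Minimal demand-pull Leontief sketch: delta_x = (I - A)^-1 @ delta_f.
    # The technology matrix A and the final-demand changes are hypothetical
    # placeholders, not the Japanese input-output data used in the study.
    import numpy as np

    A = np.array([[0.10, 0.05, 0.02],          # made-up 3-sector technical coefficients
                  [0.20, 0.15, 0.10],
                  [0.05, 0.10, 0.25]])
    delta_f = np.array([100.0, -30.0, 10.0])   # e.g. export up, imports down, consumption up

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    delta_x = leontief_inverse @ delta_f       # resulting change in total output by sector
    print(delta_x)
    ```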

  8. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.

    PubMed

    Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua

    2014-10-01

    This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. The H∞ control performance of the closed-loop system, comprising the standard neural network model, the reference model, and a state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and the linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
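
    The brief's H∞ LMIs are not reproduced here. As a loosely related sketch only, the snippet below uses scipy's discrete-time Lyapunov solver to produce the kind of quadratic certificate P > 0 on which such a closed-loop analysis rests, for a made-up stable system matrix.

    ```python
    # Loosely related sketch: compute a quadratic Lyapunov certificate P > 0 for a
    # hypothetical stable discrete-time closed-loop matrix A (this is not the
    # H-infinity LMI design of the brief).
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.5, 0.1],
                  [0.0, 0.8]])                 # hypothetical closed-loop system matrix
    Q = np.eye(2)

    # Solve A'PA - P = -Q; a positive definite P certifies asymptotic stability.
    P = solve_discrete_lyapunov(A.T, Q)
    print(P)
    print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
    ```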

  9. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    NASA Astrophysics Data System (ADS)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze a Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, in particular the input-output linearization method. The host population is divided into three compartments: susceptible, infectious, and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to drive the output asymptotically to zero. We also present examples to illustrate the design.
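
    The feedback-linearizing control law is specific to the paper's model and is not reproduced here. Purely as an illustration of the underlying host-vector dynamics, the sketch below integrates a simple SIR-host/SI-vector system with constant (open-loop) vaccination and fumigation rates; every equation and parameter value is a made-up placeholder.

    ```python
    # Open-loop host-vector sketch (SIR host, SI vector) with constant vaccination
    # and fumigation inputs; all equations and parameters are illustrative
    # placeholders, not the paper's model or its input-output linearizing controller.
    from scipy.integrate import solve_ivp

    beta_hv, beta_vh = 0.0005, 0.0004     # transmission rates (hypothetical)
    gamma, mu_v, lam_v = 0.1, 0.05, 10.0  # host recovery, vector death, vector recruitment
    u_vacc, u_fumig = 0.02, 0.03          # constant control inputs (hypothetical)

    def rhs(t, y):
        Sh, Ih, Rh, Sv, Iv = y
        dSh = -beta_hv * Sh * Iv - u_vacc * Sh
        dIh = beta_hv * Sh * Iv - gamma * Ih
        dRh = gamma * Ih + u_vacc * Sh
        dSv = lam_v - beta_vh * Sv * Ih - (mu_v + u_fumig) * Sv
        dIv = beta_vh * Sv * Ih - (mu_v + u_fumig) * Iv
        return [dSh, dIh, dRh, dSv, dIv]

    sol = solve_ivp(rhs, (0.0, 365.0), [990.0, 10.0, 0.0, 180.0, 20.0])
    print(f"infectious hosts after one year: {sol.y[1, -1]:.2f}")  # the output to drive to zero
    ```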

  10. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, the residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using the post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison

  11. Insolation-oriented model of photovoltaic module using Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Huan-Liang

    2010-07-15

    This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Taking into account the effect of sunlight irradiance on the cell temperature, the proposed model takes ambient temperature as a reference input and uses the solar insolation as the unique varying parameter. The cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model. The model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristic more practical. In addition, the insolation-oriented PV model enables the dynamics of a PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance.
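
    The paper's Simulink implementation is not reproduced here. Purely as an illustration of an insolation-driven PV characteristic, here is a single-diode sketch in Python in which irradiance raises the cell temperature; every coefficient (NOCT, photocurrent, diode parameters) is an assumed placeholder, not a value from the paper.

    ```python
    # Illustrative single-diode PV sketch with insolation-driven cell temperature
    # (all parameter values are assumed placeholders, not the module in the paper).
    import numpy as np

    def pv_current(v, g, t_amb, noct=45.0, i_ph_stc=5.0, i_0=5e-7, n=1.3, n_s=36):
        """Module current (A) at terminal voltage v (V), irradiance g (W/m^2),
        ambient temperature t_amb (deg C)."""
        t_cell = t_amb + (noct - 20.0) * g / 800.0          # cell temperature rises with insolation
        v_t = 1.380649e-23 * (t_cell + 273.15) / 1.602e-19  # thermal voltage kT/q
        i_ph = i_ph_stc * g / 1000.0                        # photocurrent scales with irradiance
        return i_ph - i_0 * (np.exp(v / (n * n_s * v_t)) - 1.0)

    v = np.linspace(0.0, 22.0, 200)
    for g in (400.0, 700.0, 1000.0):
        p = v * pv_current(v, g, t_amb=25.0)
        print(f"G = {g:6.0f} W/m^2  ->  max power ~ {p.max():5.1f} W")
    ```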

  12. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  13. Modeling and simulation of queuing system for customer service improvement: A case study

    NASA Astrophysics Data System (ADS)

    Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah

    2016-10-01

    This study aims to develop a queuing model for UniMall by using a discrete-event simulation approach to analyze the service performance that affects customer satisfaction. The performance measures considered in this model are the average time in the system, the total number of students served, the number of students in the waiting queue, the waiting time in the queue, and the maximum length of the buffer. ARENA simulation software is used to develop the simulation model, and the output is analyzed. Based on the analysis of the output, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
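
    The study itself uses ARENA. As an illustration of the same discrete-event idea only, here is a minimal single-counter queue sketch with the simpy package (assumed installed); the arrival rate, service rate, and run length are hypothetical.

    ```python
    # Minimal discrete-event queue sketch with simpy (illustrative only; ARENA was
    # used in the study, and the rates below are made-up placeholders).
    import random
    import simpy

    WAITS = []

    def student(env, counter, service_rate):
        arrive = env.now
        with counter.request() as req:
            yield req                                             # wait for a free payment counter
            WAITS.append(env.now - arrive)                        # time spent waiting in queue
            yield env.timeout(random.expovariate(service_rate))   # service time

    def arrivals(env, counter, arrival_rate, service_rate):
        while True:
            yield env.timeout(random.expovariate(arrival_rate))
            env.process(student(env, counter, service_rate))

    random.seed(1)
    env = simpy.Environment()
    counter = simpy.Resource(env, capacity=1)          # one payment counter
    env.process(arrivals(env, counter, arrival_rate=1.0, service_rate=1.2))
    env.run(until=480)                                 # an 8-hour day, in minutes
    print(f"served {len(WAITS)} students, mean wait {sum(WAITS) / len(WAITS):.2f} min")
    ```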

  14. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most existing studies on wind power output focus on the fluctuation of wind farms, while the spatial self-complementarity of wind power output time series has been ignored. Therefore, the existing probability models cannot reflect the features of a power system incorporating wind farms. This paper analyzes the spatial self-complementarity of wind power and proposes a probability model which can reflect the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model could provide an important reference for power system simulation incorporating wind farms.

  15. Analysis of inter-country input-output table based on bibliographic coupling network: How industrial sectors on the GVC compete for production resources

    NASA Astrophysics Data System (ADS)

    Guan, Jun; Xu, Xiaoyu; Xing, Lizhi

    2018-03-01

    The input-output table is comprehensive and detailed in describing national economic systems, with an abundance of economic relationships depicting supply and demand among industrial sectors. This paper focuses on how to quantify the degree of competition on the global value chain (GVC) from the perspective of econophysics. Global Industrial Strongest Relevant Network models are established by extracting the strongest and most immediate industrial relevance in the global economic system from inter-country input-output (ICIO) tables; they are then transformed into Global Industrial Resource Competition Network models to analyze competitive relationships based on a bibliographic coupling approach. Three indicators well suited for weighted and undirected networks with self-loops are introduced: unit weight for competitive power, disparity in the weight for competitive amplitude, and weighted clustering coefficient for competitive intensity. Finally, these models and indicators are applied empirically to analyze the function of industrial sectors on the basis of the latest World Input-Output Database (WIOD) in order to reveal inter-sector competitive status during economic globalization.

  16. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework for Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across a user-defined number of CPU threads. The raw database is formatted by the EPIC input data formatters, and the formatted data is passed to the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
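
    The framework code itself is not listed in the record. As a sketch of the same pattern only, the snippet below uses Python's standard multiprocessing module to fan scenario jobs out to worker processes; the executable call, scenario fields, and the fake value returned by each job are hypothetical placeholders.

    ```python
    # Sketch of the distributed/parallel pattern described above using the
    # standard-library multiprocessing module; the "epic" command, scenario fields,
    # and the returned value are hypothetical placeholders.
    import itertools
    import multiprocessing as mp
    import subprocess

    def run_scenario(scenario):
        slope, fert = scenario
        # 1) format input files for this scenario (placeholder)
        # 2) run the model executable, e.g.:
        #    subprocess.run(["epic", f"--slope={slope}", f"--fert={fert}"], check=True)
        # 3) parse only the output of interest (placeholder value below)
        simulated_yield = 2.0 + 0.1 * fert - 0.05 * slope
        return (slope, fert, simulated_yield)

    if __name__ == "__main__":
        scenarios = list(itertools.product(range(7), range(24)))  # 7 slopes x 24 fertilizer rates
        with mp.Pool(processes=28) as pool:                       # 28 parallel workers
            results = pool.map(run_scenario, scenarios)
        print(f"completed {len(results)} scenario runs")
    ```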

  17. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
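
    As a rough illustration of the modal-analysis step only (without the report's physical scaling), the sketch below eigendecomposes a state matrix and measures how each input excites each mode and how each mode appears in each output; the matrices are made up.

    ```python
    # Rough modal input/output influence sketch for x' = Ax + Bu, y = Cx; the
    # matrices are made up, not the report's airplane model, and no scaling is applied.
    import numpy as np

    A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # hypothetical 2-state system
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    eigvals, V = np.linalg.eig(A)              # columns of V: mode shapes
    W = np.linalg.inv(V)                       # rows of W: left eigenvectors

    for k, lam in enumerate(eigvals):
        input_influence = np.abs(W[k, :] @ B)  # how strongly each input drives mode k
        output_content = np.abs(C @ V[:, k])   # how strongly mode k appears in each output
        print(f"mode {k}: lambda = {lam}, input influence = {input_influence}, "
              f"output content = {output_content}")
    ```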

  18. Extinction ratio enhancement of SOA-based delayed-interference signal converter using detuned filtering

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Kumar, S.; Yan, L.-S.; Willner, A. E.

    2007-12-01

    We demonstrate experimentally a >3 dB extinction ratio improvement at the output of an SOA-based delayed-interference signal converter (DISC) using optical off-centered filtering. Through careful modeling of the carrier and phase dynamics, we explain in detail the origin of sub-pulses in the wavelength-converted output, with an emphasis on the time-resolved frequency chirping of the output signal. Our simulations show that the sub-pulses and the main pulses are oppositely chirped, which is also verified experimentally by analyzing the output with a chirp-form analyzer. We propose and demonstrate an optical off-center filtering technique which effectively suppresses these sub-pulses. The effects of filter detuning and phase bias adjustment in the delayed interferometer are experimentally characterized and optimized, leading to a >3 dB extinction ratio enhancement of the output signal.

  19. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observations, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through their metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  20. Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares

    NASA Technical Reports Server (NTRS)

    Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel

    2014-01-01

    The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.

  1. NEMS Freight Transportation Module Improvement Study

    EIA Publications

    2015-01-01

    The U.S. Energy Information Administration (EIA) contracted with IHS Global, Inc. (IHS) to analyze the relationship between the value of industrial output, physical output, and freight movement in the United States. The analysis is intended for use in updating analytic assumptions and the modeling structure within the National Energy Modeling System (NEMS) freight transportation module, including its forecasting methodologies and processes, and in identifying possible alternative approaches that would improve multi-modal freight flow and fuel consumption estimation.

  2. Modeling a dielectric elastomer as driven by triboelectric nanogenerator

    NASA Astrophysics Data System (ADS)

    Chen, Xiangyu; Jiang, Tao; Wang, Zhong Lin

    2017-01-01

    By integrating a triboelectric nanogenerator (TENG) and a thin-film dielectric elastomer actuator (DEA), the DEA can be directly powered and controlled by the output of the TENG, which demonstrates a self-powered actuation system for various practical applications in the fields of electronic skin and soft robotics. This paper describes a method to construct a physical model for this integrated TENG-DEA system on the basis of nonequilibrium thermodynamics and electrostatic induction theory. The model can precisely simulate the influences of both viscoelasticity and current leakage on the output performance of the TENG, which helps to better understand the interaction between TENG and DEA devices. Accordingly, the established electric field, the deformation strain of the DEA, and the output current from the TENG are systematically analyzed by using this model. A comparison between real measurements and simulation results confirms that the proposed model can predict the dynamic response of the DEA driven by contact electrification and can also quantitatively analyze the relaxation of the tribo-induced strain due to the leakage behavior. Hence, the proposed model could serve as guidance for optimizing the devices in future studies.

  3. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and to analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modification to MODFLOW and the Streamflow-Routing package was minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  4. A framework to analyze emissions implications of manufacturing shifts in the industrial sector through integrating bottom-up energy models and economic input-output environmental life cycle assessment models

    EPA Science Inventory

    Future year emissions depend highly on the evolution of the economy, technology and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal change while considering existing regulations and future unc...

  5. A framework to analyze emissions implications of manufacturing shifts in the industrial sector through integrating bottom-up energy models and economic input/output environmental life cycle assessment models

    EPA Science Inventory

    Future year emissions depend highly on economic, technological, societal and regulatory drivers. A scenario framework was adopted to analyze technology development pathways and changes in consumer preferences, and evaluate resulting emissions growth patterns while considering fut...

  6. Identifiability and Performance Analysis of Output Over-sampling Approach to Direct Closed-loop Identification

    NASA Astrophysics Data System (ADS)

    Sun, Lianming; Sano, Akira

    The output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output in the over-sampling approach are analyzed; they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of the plant model order, the asymptotic variance of the estimated parameters, and the asymptotic variance of the frequency response of the estimated model are also explored. It is shown that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.

  7. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
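
    As a small illustration of the standards-based access described above (not the IOOS testbed's own tooling), the sketch below opens a CF-compliant NetCDF/OPeNDAP dataset with the xarray package (assumed installed); the URL and variable names are hypothetical placeholders.

    ```python
    # Minimal standards-based access sketch with xarray; the OPeNDAP URL and the
    # variable/dimension names are hypothetical placeholders.
    import xarray as xr

    url = "http://example.org/thredds/dodsC/ocean_model_output.nc"   # hypothetical endpoint
    ds = xr.open_dataset(url)                     # works for local files or OPeNDAP URLs

    print(ds.attrs.get("Conventions", "no Conventions attribute"))   # e.g. "CF-1.6"
    sst = ds["temp"].isel(time=-1)                # hypothetical temperature field, last time step
    print(float(sst.mean()))                      # quick sanity check on the field
    ds.close()
    ```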

  8. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
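
    The SDBC method is described in the cited paper; purely as an illustration of the bias-correction idea, the sketch below applies empirical quantile mapping of model output against observations with numpy on synthetic data.

    ```python
    # Illustrative empirical quantile-mapping bias correction on synthetic data
    # (the generic idea only, not the specific SDBC implementation of the paper).
    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.gamma(shape=2.0, scale=3.0, size=5000)          # "observed" daily precipitation
    model_hist = rng.gamma(shape=2.0, scale=4.0, size=5000)   # biased "historical" model output
    model_fut = rng.gamma(shape=2.0, scale=4.5, size=5000)    # biased "future" model output

    q = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, q)          # model quantiles (historical period)
    obs_q = np.quantile(obs, q)                   # observed quantiles

    # Map each future model value through the historical model-to-observation relation.
    corrected = np.interp(model_fut, model_q, obs_q)
    print(f"raw future mean {model_fut.mean():.2f}, corrected mean {corrected.mean():.2f}")
    ```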

  9. A note on scrap in the 1992 U.S. input-output tables

    USGS Publications Warehouse

    Swisko, George M.

    2000-01-01

    Introduction: A key concern of industrial ecology and life cycle analysis is the disposal and recycling of scrap. One might conclude that the U.S. input-output tables are appropriate tools for analyzing scrap flows. Duchin, for instance, has suggested using input-output analysis for industrial ecology, indicating that input-output economics can trace the stocks and flows of energy and other materials from extraction through production and consumption to recycling or disposal. Lave and others use input-output tables to design life cycle assessment models for studying product design, materials use, and recycling strategies, even with the knowledge that these tables suffer from a lack of comprehensive and detailed data that may never be resolved. Although input-output tables can offer general guidance about the interdependence of economic and environmental processes, data reporting by industry and the economic concepts underlying these tables pose problems for rigorous material flow examinations. This is especially true for analyzing the output of scrap and scrap flows in the United States and estimating the amount of scrap that can be recycled. To show how data reporting has affected the values of scrap in recent input-output tables, this paper focuses on metal scrap generated in manufacturing. The paper also briefly discusses scrap that is not included in the input-output tables and some economic concepts that limit the analysis of scrap flows.

  10. Optical Limiting Using the Two-Photon Absorption Electrical Modulation Effect in HgCdTe Photodiode

    PubMed Central

    Cui, Haoyang; Yang, Junjie; Zeng, Jundong; Tang, Zhong

    2013-01-01

    The electrical modulation properties of the output intensity of two-photon absorption (TPA) pumping are analyzed in this paper. The frequency dispersion dependence of TPA and the electric field dependence of TPA were calculated using the Wherrett theory model and the Garcia theory model, respectively. Both predicted a dramatic variation of the TPA coefficient, which is attributed to the increase of the transition rate. The output intensity of the laser pulse propagating in the pn-junction device was calculated using a function-transfer method. It shows that the output intensity increases nonlinearly with increasing intensity of the incident light and eventually reaches saturation. The output saturation intensity depends on the electric field strength: the greater the electric field, the smaller the output intensity. Consequently, the clamped saturation intensity can be controlled by the electric field. The main advantage of electrical modulation is that the TPA can be varied continuously over an extremely wide range, thus adjusting the output intensity over a wide range. This large change provides a method to control the steady output intensity of TPA by adjusting the electric field. PMID:24198721

  11. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, effects caused by the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.

  12. Available pressure amplitude of linear compressor based on phasor triangle model

    NASA Astrophysics Data System (ADS)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration, and compact structure. It is important to study the matching mechanism between the compressor and the cold finger, which determines the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. Existing matching methods are simplified and mainly focus on compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on analyzing the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with the measurement results of the experiment. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the matching mechanism with a faster computational process. The model can also explain the experimentally observed proportional relationship between the output pressure amplitude and the piston displacement. By further model analysis, this phenomenon is shown to be an expression of an unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  13. Application of Artificial Neural Network to Optical Fluid Analyzer

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Nishida, Katsuhiko

    1994-04-01

    A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
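
    As a generic illustration of the three-layer network idea only (the OFA channels, targets, and effective-flow-stream training data are not given in the record), here is a sketch with scikit-learn's MLPRegressor on synthetic input/output pairs.

    ```python
    # Generic three-layer (single hidden layer) network sketch on synthetic data;
    # the OFA inputs, target oil fraction, and training set are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(500, 6))        # 6 hypothetical optical channels
    y = 0.7 * X[:, 0] + 0.2 * X[:, 3] ** 2 + 0.05 * rng.normal(size=500)  # synthetic target

    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X[:400], y[:400])                       # "training": tune the weighting parameters

    pred = net.predict(X[400:])                     # reprocess held-out inputs
    print(f"RMS error on held-out samples: {np.sqrt(np.mean((pred - y[400:]) ** 2)):.3f}")
    ```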

  14. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2015-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.

  15. An Integrated Architecture for Aircraft Engine Performance Monitoring and Fault Diagnostics: Engine Test Results

    NASA Technical Reports Server (NTRS)

    Rinehart, Aidan W.; Simon, Donald L.

    2014-01-01

    This paper presents a model-based architecture for performance trend monitoring and gas path fault diagnostics designed for analyzing streaming transient aircraft engine measurement data. The technique analyzes residuals between sensed engine outputs and model-predicted outputs for fault detection and isolation purposes. Diagnostic results from the application of the approach to test data acquired from an aircraft turbofan engine are presented. The approach is found to avoid false alarms when presented with nominal fault-free data. Additionally, the approach is found to successfully detect and isolate gas path seeded faults under steady-state operating scenarios, although some fault misclassifications are noted during engine transients. Recommendations for follow-on maturation and evaluation of the technique are also presented.

  16. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
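
    As a toy illustration of the Gröbner-basis step only (not the paper's algorithm or its biomodel), the sketch below uses sympy on a made-up exhaustive summary in which only the combinations k1 + k2 and k1*k3 are determined by the data.

    ```python
    # Toy Groebner-basis illustration with sympy; the polynomial "exhaustive summary"
    # below is made up, not the paper's algorithm or biomodel.
    import sympy as sp

    k1, k2, k3 = sp.symbols("k1 k2 k3", positive=True)

    # Hypothetical input-output equation coefficients: only k1 + k2 and k1*k3 are
    # fixed by the data, so k1, k2, k3 are individually unidentifiable.
    polys = [k1 + k2 - sp.Rational(5), k1 * k3 - sp.Rational(6)]

    G = sp.groebner(polys, k1, k2, k3, order="lex")
    print(G)   # the basis exposes the relations satisfied by the identifiable combinations
    ```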

  17. Diode-end-pumped solid-state lasers with dual gain media for multi-wavelength emission

    NASA Astrophysics Data System (ADS)

    Cho, C. Y.; Chang, C. C.; Chen, Y. F.

    2015-01-01

    We develop a theoretical model for designing a compact efficient multi-wavelength laser with dual gain media in a shared resonator. The developed model can be used to analyze the optimal output reflectivity for each wavelength to achieve maximum output power for multi-wavelength emission. We further demonstrate a dual-wavelength laser at 946 nm and 1064 nm with Nd:YAG and Nd:YVO4 crystals to confirm the numerical analysis. Under optimum conditions and at incident pump power of 17 W, output power at 946 nm and 1064 nm was up to 2.51 W and 2.81 W, respectively.

  18. Analysis and compensation of an aircraft simulator control loading system with compliant linkage. [using hydraulic equipment

    NASA Technical Reports Server (NTRS)

    Johnson, P. R.; Bardusch, R. E.

    1974-01-01

    A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on the minimum integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.

  19. Modeling of a resonant heat engine

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2012-12-01

    A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.

  20. Heart Performance Determination by Visualization in Larval Fishes: Influence of Alternative Models for Heart Shape and Volume

    PubMed Central

    Perrichon, Prescilla; Grosell, Martin; Burggren, Warren W.

    2017-01-01

    Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart and modeling this structure using a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, and no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the "standard" prolate spheroid model as well as a cylinder and a cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by a simple cylinder shape were 30-50% higher compared to the conventional prolate spheroid. However, when these stroke volume values were multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with the lowest variability in stroke volume and cardiac output. However, assessment of heart function, especially if stroke volume is the focus of the study, should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for the best estimation of cardiac output. PMID:28725199

  1. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
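
    As a bare-bones illustration of the residual-monitoring idea only (the piecewise-linear engine model, trim points, and thresholds of the paper are not available here), the sketch below compares sensed outputs with model-predicted outputs and flags samples whose residual exceeds a fixed threshold; all signals are synthetic.

    ```python
    # Bare-bones residual monitoring sketch on synthetic signals (not the paper's
    # engine model or detection thresholds).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(1000)
    model_pred = 500.0 + 20.0 * np.sin(t / 50.0)          # model-predicted output
    sensed = model_pred + rng.normal(0.0, 1.0, t.size)    # sensed output with noise
    sensed[700:] += 8.0                                   # seeded fault: step bias after sample 700

    residual = sensed - model_pred
    sigma = residual[:300].std()                          # noise level from a fault-free window
    alarm = np.abs(residual) > 5.0 * sigma                # simple fixed-threshold detector

    first = int(np.argmax(alarm)) if alarm.any() else None
    print(f"first alarm at sample {first}")               # expected: shortly after sample 700
    ```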

  2. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  3. Comparative study of diode-pumped alkali vapor laser and exciplex-pumped alkali laser systems and selection principal of parameters

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui

    2017-03-01

    A theoretical model based on a common pump structure is proposed to analyze the output characteristics of a diode-pumped alkali vapor laser (DPAL) and an exciplex-pumped alkali laser (XPAL). Cs-DPAL and Cs-Ar XPAL systems are used as examples. The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems with broadband pumping whose linewidth is several times the pump linewidth of a DPAL. Operation parameters including pump intensity, temperature, cell length, mixed-gas concentration, pump linewidth, and output coupler are analyzed for DPAL and XPAL systems based on the kinetic model. In addition, predictions of the selection principle for temperature and cell length are also presented. The concept of an equivalent "alkali areal density" is proposed. The results show that the output characteristics with the same alkali areal density but different temperatures turn out to be equal for either the DPAL or the XPAL system. It is the areal density that directly reflects the potential of DPAL or XPAL systems. A more detailed analysis of the similar influences of cavity parameters at the same areal density is also presented.

  4. USEEIO: a New and Transparent United States Environmentally Extended Input-Output Model

    EPA Science Inventory

    National-scope environmental life cycle models of goods and services may be used for many purposes, not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts ...

  5. Theoretical analyses of an injection-locked diode-pumped rubidium vapor laser.

    PubMed

    Cai, He; Gao, Chunqing; Liu, Xiaoxu; Wang, Shunyan; Yu, Hang; Rong, Kepeng; An, Guofei; Han, Juhong; Zhang, Wei; Wang, Hongyuan; Wang, You

    2018-04-02

    Diode-pumped alkali lasers (DPALs) have drawn much attention since they were proposed in 2001. A narrow-linewidth DPAL can potentially be applied in the fields of coherent communication, laser radar, and atomic spectroscopy. In this study, we propose a novel protocol to narrow the linewidth of one kind of DPAL, the diode-pumped rubidium vapor laser (DPRVL), by use of an injection locking technique. A kinetic model is first set up for an injection-locked DPRVL with an end-pumped configuration. The laser tunable duration is also analyzed for a continuous wave (CW) injection-locked DPRVL system. Then, the influences of the pump power, the power of the master laser, and the reflectance of the output coupler on the output performance are theoretically analyzed. The study should be useful for the design of a narrow-linewidth DPAL with relatively high output.

  6. Modeling and vibration control of the flapping-wing robotic aircraft with output constraint

    NASA Astrophysics Data System (ADS)

    He, Wei; Mu, Xinxing; Chen, Yunan; He, Xiuyu; Yu, Yao

    2018-06-01

    In this paper, we propose boundary control for suppressing undesired vibrations of the flapping-wing robotic aircraft (FWRA) subject to an output constraint. We also present the dynamics of the flexible wing of the FWRA with governing equations and boundary conditions, which are partial differential equations (PDEs) and ordinary differential equations (ODEs), respectively. An energy-based barrier Lyapunov function is introduced to analyze the system stability and prevent violation of the output constraint. Under the proposed boundary controller, the distributed states of the system remain in the constrained spaces. IBLF-based boundary controls are then proposed to assess the stability of the FWRA in the presence of the output constraint.

  7. The impacts of final demand changes on total output of Indonesian ICT sectors: An analysis using input-output approach

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2014-06-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Indonesian Information and Communication Technology (ICT) sectors. The study employs Input-Output (IO) analysis as the analysis tool; more specifically, the demand-pull IO quantity model is applied in order to achieve the objective. "Whole sector change" and "pure change" conditions are considered in this study. The results of the calculation show that, in both conditions, the biggest positive impact on the total output of the sectors is given by the change in household consumption, while the change in imports has a negative impact. One of the recommendations suggested by this study is to construct an import-restriction policy for ICT products.

  8. Laser ignition

    DOEpatents

    Early, James W.; Lester, Charles S.

    2004-01-13

    Sequenced pulses of light from an excitation laser with at least two resonator cavities with separate output couplers are directed through a light modulator and a first polarizing analyzer. A portion of the light not rejected by the first polarizing analyzer is transported through a first optical fiber into a first ignitor laser rod in an ignitor laser. Another portion of the light is rejected by the first polarizing analyzer and directed through a halfwave plate into a second polarization analyzer. A first portion of the output of the second polarization analyzer passes through the second polarization analyzer to a second, oscillator, laser rod in the ignitor laser. A second portion of the output of the second polarization analyzer is redirected by the second polarization analyzer to a second optical fiber which delays the beam before the beam is combined with output of the first ignitor laser rod. Output of the second laser rod in the ignitor laser is directed into the first ignitor laser rod which was energized by light passing through the first polarizing analyzer. Combined output of the first ignitor laser rod and output of the second optical fiber is focused into a combustible fuel where the first short duration, high peak power pulse from the ignitor laser ignites the fuel and the second long duration, low peak power pulse directly from the excitation laser sustains the combustion.

  9. Analysis of material flow in a utilization technology of low grade manganese ore and sulphur coal complementary

    NASA Astrophysics Data System (ADS)

    Wang, Bo-Zhi; Deng, Biao; Su, Shi-Jun; Ding, Sang-Lan; Sun, Wei-Yi

    2018-03-01

    Electrolytic manganese is conventionally produced by leaching low-grade manganese ore in SO2 generated by the combustion of high-sulfur coal. The coal ash and manganese slag produced by the coal combustion and the electrolytic manganese preparation can subsequently be used as raw ingredients for sulphoaluminate cement. In order to realize the `coal-electricity-sulfur-manganese-building material' system of complementary resource utilization, the conditions of material inflow and outflow in each process were determined using material flow analysis. Material flow models for each unit and process were obtained by analyzing the material flow of the new technology, from which an input-output model was derived. With this model, the quantities of all input and output materials can be obtained when the quantity of any one substance is fixed. Taking one ton of electrolytic manganese as a basis, the quantities of the other input materials and of cement can be determined with the input-output model. The whole system thus achieves a cleaner level of production, and the input-output model can be used as guidance in practical production.

  10. Dynamic analysis of a buckled asymmetric piezoelectric beam for energy harvesting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Blarigan, Louis, E-mail: louis01@umail.ucsb.edu; Moehlis, Jeff

    2016-03-15

    A model of a buckled beam energy harvester is analyzed to determine the phenomena behind the transition between high and low power output levels. It is shown that the presence of a chaotic attractor is a sufficient condition to predict high power output, though there are relatively small areas where high output is achieved without a chaotic attractor. The chaotic attractor appears as a product of a period doubling cascade or a boundary crisis. Bifurcation diagrams provide insight into the development of the chaotic region as the input power level is varied, as well as the intermixed periodic windows.

  11. Smoothing effect for spatially distributed renewable resources and its impact on power grid robustness.

    PubMed

    Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki

    2017-03-01

    In this paper, we show that spatial correlation of renewable energy outputs greatly influences the robustness of the power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs. We find that the spatial correlation of renewable energy outputs depends on the locations, while the influence of the spatial correlation of renewable energy outputs on power grids is not well known. Thus, second, by employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid with spatial correlation of renewable energy outputs. The analysis is performed by using a realistic differential-algebraic equations model. The results show that the spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that we should consider the spatial correlation of the renewable energy outputs when estimating the stability of power grids.

  12. Enhancement of the output emission efficiency of thin-film photoluminescence composite structures based on PbSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.

    2010-12-15

    The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film where the central layer is formed of a composite medium is proposed to calculate the reflectance spectra of the system. In von Bruggeman's approximation of the effective medium theory, the effective permittivity of the composite layer is calculated. The model proposed in the study is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the results of calculation, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.

  13. Theoretical foundations for environmental Kuznets curve analysis

    NASA Astrophysics Data System (ADS)

    Lantz, Van

    This thesis provides a dynamic theory for analyzing the paths of aggregate output and pollution in a country over time. An infinite horizon, competitive growth-pollution model is explored in order to determine the role that economic scale, production techniques, and pollution regulations play in explaining the inverted U-shaped relationship between output and some forms of pollution (otherwise known as the Environmental Kuznets Curve, or EKC). Results indicate that the output-pollution relationship may follow a strictly increasing, strictly decreasing (but bounded), inverted U-shaped, or some combination of curves. While the 'scale' effect may cause output and pollution to exhibit a monotonic relationship, 'technique' and 'regulation' effects may ultimately cause a de-linking of these two variables. Pollution-minimizing energy regulation policies are also investigated within this framework. It is found that the EKC may be 'flattened' or even eliminated moving from a poorly-regulated economy to one that minimizes pollution. The model is calibrated to the US economy for output (gross national product, GNP) and two pollutants (sulfur dioxide, SO2, and carbon dioxide, CO2) over the period 1900 to 1990. Results indicate that the model replicates the observations quite well. The predominance of 'scale' effects causes aggregate SO2 and CO2 levels to increase with GNP in the early stages of development. Then, in the case of SO2, 'technique' and 'regulation' effects may be the cause of falling SO2 levels with continued economic growth (establishing the EKC). CO2 continues to monotonically increase as output levels increase over time. The positive relationship may be due to the lack of regulations on this pollutant. If stricter regulation policies were instituted in the two case studies, an improved allocation of resources may result. While GNP may be 2.5% to 20% lower than what has been realized in the US economy (depending on the pollution variable analyzed), individual welfare may increase from lower pollution levels.

  14. Fully Burdened Cost of Energy Analysis: A Model for Marine Corps Systems

    DTIC Science & Technology

    2013-01-30

    creation of the output distribution since they are not required values for a triangular distribution. The model has the capacity to analyze a wide...Partnerships. (2009). Army energy security implementation strategy. Washington, DC: Government Printing Office. Bell Helicopter. (n.d.). The Bell AH-1Z Zulu

  15. International trade inoperability input-output model (IT-IIM): theory and application.

    PubMed

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking the economic perturbation of each sector as input, the IIM outputs the degree of economic production impact on all industry sectors. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the international trade inoperability of all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).
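    As a point of reference, one common demand-side formulation of the IIM normalizes the Leontief system by as-planned outputs: with D = diag(x_bar), the interdependency matrix is A* = D^-1 A D and the sector inoperabilities solve q = (I - A*)^-1 c*. The sketch below illustrates that bookkeeping on hypothetical three-sector data; it is not the IT-IIM extension itself, whose trade-specific terms are described in the article.

      # Minimal sketch of a demand-reduction inoperability IO formulation
      # (hypothetical 3-sector data; A* = D^-1 A D with D = diag of as-planned output).
      import numpy as np

      A    = np.array([[0.10, 0.05, 0.02],
                       [0.20, 0.15, 0.10],
                       [0.05, 0.10, 0.08]])      # technical coefficients
      xbar = np.array([100.0, 80.0, 60.0])       # as-planned sector outputs
      D    = np.diag(xbar)

      A_star = np.linalg.inv(D) @ A @ D          # normalized interdependency matrix

      # c_star: normalized demand perturbation, e.g. a port disruption cutting demand
      # for sector 0 by 5% of its as-planned output.
      c_star = np.array([0.05, 0.0, 0.0])

      q = np.linalg.solve(np.eye(3) - A_star, c_star)   # sector inoperability levels
      print("Inoperability by sector:", q)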

  16. Modeling and validation of single-chamber microbial fuel cell cathode biofilm growth and response to oxidant gas composition

    NASA Astrophysics Data System (ADS)

    Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2016-10-01

    This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.

  17. Modeling and validation of single-chamber microbial fuel cell cathode biofilm growth and response to oxidant gas composition

    DOE PAGES

    Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; ...

    2016-08-15

    This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Lastly, simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.

  18. Study of hydrological extremes - floods and droughts in global river basins using satellite data and model output

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Fayne, J.; Bolten, J. D.

    2016-12-01

    We will use satellite data from TRMM (Tropical Rainfall Measuring Mission), AMSR (Advanced Microwave Scanning Radiometer), GRACE (Gravity Recovery and Climate Experiment), and MODIS (Moderate Resolution Imaging Spectroradiometer) and model output from NASA GLDAS (Global Land Data Assimilation System) to understand the linkages between hydrological variables. These hydrological variables include precipitation, soil moisture, vegetation index, surface temperature, ET, and total water. We will present results for major river basins such as the Amazon, Colorado, Mississippi, California, Danube, Nile, Congo, Yangtze, Mekong, Murray-Darling, and Ganga-Brahmaputra. The major floods and droughts in these watersheds will be mapped in time and space using the satellite data and model outputs mentioned above. We will analyze the various hydrological variables and conduct a synergistic study during times of floods and droughts. In order to compare hydrological variables between river basins with vastly different climate and land use, we construct an index that is scaled by the climatology. This allows us to compare across different climate, topography, soil, and land use regimes. The analysis shows that the hydrological variables derived from satellite data and NASA models clearly reflect the hydrological extremes. This is especially true when data from different sensors are analyzed together - for example, rainfall data from TRMM and total water data from GRACE. Such analyses will help to construct prediction tools for water resources applications.

  19. Comparison of individual-based model output to data using a model of walleye pollock early life history in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu

    2016-10-01

    Biophysical individual-based models (IBMs) have been used to study aspects of early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we aim to address this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g., distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful to determine the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on the details of the distributions and whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply and some more complex. We also give recommendations for a multistep process to validate spatial output from IBMs.
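    The exact NDI and OC definitions used in the paper are not reproduced here; the sketch below shows one plausible discretized variant of each (proportional overlap of normalized fields, and a normalized absolute difference) applied to two random gridded fields standing in for modeled and observed densities.

      # Hedged sketch of two simple field-comparison metrics on binned (gridded) densities.
      # The paper's NDI/OC definitions may differ; these are common variants:
      #   OC  = sum_i min(p_i, q_i)                 (overlap of normalized fields)
      #   NDI = sum_i |p_i - q_i| / sum_i (p_i + q_i)
      import numpy as np

      rng = np.random.default_rng(0)
      model_field = rng.random((20, 30))     # e.g. modeled larval density on a grid
      data_field  = rng.random((20, 30))     # e.g. survey-derived density on the same grid

      def normalize(field):
          return field / field.sum()

      p, q = normalize(model_field), normalize(data_field)
      overlap_coefficient = np.minimum(p, q).sum()
      ndi = np.abs(p - q).sum() / (p + q).sum()

      print(f"OC  = {overlap_coefficient:.3f}")
      print(f"NDI = {ndi:.3f}")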

  20. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    NASA Astrophysics Data System (ADS)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) DEA models and their duality formulations with vector average input and output. Three input variables (collection length in meters, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are managed inefficiently.
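    For readers unfamiliar with the envelopment form of the CCR model, the input-oriented efficiency of a DMU can be obtained from a small linear program. The sketch below solves it with scipy for toy data (five DMUs, three inputs, two outputs); the numbers are illustrative and are not the Jengka data set.

      # Hedged sketch of the input-oriented CCR envelopment model solved as an LP
      # (toy data with 5 DMUs, 3 inputs, 2 outputs -- not the Jengka data).
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[400, 3, 2], [350, 2, 1], [500, 4, 2], [300, 3, 1], [450, 2, 2]], float)  # inputs
      Y = np.array([[3, 900], [2, 700], [4, 1200], [2, 650], [3, 1000]], float)               # outputs
      n, m = X.shape
      s = Y.shape[1]

      def ccr_input_efficiency(o):
          # variables: [theta, lambda_1 .. lambda_n]
          c = np.r_[1.0, np.zeros(n)]
          A_in  = np.c_[-X[o], X.T]            # sum_j lam_j x_ij - theta x_io <= 0
          A_out = np.c_[np.zeros(s), -Y.T]     # -sum_j lam_j y_rj <= -y_ro
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(m), -Y[o]]
          bounds = [(None, None)] + [(0, None)] * n
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
          return res.x[0]

      for o in range(n):
          print(f"DMU {o}: efficiency = {ccr_input_efficiency(o):.3f}")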

  1. Implementation of a digital evaluation platform to analyze bifurcation based nonlinear amplifiers

    NASA Astrophysics Data System (ADS)

    Feldkord, Sven; Reit, Marco; Mathis, Wolfgang

    2016-09-01

    Recently, nonlinear amplifiers based on the supercritical Andronov-Hopf bifurcation have become a focus of attention, especially in the modeling of the mammalian hearing organ. In general, to gain deeper insights into the input-output behavior, the analysis of bifurcation based amplifiers requires a flexible framework to exchange equations and adjust certain parameters. A DSP implementation is presented which is capable of analyzing various amplifier systems. Amplifiers based on the Andronov-Hopf and Neimark-Sacker bifurcations are implemented and compared exemplarily. It is shown that the Neimark-Sacker system remarkably outperforms the Andronov-Hopf amplifier in terms of CPU usage. Nevertheless, both show a similar input-output behavior over a wide parameter range. Combined with a USB-based control interface connected to a PC, the digital framework provides a powerful instrument to analyze bifurcation based amplifiers.

  2. Re-using biological devices: a model-aided analysis of interconnected transcriptional cascades designed from the bottom-up.

    PubMed

    Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo

    2017-01-01

    The study of simplified, ad-hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom-up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, sources of unpredictability, such as cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved when resource limitations were modelled, but further efforts are needed to improve the accuracy of models for biological engineering.
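    The "no-burden" idea of composing individually characterized Hill transfer functions can be illustrated with a minimal sketch; the parameter values below are placeholders rather than the estimates reported in the paper.

      # Minimal sketch of the "no-burden" approach: characterize each stage by a Hill
      # transfer function, then predict the cascade output by composing the fitted stages.
      # Parameter values are illustrative, not those estimated in the paper.
      import numpy as np

      def hill_activation(x, v_max, k, n):
          return v_max * x**n / (k**n + x**n)

      def hill_repression(x, v_max, k, n):
          return v_max * k**n / (k**n + x**n)

      inducer = np.logspace(-3, 2, 50)                               # input inducer level (a.u.)
      stage1 = hill_activation(inducer, v_max=100.0, k=1.0, n=2.0)   # inducible promoter
      stage2 = hill_repression(stage1,  v_max=80.0,  k=30.0, n=1.5)  # repressor-driven stage

      print("Predicted cascade output at low/high induction:",
            stage2[0].round(2), stage2[-1].round(2))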

  3. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.

  4. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
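    The semianalytical scheme MODPATH builds on (Pollock's method) exploits the fact that, with linear intracell interpolation of the face velocities, the travel time to a cell face has a closed form. The one-axis helper below is a hedged sketch of that idea, not the MODPATH source code, and the variable names are illustrative.

      # Hedged 1-D sketch of the semianalytical particle-tracking idea: within a cell the
      # velocity varies linearly between the faces, so the exit time along each axis has a
      # closed form; the particle leaves through the face with the smallest exit time.
      import math

      def axis_exit_time(xp, x1, x2, v1, v2):
          """Exit time along one axis for a particle at xp in a cell spanning [x1, x2],
          with face velocities v1 (at x1) and v2 (at x2). Returns (time, exit_position)."""
          A = (v2 - v1) / (x2 - x1)                 # velocity gradient in the cell
          vp = v1 + A * (xp - x1)                   # velocity at the particle position
          if vp == 0.0:
              return math.inf, xp                   # no movement along this axis
          x_exit, v_exit = (x2, v2) if vp > 0 else (x1, v1)
          if abs(A) < 1e-12:                        # (near-)uniform velocity: linear travel
              return (x_exit - xp) / vp, x_exit
          if v_exit == 0.0 or v_exit * vp < 0:      # flow reverses before the face is reached
              return math.inf, xp
          return math.log(v_exit / vp) / A, x_exit

      # Two-dimensional cell: exit through whichever face is reached first.
      tx, _ = axis_exit_time(xp=5.0, x1=0.0, x2=10.0, v1=1.0, v2=2.0)
      ty, _ = axis_exit_time(xp=2.0, x1=0.0, x2=10.0, v1=0.5, v2=0.5)
      print(f"exit time = {min(tx, ty):.3f} (x-face {tx:.3f}, y-face {ty:.3f})")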

  5. Comparative study of DPAL and XPAL systems and selection principle of parameters

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui

    2016-10-01

    A theoretical model based on a common pump structure is proposed in this paper to analyze the laser output characteristics of the DPAL (diode-pumped alkali vapor laser) and the XPAL (exciplex-pumped alkali laser). The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems with broadband pumping whose linewidth is several times that used for DPAL. Operation parameters including pump intensity, temperature, cell length, mixed-gas concentration, pump linewidth, and output mirror reflectivity are analyzed for DPAL and XPAL systems based on the kinetic model. The results show better performance for the Cs-Ar XPAL laser, which requires relatively high Ar concentration, high pump intensity, and high temperature; comparatively, the Cs-DPAL laser calls for lower temperature and lower pump intensity. In addition, predictions of the selection principle for temperature and cell length are also presented. The concept of an equivalent "alkali areal density", defined as the product of the alkali number density and the cell length, is proposed in this paper. The results show that the output characteristics of a DPAL (or XPAL) system with the same alkali areal density but different temperatures turn out to be equal. It is the areal density that directly reflects the potential of DPAL and XPAL systems. A more detailed analysis of the similar influence of cavity parameters at the same areal density is also presented. Detailed results for continuous-wave DPAL and XPAL performance as a function of pump laser linewidth and mixed-gas pressure are presented, along with an analysis of the influence of the output coupler.
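    The areal-density bookkeeping can be sketched directly: the alkali number density follows from a saturated-vapor-pressure correlation via n(T) = P_sat(T)/(k_B T), and the areal density is n(T) times the cell length L. The Antoine-type coefficients below are placeholders, not published cesium vapor-pressure constants, so the numbers are purely illustrative.

      # Hedged sketch of the "alkali areal density" bookkeeping: areal density = n(T) * L,
      # where n(T) = P_sat(T) / (k_B * T). The coefficients below are placeholders, NOT
      # published cesium vapor-pressure constants -- substitute real data before use.
      K_B = 1.380649e-23             # Boltzmann constant, J/K
      A_COEF, B_COEF = 9.0, 4000.0   # placeholder: log10(P_sat[Pa]) = A - B/T

      def number_density(T):
          p_sat = 10.0 ** (A_COEF - B_COEF / T)   # saturated vapor pressure, Pa
          return p_sat / (K_B * T)                # atoms per m^3

      def areal_density(T, cell_length_m):
          return number_density(T) * cell_length_m   # atoms per m^2

      # Two (T, L) pairs tuned to the same areal density should, per the paper's claim,
      # give essentially the same output characteristics.
      ref = areal_density(T=400.0, cell_length_m=0.05)
      L_matched = ref / number_density(T=420.0)
      print(f"reference areal density: {ref:.3e} m^-2")
      print(f"cell length at 420 K giving the same areal density: {L_matched*100:.2f} cm")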

  6. An econometric model of the hardwood lumber market

    Treesearch

    William G. Luppold

    1982-01-01

    A recursive econometric model with causal flow originating from the demand relationship is used to analyze the effects of exogenous variables on quantity and price of hardwood lumber. Wage rates, interest rates, stumpage price, lumber exports, and price of lumber demanders' output were the major factors influencing quantities demanded and supplied and hardwood...

  7. RDT&E Laboratory Capacity Utilization and Productivity Measurement Methods for Financial Decision-Making within DON.

    DTIC Science & Technology

    1998-06-01

    process or plant can complete using a 24-hour, seven-day operation with zero waste, i.e., the maximum output capability, allowing no adjustment for...models: • Resource Effectiveness Model: > Analyzes economic impact of capacity management decisions > Assumes that "zero waste" is the goal > Supports

  8. The dynamics of total outputs of Indonesian industrial sectors: A further study

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2017-03-01

    The purpose of the current study is to extend previous studies that analyzed the impacts of final demand modifications on the total outputs of the industrial sectors of a particular country. More specifically, this study analyzes the impacts on the total outputs of Indonesian industrial sectors. The study employs a demand-pull Input-Output (IO) quantity model, one of the calculation instruments of IO analysis, and focuses on seventeen industries. Two scenarios are used, namely modification of other final demands and modification of imports, and the “whole sector change” condition is implemented in the calculations. The initial period of the study is 2010. The results show that positive impacts on the total outputs of the focused sectors are produced by scenario 1, the change in other final demands; on the contrary, negative impacts are produced by scenario 2, the modification of imports. Suggestions for improving the total outputs of the discussed industries are given based on these results.

  9. Dynamic Estimation on Output Elasticity of Highway Capital Stock in China

    NASA Astrophysics Data System (ADS)

    Li, W. J.; Zuo, Q. L.; Bai, Y. F.

    2017-12-01

    Using the Perpetual Inventory Method to calculate the capital stock of highways in China from 1988 to 2016, this paper builds a state space model based on a translog production function; using ridge regression and Kalman filtering, dynamic estimates of output elasticity are obtained and analyzed. The conclusions are as follows. First, the growth of China's highway capital stock falls into three stages, 1988-2000, 2001-2009, and 2010-2016, with steady growth in the first and third stages and rapid growth in between. Second, the output elasticity of highway capital stock, between 0.154 and 0.248, is slightly larger than the output elasticity of the human input factor and lower than the output elasticity of the technical level; it has a positive effect on the transport economy and rises steadily, but output efficiency is low on the whole. Third, around 2010, the highway industry begins to exhibit increasing returns to scale.

  10. Numerical investigation of output beam quality in efficient broadband optical parametric chirped pulse amplification

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Di; Xu, Lu; Liang, Xiao-Yan

    2017-01-01

    We theoretically analyzed the output beam quality of broad-bandwidth non-collinear optical parametric chirped pulse amplification (NOPCPA) in LiB3O5 (LBO) centered at 800 nm. With a three-dimensional numerical model, the influences of the pump intensity, pump and signal spatial modulations, and the walk-off effect on the OPCPA output beam quality are presented, together with the conversion efficiency and the gain spectrum. The pump modulation is the dominant factor affecting the output beam quality; comparatively, the influence of signal modulation is insignificant. For a low-energy system with small beam sizes, the walk-off effect has to be considered. Pump modulation and walk-off lead to an asymmetric output beam profile with increased modulation, and a particular pump modulation type is found to optimize output beam quality and efficiency. For a high-energy system with large beam sizes, the walk-off effect can be neglected, and a certain amount of back-conversion is beneficial for reducing the output modulation. A trade-off must be made between the output beam quality and the conversion efficiency, especially when the pump modulation is large. A relatively high conversion efficiency and a low output modulation are both achievable by controlling the pump modulation and intensity.

  11. Description and Features of UX-Analyze

    DTIC Science & Technology

    2009-02-01

    POB model and GUI for EM63 Inversion The full Pasion-Oldenburg-Billings (POB) analysis assumes an axially symmetric (axial and transverse) tensor...output from the EM63 inversion. 1 Pasion, L.R., and Oldenburg, D.W., 2001, Locating and

  12. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging.

    PubMed

    Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo

    2015-02-01

    The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV -1 while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV -1 . The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR resulting in a bulk absorption coefficient of 5 × 10 -5 μm -1 . The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar Cs:Tl of 5 × 10 -5 μm -1 . To the author's knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure. © 2015 American Association of Physicists in Medicine.

  13. Light output measurements and computational models of microcolumnar CsI scintillators for x-ray imaging.

    PubMed

    Nillius, Peter; Klamra, Wlodek; Sibczynski, Pawel; Sharma, Diksha; Danielsson, Mats; Badano, Aldo

    2015-02-01

    The authors report on measurements of light output and spatial resolution of microcolumnar CsI:Tl scintillator detectors for x-ray imaging. In addition, the authors discuss the results of simulations aimed at analyzing the results of synchrotron and sealed-source exposures with respect to the contributions of light transport to the total light output. The authors measured light output from a 490-μm CsI:Tl scintillator screen using two setups. First, the authors used a photomultiplier tube (PMT) to measure the response of the scintillator to sealed-source exposures. Second, the authors performed imaging experiments with a 27-keV monoenergetic synchrotron beam and a slit to calculate the total signal generated in terms of optical photons per keV. The results of both methods are compared to simulations obtained with hybridmantis, a coupled x-ray, electron, and optical photon Monte Carlo transport package. The authors report line response (LR) and light output for a range of linear absorption coefficients and describe a model that fits at the same time the light output and the blur measurements. Comparing the experimental results with the simulations, the authors obtained an estimate of the absorption coefficient for the model that provides good agreement with the experimentally measured LR. Finally, the authors report light output simulation results and their dependence on scintillator thickness and reflectivity of the backing surface. The slit images from the synchrotron were analyzed to obtain a total light output of 48 keV−1 while measurements using the fast PMT instrument setup and sealed-sources reported a light output of 28 keV−1. The authors attribute the difference in light output estimates between the two methods to the difference in time constants between the camera and PMT measurements. Simulation structures were designed to match the light output measured with the camera while providing good agreement with the measured LR resulting in a bulk absorption coefficient of 5 × 10−5μm−1. The combination of experimental measurements for microcolumnar CsI:Tl scintillators using sealed-sources and synchrotron exposures with results obtained via simulation suggests that the time course of the emission might play a role in experimental estimates. The procedure yielded an experimentally derived linear absorption coefficient for microcolumnar Cs:Tl of 5 × 10−5μm−1. To the author’s knowledge, this is the first time this parameter has been validated against experimental observations. The measurements also offer insight into the relative role of optical transport on the effective optical yield of the scintillator with microcolumnar structure.

  14. Optimal design of a vibration-based energy harvester using magnetostrictive material (MsM)

    NASA Astrophysics Data System (ADS)

    Hu, J.; Xu, F.; Huang, A. Q.; Yuan, F. G.

    2011-01-01

    In this study, an optimal vibration-based energy harvesting system using magnetostrictive material (MsM) was designed and tested to enable the powering of a wireless sensor. In particular, the conversion efficiency, converting from magnetic to electric energy, is approximately modeled from the magnetic field induced by the beam vibration. A number of factors that affect the output power such as the number of MsM layers, coil design and load matching are analyzed and explored in the design optimization. From the measurements, the open-circuit voltage can reach 1.5 V when the MsM cantilever beam operates at the second natural frequency 324 Hz. The AC output power is 970 µW, giving a power density of 279 µW cm⁻³. The attempt to use electrical reactive components (either inductors or capacitors) to resonate the system at any frequency has also been analyzed and tested experimentally. The results showed that this approach is not feasible to optimize the power. Since the MsM device has low output voltage characteristics, a full-wave quadrupler has been designed to boost the rectified output voltage. To deliver the maximum output power to the load, a complex conjugate impedance matching between the load and the MsM device is implemented using a discontinuous conduction mode (DCM) buck-boost converter. The DC output power after the voltage quadrupler reaches 705 µW and the corresponding power density is 202 µW cm⁻³. The output power delivered to a lithium rechargeable battery is around 630 µW, independent of the load resistance.

  15. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain

    PubMed Central

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization. PMID:28873432

  16. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain.

    PubMed

    Xing, Lizhi

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.
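    The three indicators can be illustrated on a small weighted network. The definitions below follow common weighted-network conventions (average weight per link for the "unit weight", the disparity Y_i, and a Barrat-style weighted clustering coefficient) and may differ in detail from the indicators defined in the paper; self-loops are ignored in this sketch.

      # Hedged sketch of three weighted-network indicators on a small symmetric weight matrix.
      # Definitions are common conventions, not necessarily the paper's exact formulas.
      import numpy as np

      W = np.array([[0.0, 2.0, 1.0, 0.0],
                    [2.0, 0.0, 3.0, 1.0],
                    [1.0, 3.0, 0.0, 0.5],
                    [0.0, 1.0, 0.5, 0.0]])
      A = (W > 0).astype(float)                 # binary adjacency
      strength = W.sum(axis=1)
      degree = A.sum(axis=1)

      unit_weight = strength / degree                            # average weight per link
      disparity = ((W / strength[:, None]) ** 2).sum(axis=1)     # concentration of weight

      n = len(W)
      clustering = np.zeros(n)                                   # Barrat-style weighted clustering
      for i in range(n):
          acc = 0.0
          for j in range(n):
              for h in range(n):
                  if j != i and h != i and j != h:
                      acc += (W[i, j] + W[i, h]) / 2 * A[i, j] * A[i, h] * A[j, h]
          if degree[i] > 1:
              clustering[i] = acc / (strength[i] * (degree[i] - 1))

      print("unit weight:", unit_weight.round(3))
      print("disparity  :", disparity.round(3))
      print("weighted clustering:", clustering.round(3))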

  17. CesrTA Retarding Field Analyzer Modeling Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvey, J.R.; Celata, C.M.; Crittenden, J.A.

    2010-05-23

    Retarding field analyzers (RFAs) provide an effective measure of the local electron cloud density and energy distribution. Proper interpretation of RFA data can yield information about the behavior of the cloud, as well as the surface properties of the instrumented vacuum chamber. However, due to the complex interaction of the cloud with the RFA itself, understanding these measurements can be nontrivial. This paper examines different methods for interpreting RFA data via cloud simulation programs. Techniques include postprocessing the output of a simulation code to predict the RFA response; and incorporating an RFA model into the cloud modeling program itself.

  18. Sensitivity analysis of a short distance atmospheric dispersion model applied to the Fukushima disaster

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    In a previous study, the sensitivity of a long-distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to IRSN's operational short-distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in the case of accidental pollutant releases, to minimize population exposure during the accident and to obtain an accurate assessment of short- and long-term environmental and sanitary impacts. Long-range models are mostly used for consequence assessment, while short-range models are better adapted to the early phases of a crisis and are used for prognosis. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced with the short-range model than with the long-range one. Afterward, the Sobol' method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also confronts two parameterizations, Doury's and Pasquill's, to contrast their behavior. Doury's model seems to excessively inflate the influence of some inputs compared to Pasquill's, such as the emission altitude and the air stability, which do not play the same role in the two models. The outputs of the long-range model were dominated by only a few inputs; on the contrary, in this study the influence is shared more evenly among the inputs.
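    The elementary-effects computation at the heart of the Morris method can be sketched with a simplified one-at-a-time design; the toy output function below is a stand-in, not the pX dispersion model, and the input names are illustrative.

      # Hedged sketch of Morris-style elementary effects with a simplified one-at-a-time
      # design, applied to a toy dispersion-like output (not the pX model itself).
      import numpy as np

      def toy_model(x):
          # stand-in for a dispersion-model output, e.g. a ground-level concentration
          wind, stability, source_height, diffusion = x
          return np.exp(-source_height * stability) * diffusion / (1.0 + wind)

      rng = np.random.default_rng(1)
      k, r, delta = 4, 20, 0.1          # inputs, repetitions, step (in normalized units)
      effects = np.zeros((r, k))

      for t in range(r):
          base = rng.uniform(0.1, 0.9, size=k)       # normalized base point in [0,1]^k
          f0 = toy_model(base)
          for i in range(k):
              step = base.copy()
              step[i] += delta
              effects[t, i] = (toy_model(step) - f0) / delta   # elementary effect of input i

      mu_star = np.abs(effects).mean(axis=0)   # overall influence
      sigma   = effects.std(axis=0)            # interactions / non-linearity
      for name, m, s in zip(["wind", "stability", "source_height", "diffusion"], mu_star, sigma):
          print(f"{name:14s} mu* = {m:6.3f}  sigma = {s:6.3f}")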

  19. The Use of Molecular Modeling as "Pseudoexperimental" Data for Teaching VSEPR as a Hands-On General Chemistry Activity

    ERIC Educational Resources Information Center

    Martin, Christopher B.; Vandehoef, Crissie; Cook, Allison

    2015-01-01

    A hands-on activity appropriate for first-semester general chemistry students is presented that combines traditional VSEPR methods of predicting molecular geometries with introductory use of molecular modeling. Students analyze a series of previously calculated output files consisting of several molecules each in various geometries. Each structure…

  20. Analyzing Company Economics Using the Leontief Open Production Model

    ERIC Educational Resources Information Center

    Laumakis, Paul J.

    2008-01-01

    This article details the application of an economic theory to the fiscal operation of a small engineering consulting firm. Nobel Prize-winning economist Wassily Leontief developed his general input-output economic theory in the mid-twentieth century to describe the flow of goods and services in the U.S. economy. We use one mathematical model that…

  1. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
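    The core radiometric step such software performs is the paint calibration, commonly written in the Stern-Volmer form I_ref/I = A + B*(P/P_ref), with A and B fitted in situ from pressure-tap data. The sketch below uses synthetic numbers and is not Legato's implementation.

      # Hedged sketch of an in-situ radiometric PSP calibration using the common
      # Stern-Volmer form  I_ref / I = A + B * (P / P_ref), fit against pressure-tap data.
      # The tap readings and intensity ratios below are synthetic.
      import numpy as np

      P_ref = 101325.0                                              # wind-off reference pressure, Pa
      tap_pressure    = np.array([70e3, 85e3, 101.3e3, 120e3])      # tap readings, Pa
      intensity_ratio = np.array([0.81, 0.90, 1.00, 1.11])          # I_ref / I at tap locations

      # Linear least-squares fit of A and B
      G = np.c_[np.ones_like(tap_pressure), tap_pressure / P_ref]
      (A_fit, B_fit), *_ = np.linalg.lstsq(G, intensity_ratio, rcond=None)

      def pressure_from_ratio(i_ratio):
          return P_ref * (i_ratio - A_fit) / B_fit                  # invert the calibration

      print(f"A = {A_fit:.3f}, B = {B_fit:.3f}")
      print("Mapped pressure at I_ref/I = 1.10:", round(pressure_from_ratio(1.10)), "Pa")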

  2. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors, called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components, or sensor tolerances. Uncertainties in these variables cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g., whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.
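    The propagation of input uncertainty to output variation can be illustrated with a plain Monte Carlo sketch; the simple power expression and the distributions below are stand-ins, not the ISS electrical power system model.

      # Hedged sketch of probabilistic propagation: sample uncertain influencing factors,
      # push each sample through a (stand-in) power model, and summarize the output spread.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      solar_flux   = rng.normal(1361.0, 15.0, n)            # W/m^2
      array_area   = 30.0                                   # m^2 (fixed)
      efficiency   = rng.normal(0.14, 0.005, n)             # cell efficiency
      pointing_err = rng.uniform(0.0, np.radians(5.0), n)   # tracking error, rad

      power = solar_flux * array_area * efficiency * np.cos(pointing_err)

      print(f"mean output  : {power.mean():.0f} W")
      print(f"5th-95th pct : {np.percentile(power, 5):.0f} - {np.percentile(power, 95):.0f} W")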

  3. Laser/lidar analysis and testing

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1994-01-01

    Section 1 of this report details development of a model of the output pulse frequency spectrum of a pulsed transversely excited (TE) CO2 laser. In order to limit the computation time required, the model was designed around a generic laser pulse shape model. The use of such a procedure allows many possible laser configurations to be examined. The output pulse shape is combined with the calculated frequency chirp to produce the electric field of the output pulse which is then computationally mixed with a local oscillator field to produce the heterodyne beat signal that would fall on a detector. The power spectral density of this heterodyne signal is then calculated. Section 2 reports on a visit to the LAWS laser contractors to measure the performance of the laser breadboards. The intention was to acquire data using a digital oscilloscope so that it could be analyzed. Section 3 reports on a model developed to assess the power requirements of a 5J LAWS instrument on a Spot MKII platform in a polar orbit. The performance was assessed for three different latitude dependent sampling strategies.
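    The heterodyne step described in Section 1 can be sketched generically: build a chirped pulse field, mix it with a local-oscillator field, and examine the power spectral density of the beat. The pulse shape, chirp rate, and sampling below are placeholders, not the TE CO2 laser model.

      # Hedged sketch of the heterodyne step: a chirped pulse field is mixed with a local
      # oscillator and the power spectral density of the beat is computed. The envelope and
      # chirp are generic stand-ins, not the TE CO2 laser pulse model.
      import numpy as np

      fs = 200e6                                  # sample rate, Hz
      t = np.arange(0, 20e-6, 1/fs)               # 20 us record
      envelope = np.exp(-((t - 3e-6) / 1e-6)**2)  # generic pulse envelope
      f_offset, chirp_rate = 10e6, 0.2e12         # LO offset (Hz) and linear chirp (Hz/s)

      pulse_field = envelope * np.exp(1j * 2*np.pi * (f_offset*t + 0.5*chirp_rate*t**2))
      beat = pulse_field.real                     # detector beat signal (LO offset folded into f_offset)

      psd = np.abs(np.fft.rfft(beat))**2
      freqs = np.fft.rfftfreq(t.size, 1/fs)
      print(f"beat spectrum peaks near {freqs[np.argmax(psd)]/1e6:.2f} MHz")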

  4. Numerical Investigation of Flapwise-Torsional Vibration Model of a Smart Section Blade with Microtab

    DOE PAGES

    Li, Nailu; Balas, Mark J.; Yang, Hua; ...

    2015-01-01

    This paper presents a method to develop an aeroelastic model of a smart section blade equipped with microtab. The model is suitable for potential passive vibration control study of the blade section in classic flutter. Equations of the model are described by the nondimensional flapwise and torsional vibration modes coupled with the aerodynamic model based on the Theodorsen theory and aerodynamic effects of the microtab based on the wind tunnel experimental data. The aeroelastic model is validated using numerical data available in the literature and then utilized to analyze the microtab control capability on flutter instability case and divergence instability case. The effectiveness of the microtab is investigated with the scenarios of different output controllers and actuation deployments for both instability cases. The numerical results show that the microtab can effectively suppress both vibration modes with the appropriate choice of the output feedback controller.

  5. Numerical Investigation of Flapwise-Torsional Vibration Model of a Smart Section Blade with Microtab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nailu; Balas, Mark J.; Yang, Hua

    2015-01-01

    This study presents a method to develop an aeroelastic model of a smart section blade equipped with microtab. The model is suitable for potential passive vibration control study of the blade section in classic flutter. Equations of the model are described by the nondimensional flapwise and torsional vibration modes coupled with the aerodynamic model based on the Theodorsen theory and aerodynamic effects of the microtab based on the wind tunnel experimental data. The aeroelastic model is validated using numerical data available in the literature and then utilized to analyze the microtab control capability on flutter instability case and divergence instability case. The effectiveness of the microtab is investigated with the scenarios of different output controllers and actuation deployments for both instability cases. The numerical results show that the microtab can effectively suppress both vibration modes with the appropriate choice of the output feedback controller.

  6. Comparative exploration of multidimensional flow cytometry software: a model approach evaluating T cell polyfunctional behavior.

    PubMed

    Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E

    2017-08-01

    Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. © Society for Leukocyte Biology.

  7. Analyzing Power Supply and Demand on the ISS

    NASA Technical Reports Server (NTRS)

    Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve

    2006-01-01

    Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
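    The state-of-charge mode amounts to integrating a battery energy balance over a load and insolation profile. The sketch below shows that bookkeeping with illustrative numbers; it is not the Simplified Battery Solar Array Model itself.

      # Hedged sketch of state-of-charge bookkeeping: integrate battery charge/discharge
      # from a time-varying load against available solar power. All numbers, and the
      # constant-solar assumption during insolation, are illustrative only.
      import numpy as np

      dt_hours = 1/60                                      # 1-minute steps
      minutes = np.arange(0, 92)                           # roughly one orbit
      in_sun = minutes < 56                                # ~56 min insolation, ~36 min eclipse
      solar_power = np.where(in_sun, 75.0, 0.0)            # kW available from arrays
      load_power = np.full_like(solar_power, 50.0)         # kW demand profile

      capacity_kwh, soc = 120.0, 1.0
      soc_history = []
      for p_solar, p_load in zip(solar_power, load_power):
          net_kw = p_solar - p_load                        # surplus charges, deficit discharges
          soc = np.clip(soc + net_kw * dt_hours / capacity_kwh, 0.0, 1.0)
          soc_history.append(soc)

      print(f"minimum state of charge over the orbit: {min(soc_history):.2%}")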

  8. Metal-Ferroelectric-Semiconductor Field-Effect Transistor NAND Gate Switching Time Analysis

    NASA Technical Reports Server (NTRS)

    Phillips, Thomas A.; Macleod, Todd C.; Ho, Fat D.

    2006-01-01

    Previous research investigated the modeling of a NAND gate constructed of Metal-Ferroelectric-Semiconductor Field-Effect Transistors (MFSFETs) to obtain voltage transfer curves. The NAND gate was modeled using n-channel MFSFETs with positive polarization for the standard CMOS n-channel transistors and n-channel MFSFETs with negative polarization for the standard CMOS p-channel transistors. This paper investigates the MFSFET NAND gate switching time propagation delay, which is one of the other important parameters required to characterize the performance of a logic gate. Initially, the switching time of an inverter circuit was analyzed. The low-to-high and high-to-low propagation time delays were calculated. During the low-to-high transition, the negatively polarized transistor pulls up the output voltage, and during the high-to-low transition, the positively polarized transistor pulls down the output voltage. The MFSFETs were simulated by using a previously developed model which utilized a partitioned ferroelectric layer. Then the switching time of a 2-input NAND gate was analyzed similarly to the inverter gate. Extension of this technique to more complicated logic gates using MFSFETs will be studied.

  9. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters, and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of the results, and provides information that allows exploration of uncertainty at the process level and of how it might affect model output. We present an example using the vegetation model BIOME-BGC.
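    A minimal sketch of the process-level idea, assuming a toy model in which each "process" can be scaled by a multiplier: each process is perturbed in turn and the relative change in model output is recorded. The process names and equations are invented for illustration and are not BIOME-BGC.

    ```python
    import numpy as np

    def toy_model(scale):
        """Hypothetical process-based model with three interacting 'processes',
        each scaled by a multiplier (1.0 = nominal behaviour)."""
        photo = 10.0 * scale["photosynthesis"]
        resp = 4.0 * scale["respiration"] * np.sqrt(photo)
        growth = (photo - resp) * scale["allocation"]
        return growth  # model output of interest

    baseline = {"photosynthesis": 1.0, "respiration": 1.0, "allocation": 1.0}
    y0 = toy_model(baseline)

    # Process-level sensitivity: perturb one process at a time by +/-10 %
    # and record the relative change in model output.
    for name in baseline:
        for delta in (-0.1, 0.1):
            y = toy_model(dict(baseline, **{name: 1.0 + delta}))
            print(f"{name:15s} {delta:+.0%}: output change {100 * (y - y0) / y0:+.1f} %")
    ```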

  10. Applying Input-Output Model to Estimate Broader Economic Impact of Transportation Infrastructure Investment

    NASA Astrophysics Data System (ADS)

    Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.

    2016-09-01

    The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most commonly used tools for analyzing the linkage between transportation sectors and economic growth, but they offer few clues about the mechanisms linking transport improvements and broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, for the connected region (Bandung district) using an Input-Output model. The results show a decrease in freight transportation costs of 17 % and an increase of 1.2 % in Bandung District's GDP after the opening of the Cipularang tollway.
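    The calculation behind such an impact estimate is typically the Leontief quantity model, x = (I - A)^(-1) f. The sketch below shows that mechanics on a made-up three-sector economy; the coefficients and demand change are illustrative and are not the study's data.

    ```python
    import numpy as np

    # Illustrative 3-sector technical-coefficient matrix A (inputs per unit of output)
    # and a final-demand change attributed to the new infrastructure (made-up values).
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.08]])
    delta_f = np.array([50.0, 10.0, 5.0])  # exogenous change in final demand

    # Leontief inverse: total (direct + indirect) output required per unit of final demand.
    L = np.linalg.inv(np.eye(3) - A)
    delta_x = L @ delta_f                  # total output change across all sectors

    print("output multipliers (column sums of L):", L.sum(axis=0).round(3))
    print("total output change by sector:", delta_x.round(2))
    ```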

  11. Electric Propulsion System Modeling for the Proposed Prometheus 1 Mission

    NASA Technical Reports Server (NTRS)

    Fiehler, Douglas; Dougherty, Ryan; Manzella, David

    2005-01-01

    The proposed Prometheus 1 spacecraft would utilize nuclear electric propulsion to propel the spacecraft to its ultimate destination where it would perform its primary mission. As part of the Prometheus 1 Phase A studies, system models were developed for each of the spacecraft subsystems that were integrated into one overarching system model. The Electric Propulsion System (EPS) model was developed using data from the Prometheus 1 electric propulsion technology development efforts. This EPS model was then used to provide both performance and mass information to the Prometheus 1 system model for total system trades. Development of the EPS model is described, detailing both the performance calculations as well as its evolution over the course of Phase A through three technical baselines. Model outputs are also presented, detailing the performance of the model and its direct relationship to the Prometheus 1 technology development efforts. These EP system model outputs are also analyzed chronologically showing the response of the model development to the four technical baselines during Prometheus 1 Phase A.

  12. Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2015-01-01

    In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model has been used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size has been determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ -44% to -26%) than for D (≈ -16% to -2%). Our FE model could be useful to generate standard test images and to design realistic physical phantoms of LAA images for the assessment of the accuracy of descriptors for quantifying emphysema in CT imaging.
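    A simplified sketch of the two descriptors on a synthetic binary LAA image: RA as the fraction of pixels below a low-attenuation threshold, and D as the log-log slope of the cumulative cluster-size distribution. The -950 HU cutoff, the synthetic image, and the fitting details are assumptions for illustration, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    hu = rng.normal(-820, 60, size=(256, 256))   # synthetic lung CT values in HU
    laa = hu < -950                              # low-attenuation mask (assumed -950 HU cutoff)

    # Relative Area: fraction of lung pixels below the threshold.
    RA = laa.mean()

    # Label LAA clusters and estimate D from the log-log slope of the cumulative
    # distribution of cluster sizes, Y(s) = P(size > s) ~ s^(-D).
    labels, n = ndimage.label(laa)
    sizes = np.sort(ndimage.sum(laa, labels, index=range(1, n + 1)))
    s = np.unique(sizes)
    Y = np.array([(sizes > v).mean() for v in s])
    keep = Y > 0
    D = -np.polyfit(np.log(s[keep]), np.log(Y[keep]), 1)[0]

    print(f"RA = {RA:.3f}, D = {D:.2f}")
    ```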

  13. USEEIO: a New and Transparent United States ...

    EPA Pesticide Factsheets

    National-scope environmental life cycle models of goods and services may be used for many purposes, not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f

  14. Signals and circuits in the purkinje neuron.

    PubMed

    Abrams, Zéev R; Zhang, Xiang

    2011-01-01

    Purkinje neurons (PN) in the cerebellum have over 100,000 inputs organized in an orthogonal geometry, and a single output channel. As the sole output of the cerebellar cortex layer, their complex firing pattern has been associated with motor control and learning. As such they have been extensively modeled and measured using tools ranging from electrophysiology and neuroanatomy, to dynamic systems and artificial intelligence methods. However, there is an alternative approach to analyze and describe the neuronal output of these cells using concepts from electrical engineering, particularly signal processing and digital/analog circuits. By viewing the PN as an unknown circuit to be reverse-engineered, we can use the tools that provide the foundations of today's integrated circuits and communication systems to analyze the Purkinje system at the circuit level. We use Fourier transforms to analyze and isolate the inherent frequency modes in the PN and define three unique frequency ranges associated with the cells' output. Comparing the PN to a signal generator that can be externally modulated adds an entire level of complexity to the functional role of these neurons both in terms of data analysis and information processing, relying on Fourier analysis methods in place of statistical ones. We also re-describe some of the recent literature in the field, using the nomenclature of signal processing. Furthermore, by comparing the experimental data of the past decade with basic electronic circuitry, we can resolve the outstanding controversy in the field, by recognizing that the PN can act as a multivibrator circuit.
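    A toy version of the Fourier viewpoint: bin a (synthetic) spike train, take its power spectrum, and read off the dominant frequency mode. The spike train below is a rate-modulated Poisson surrogate, not Purkinje data.

    ```python
    import numpy as np

    fs = 1000.0                                    # bin the spike train at 1 kHz
    t = np.arange(0, 20, 1 / fs)                   # 20 s of activity
    rate = 40 + 20 * np.sin(2 * np.pi * 1.5 * t)   # firing rate slowly modulated at 1.5 Hz
    spikes = (np.random.default_rng(2).random(t.size) < rate / fs).astype(float)

    # Power spectrum of the binned, mean-subtracted spike train: the peak isolates
    # the 1.5 Hz rate-modulation mode riding on broadband point-process noise.
    spec = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2
    freqs = np.fft.rfftfreq(spikes.size, d=1 / fs)

    print(f"dominant frequency component: {freqs[np.argmax(spec)]:.2f} Hz")
    ```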

  15. Efficiency measurement and the operationalization of hospital production.

    PubMed

    Magnussen, J

    1996-04-01

    To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. Norwegian hospitals over the three-year period 1989-1991. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals are compared across models using various distribution-free tests. Input and output data are collected by the Norwegian Central Bureau of Statistics. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile.

  16. Analysis of tribological behaviour of zirconia reinforced Al-SiC hybrid composites using statistical and artificial neural network technique

    NASA Astrophysics Data System (ADS)

    Arif, Sajjad; Tanwir Alam, Md; Ansari, Akhter H.; Bilal Naim Shaikh, Mohd; Arif Siddiqui, M.

    2018-05-01

    The tribological performance of aluminium hybrid composites reinforced with micro SiC (5 wt%) and nano zirconia (0, 3, 6 and 9 wt%), fabricated through a powder metallurgy technique, was investigated using statistical and artificial neural network (ANN) approaches. The influence of zirconia reinforcement, sliding distance and applied load was analyzed with tests based on a full factorial design of experiments. Analysis of variance (ANOVA) was used to evaluate the percentage contribution of each process parameter to wear loss. The ANOVA approach suggested that wear loss is mainly influenced by sliding distance, followed by zirconia reinforcement and applied load. Further, a feed-forward back-propagation neural network was applied to the input/output data for predicting and analyzing the wear behaviour of the fabricated composite. A very close correlation between experimental and ANN outputs was achieved by implementing the model. Finally, the ANN model was effectively used to find the influence of the various control factors on the wear behaviour of the hybrid composites.
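    A minimal sketch of a feed-forward network mapping (zirconia wt%, load, sliding distance) to wear loss, using scikit-learn in place of the authors' back-propagation implementation; the training data are synthetic.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # Hypothetical design matrix: zirconia wt%, applied load (N), sliding distance (m).
    X = np.column_stack([
        rng.choice([0, 3, 6, 9], 120),
        rng.choice([10, 20, 30], 120),
        rng.choice([500, 1000, 1500], 120),
    ])
    # Synthetic wear loss: grows with load and distance, drops with zirconia content.
    y = 0.002 * X[:, 1] * X[:, 2] / (1 + 0.15 * X[:, 0]) + rng.normal(0, 0.5, 120)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print("training R^2:", round(model.score(X, y), 3))
    print("predicted wear at 6 wt% ZrO2, 20 N, 1000 m:", model.predict([[6, 20, 1000]]).round(2))
    ```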

  17. Reduced-order modeling for hyperthermia control.

    PubMed

    Potocki, J K; Tharp, H S

    1992-12-01

    This paper analyzes the feasibility of using reduced-order modeling techniques in the design of multiple-input, multiple-output (MIMO) hyperthermia temperature controllers. State space thermal models are created based upon a finite difference expansion of the bioheat transfer equation model of a scanned focused ultrasound system (SFUS). These thermal state space models are reduced using the balanced realization technique, and an order reduction criterion is tabulated. Results show that a drastic reduction in model dimension can be achieved using the balanced realization. The reduced-order model is then used to design a reduced-order optimal servomechanism controller for a two-scan input, two thermocouple output tissue model. In addition, a full-order optimal servomechanism controller is designed for comparison and validation purposes. These two controllers are applied to a variety of perturbed tissue thermal models to test the robust nature of the reduced-order controller. A comparison of the two controllers validates the use of open-loop balanced reduced-order models in the design of MIMO hyperthermia controllers.
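    A sketch of balanced-realization model reduction using the python-control package (its balred routine may require the slycot backend); the high-order system below is a random stable stand-in for the finite-difference bioheat model, with 2 scan inputs and 2 thermocouple outputs.

    ```python
    import numpy as np
    import control

    # Stand-in for a high-order finite-difference thermal model: a random stable
    # state-space system with 2 inputs and 2 outputs (the real model would come
    # from discretizing the bioheat transfer equation).
    full = control.rss(states=60, outputs=2, inputs=2)

    # Balanced truncation to a 6-state reduced-order model.
    reduced = control.balred(full, orders=6, method='truncate')

    # Compare step responses of the full and reduced models for the first input.
    t = np.linspace(0, 10, 500)
    _, y_full = control.step_response(full, t, input=0)
    _, y_red = control.step_response(reduced, t, input=0)
    print("max step-response error (output 1):", np.max(np.abs(y_full[0] - y_red[0])))
    ```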

  18. Restrictive Factors and Output Forecast of Green Development of Agricultural Industry Based on Gray System

    NASA Astrophysics Data System (ADS)

    Sun, Fengru

    2018-01-01

    This paper analyzes the characteristics of agricultural products in the Lanzhou area from the perspectives of agricultural production, farmers' income, adjustment of the agricultural structure and environmental improvement. Through data mining and empirical analysis, a regional agricultural gray-system forecasting model with dynamic data processing is built; combined with the lily output data for 2004-2003, the yield is forecast with a good fit and a small error. Finally, with reference to the relevant characteristics of the local agricultural industry, it is recommended to move away from a mindset centered on conventional agricultural production and to organically combine characteristic agriculture with agricultural industrialization, so as to achieve efficient, industrialized production of characteristic agricultural products.
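    Assuming the gray-system model referred to is the standard GM(1,1), the sketch below fits it to a short yield series and extrapolates a few steps ahead; the series values are illustrative, not the Lanzhou lily data.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps):
        """Fit a GM(1,1) grey model to the series x0 and forecast `steps` values ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                           # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coefficient a, grey input b
        k = np.arange(len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse accumulation
        x0_hat[0] = x0[0]
        return x0_hat

    # Illustrative annual output series (arbitrary units), then a 3-step-ahead forecast.
    series = [12.1, 13.0, 14.2, 15.1, 16.4, 17.8]
    print(np.round(gm11_forecast(series, steps=3), 2))
    ```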

  19. Interference of qubits in pure dephasing and almost pure dephasing environments

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Mierzejewski, Marcin; Dajka, Jerzy

    2015-07-01

    Two-path interference of quantum particles with internal spin (qubits) interacting on one arm of the interferometer with bosonic environment is studied. It is assumed that the energy exchange between the qubit and its environment is either absent, which is a pure dephasing (decoherence) model, or very weak. Both the amplitude and the position of maximum of an output intensity discussed as a function of a phase shift can serve as a quantifier of parameters describing coupling between qubit and its environment. The time evolution of the qubit-environment system is analyzed in the Schrödinger picture and the output intensity for qubit-environment interaction close to pure decoherence is analyzed by means of perturbation theory. Quality of the applied approximation is verified by comparison with numerical results.

  20. Security Analysis of Measurement-Device-Independent Quantum Key Distribution in Collective-Rotation Noisy Environment

    NASA Astrophysics Data System (ADS)

    Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian

    2018-01-01

    Noise is a problem that communication channels cannot avoid. It is, thus, beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information theory methods are used to analyze the security of the protocol. The maximum amount of information that Eve can eavesdrop is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% in the presence of eavesdropping. Moreover, the probability that the relay outputs successful results is higher in the presence of eavesdropping than without it. The paper validates that the MDI-QKD protocol has better robustness.

  1. Evaluate and Analysis Efficiency of Safaga Port Using DEA-CCR, BCC and SBM Models-Comparison with DP World Sokhna

    NASA Astrophysics Data System (ADS)

    Elsayed, Ayman; Shabaan Khalil, Nabil

    2017-10-01

    The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze the factors that may significantly affect the efficiency of Safaga port in Egypt (particularly its infrastructural capacity). The assessment of such efficiency must play an important role in the management of Safaga port in order to improve its prospects for development and commercial success. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a way of assessing the comparative efficiency of Safaga port in Egypt during the study period 2004-2013. Previous research on port efficiency measurement has usually used radial DEA models (DEA-CCR, DEA-BCC) but not non-radial DEA models. This research applies radial, output-oriented models (DEA-CCR, DEA-BCC) and a non-radial model (DEA-SBM) with ten inputs and four outputs. The results were obtained from the analysis of the input and output variables under the DEA-CCR, DEA-BCC and SBM models using the software MaxDEA Pro 6.3. DP World Sokhna port showed higher efficiency than Safaga port for all outputs. DP World Sokhna's position just below the southern entrance to the Suez Canal, on the Red Sea in Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.
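    A sketch of the envelopment form of the output-oriented CCR model, solved with SciPy's linear programming routine rather than MaxDEA; the three-port input/output table is made up and much smaller than the study's ten-input, four-output dataset.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical port data: rows = decision-making units (ports).
    X = np.array([[12, 300, 5],          # inputs, e.g. berths, quay length (m), cranes
                  [10, 280, 6],
                  [15, 350, 4]], dtype=float)
    Y = np.array([[400, 20],             # outputs, e.g. throughput, vessel calls
                  [520, 25],
                  [380, 18]], dtype=float)
    n, m = X.shape
    s = Y.shape[1]

    def ccr_output_oriented(o):
        """max phi s.t. sum(lam*x) <= x_o and sum(lam*y) >= phi*y_o, lam >= 0."""
        c = np.r_[-1.0, np.zeros(n)]                 # variables: [phi, lam_1..lam_n]
        A_in = np.hstack([np.zeros((m, 1)), X.T])    # input constraints
        A_out = np.hstack([Y[o][:, None], -Y.T])     # phi*y_o - sum(lam*y) <= 0
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[X[o], np.zeros(s)],
                      bounds=[(0, None)] * (n + 1), method="highs")
        return 1.0 / res.x[0]                        # CCR efficiency score in (0, 1]

    for o in range(n):
        print(f"port {o}: efficiency = {ccr_output_oriented(o):.3f}")
    ```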

  2. A Comparison of Neural Networks and Fuzzy Logic Methods for Process Modeling

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Sala, Dorel M.; Berke, Laszlo

    1996-01-01

    The goal of this work was to analyze the potential of neural networks and fuzzy logic methods to develop approximate response surfaces for process modeling, that is, for mapping inputs to outputs. Structural response was chosen as an example. Each of the many methods surveyed is explained and the results are presented. Future research directions are also discussed.

  3. [Decomposition model of energy-related carbon emissions in tertiary industry for China].

    PubMed

    Lu, Yuan-Qing; Shi, Jun

    2012-07-01

    Tertiary industry has developed in recent years, and it is very important to identify the factors that influence energy-related carbon emissions in tertiary industry. A decomposition model of energy-related carbon emissions for China is set up by adopting the logarithmic mean weight Divisia method based on the identity of carbon emissions. The model is adopted to analyze the influence of energy structure, energy efficiency, tertiary industry structure and economic output on energy-related carbon emissions in China from 2000 to 2009. Results show that the contribution rate of economic output and energy structure to energy-related carbon emissions increases year by year. Both energy efficiency and the tertiary industry structure restrain energy-related carbon emissions; however, this restraining effect is weakening.
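    A sketch of an additive log-mean Divisia (LMDI) decomposition in the spirit of the model described, splitting the emission change into output, energy-efficiency and energy-structure effects; the two-fuel data are invented and the factor set is simplified relative to the paper.

    ```python
    import numpy as np

    def logmean(a, b):
        """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

    # Illustrative data for two fuels in years 0 and T.
    E0, ET = np.array([60.0, 40.0]), np.array([65.0, 55.0])   # energy use by fuel
    Q0, QT = 100.0, 140.0                                      # economic output
    f = np.array([0.9, 0.7])                                   # fixed emission factors
    C0, CT = f * E0, f * ET                                    # emissions by fuel

    # Identity C_i = Q * (E/Q) * (E_i/E) * f_i  ->  output, efficiency, structure effects.
    w = logmean(CT, C0)
    d_output    = np.sum(w * np.log(QT / Q0))
    d_intensity = np.sum(w * np.log((ET.sum() / QT) / (E0.sum() / Q0)))
    d_structure = np.sum(w * np.log((ET / ET.sum()) / (E0 / E0.sum())))

    print("total emission change:", round(CT.sum() - C0.sum(), 2))
    print("sum of effects       :", round(d_output + d_intensity + d_structure, 2),
          {"output": round(d_output, 2), "efficiency": round(d_intensity, 2),
           "structure": round(d_structure, 2)})
    ```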

  4. Modeling and analysis on ring-type piezoelectric transformers.

    PubMed

    Ho, Shine-Tzong

    2007-11-01

    This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, the vibration characteristics of a piezoelectric ring with free boundary conditions are analyzed in advance. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system of the transformer. Quantities such as the voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency of the PT are presented in this paper. Experiments were also conducted to verify the theoretical analysis, and good agreement was obtained.

  5. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges in providing an on-demand visualization experience to our GFDL users.

  6. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  7. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level was used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  8. Dose uniformity analysis among ten 16-slice same-model CT scanners.

    PubMed

    Erdi, Yusuf Emre

    2012-01-01

    With the introduction of multislice scanners, computed tomographic (CT) dose optimization has become important. The patient-absorbed dose may differ among the scanners although they are the same type and model. To investigate the dose output variation of the CT scanners, we designed the study to analyze dose outputs of 10 same-model CT scanners using 3 clinical protocols. Ten GE Lightspeed (GE Healthcare, Waukesha, Wis) 16-slice scanners located at main campus and various satellite locations of our institution have been included in this study. All dose measurements were performed using poly (methyl methacrylate) (PMMA) head (diameter, 16 cm) and body (diameter, 32 cm) phantoms manufactured by Radcal (RadCal Corp, Monrovia, Calif) using a 9095 multipurpose analyzer with 10 × 9-3CT ion chamber both from the same manufacturer. Ion chamber is inserted into the peripheral and central axis locations and volume CT dose index (CTDIvol) is calculated as weighted average of doses at those locations. Three clinical protocol settings for adult head, high-resolution chest, and adult abdomen are used for dose measurements. We have observed up to 9.4% CTDIvol variation for the adult head protocol in which the largest variation occurred among the protocols. However, head protocol uses higher milliampere second values than the other 2 protocols. Most of the measured values were less than the system-stored CTDIvol values. It is important to note that reduction in dose output from tubes as they age is expected in addition to the intrinsic radiation output fluctuations of the same scanner. Although the same model CT scanners were used in this study, it is possible to see CTDIvol variation in standard patient scanning protocols of head, chest, and abdomen. The compound effect of the dose variation may be larger with higher milliampere and multiphase and multilocation CT scans.

  9. Estimates of Embodied Global Energy and Air-Emission Intensities of Japanese Products for Building a Japanese Input–Output Life Cycle Assessment Database with a Global System Boundary

    PubMed Central

    2012-01-01

    To build a life cycle assessment (LCA) database of Japanese products embracing their global supply chains in a manner requiring lower time and labor burdens, this study estimates the intensity of embodied global environmental burden for commodities produced in Japan. The intensity of embodied global environmental burden is a measure of the environmental burden generated globally by unit production of the commodity and can be used as life cycle inventory data in LCA. The calculation employs an input–output LCA method with a global link input–output model that defines a global system boundary grounded in a simplified multiregional input–output framework. As results, the intensities of embodied global environmental burden for 406 Japanese commodities are determined in terms of energy consumption, greenhouse-gas emissions (carbon dioxide, methane, nitrous oxide, perfluorocarbons, hydrofluorocarbons, sulfur hexafluoride, and their summation), and air-pollutant emissions (nitrogen oxide and sulfur oxide). The uncertainties in the intensities of embodied global environmental burden attributable to the simplified structure of the global link input–output model are quantified using Monte Carlo simulation. In addition, by analyzing the structure of the embodied global greenhouse-gas intensities we characterize Japanese commodities in the context of LCA embracing global supply chains. PMID:22881452

  10. pong: fast analysis and visualization of latent clusters in population genetic data.

    PubMed

    Behr, Aaron A; Liu, Katherine Z; Liu-Fang, Gracie; Nakka, Priyanka; Ramachandran, Sohini

    2016-09-15

    A series of methods in population genetics use multilocus genotype data to assign individuals membership in latent clusters. These methods belong to a broad class of mixed-membership models, such as latent Dirichlet allocation used to analyze text corpora. Inference from mixed-membership models can produce different output matrices when repeatedly applied to the same inputs, and the number of latent clusters is a parameter that is often varied in the analysis pipeline. For these reasons, quantifying, visualizing, and annotating the output from mixed-membership models are bottlenecks for investigators across multiple disciplines from ecology to text data mining. We introduce pong, a network-graphical approach for analyzing and visualizing membership in latent clusters with a native interactive D3.js visualization. pong leverages efficient algorithms for solving the Assignment Problem to dramatically reduce runtime while increasing accuracy compared with other methods that process output from mixed-membership models. We apply pong to 225 705 unlinked genome-wide single-nucleotide variants from 2426 unrelated individuals in the 1000 Genomes Project, and identify previously overlooked aspects of global human population structure. We show that pong outpaces current solutions by more than an order of magnitude in runtime while providing a customizable and interactive visualization of population structure that is more accurate than those produced by current tools. pong is freely available and can be installed using the Python package management system pip. pong's source code is available at https://github.com/abehr/pong. Contact: aaron_behr@alumni.brown.edu or sramachandran@brown.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  11. Utilizing Physical Input-Output Model to Inform Nitrogen related Ecosystem Services

    EPA Science Inventory

    Here we describe the development of nitrogen PIOTs for the midwestern US state of Illinois with large inputs of nitrogen from agriculture and industry. The PIOTs are used to analyze the relationship between regional economic activities and ecosystem services in order to identify...

  12. A Comparison of Software Schedule Estimators

    DTIC Science & Technology

    1990-09-01

    The models analyzed were REVIC, PRICE-S, System-4, SPQR/20, and SEER; their outputs were compared with the actual schedules experienced on the projects.

  13. Appliance of Independent Component Analysis to System Intrusion Analysis

    NASA Astrophysics Data System (ADS)

    Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji

    In order to analyze the output of the intrusion detection system and the firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for the evaluation of intrusion analysis methods. The simulator consists of the network model of an information system, the service model and the vulnerability model of each server, and the action models performed on the client and by the intruder. We applied ICA to analyze the audit trail of the simulated information system. We report the evaluation results of ICA on intrusion analysis. In the simulated case, ICA separated two attacks correctly and related an attack to the abnormalities of a normal application produced under the influence of the attack.
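    To illustrate the separation step, the sketch below applies scikit-learn's FastICA to synthetic mixtures of two latent activity patterns; the "normal traffic" and "attack burst" sources and the mixing matrix are invented stand-ins for real audit-trail features.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(4)
    t = np.linspace(0, 8, 2000)

    # Two hypothetical latent activity patterns (smooth "normal" activity, sparse bursts).
    s1 = np.sin(2 * np.pi * 0.5 * t)
    s2 = (np.sin(2 * np.pi * 3 * t) > 0.95).astype(float)
    S = np.column_stack([s1, s2]) + 0.05 * rng.standard_normal((t.size, 2))

    # Observed only through mixed audit-trail features (unknown mixing matrix A).
    A = np.array([[1.0, 0.6], [0.4, 1.2], [0.8, 0.3]])
    X = S @ A.T                                    # 3 observed feature streams

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                   # recovered independent components

    corr = np.corrcoef(S.T, S_est.T)[:2, 2:]
    print("|correlation| between true and recovered sources:\n", np.abs(corr).round(2))
    ```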

  14. Power combining in an array of microwave power rectifiers

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1979-01-01

    This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.

  15. Data-based virtual unmodeled dynamics driven multivariable nonlinear adaptive switching control.

    PubMed

    Chai, Tianyou; Zhang, Yajun; Wang, Hong; Su, Chun-Yi; Sun, Jing

    2011-12-01

    For a complex industrial system, its multivariable and nonlinear nature generally make it very difficult, if not impossible, to obtain an accurate model, especially when the model structure is unknown. The control of this class of complex systems is difficult to handle by the traditional controller designs around their operating points. This paper, however, explores the concepts of controller-driven model and virtual unmodeled dynamics to propose a new design framework. The design consists of two controllers with distinct functions. First, using input and output data, a self-tuning controller is constructed based on a linear controller-driven model. Then the output signals of the controller-driven model are compared with the true outputs of the system to produce so-called virtual unmodeled dynamics. Based on the compensator of the virtual unmodeled dynamics, the second controller based on a nonlinear controller-driven model is proposed. Those two controllers are integrated by an adaptive switching control algorithm to take advantage of their complementary features: one offers stabilization function and another provides improved performance. The conditions on the stability and convergence of the closed-loop system are analyzed. Both simulation and experimental tests on a heavily coupled nonlinear twin-tank system are carried out to confirm the effectiveness of the proposed method.

  16. The Sylview graphical interface to the SYLVAN STAND STRUCTURE model with examples from southern bottomland hardwood forests

    Treesearch

    David R. Larsen; Ian Scott

    2010-01-01

    In the field of forestry, the output of forest growth models provides a wealth of detailed information that can often be difficult to analyze and perceive due to presentation either as plain-text summary tables or static stand visualizations. This paper describes the design and implementation of a cross-platform computer application for dynamic and interactive forest...

  17. Assessing the impacts of climate change in Mediterranean catchments under conditions of data scarcity

    NASA Astrophysics Data System (ADS)

    Meyer, Swen; Ludwig, Ralf

    2013-04-01

    According to current climate projections, Mediterranean countries are at high risk for an even more pronounced susceptibility to changes in the hydrological budget and extremes. While there is scientific consensus that climate-induced changes on the hydrology of Mediterranean regions are presently occurring and are projected to amplify in the future, very little knowledge is available about the quantification of these changes, which is hampered by a lack of suitable and cost-effective hydrological monitoring and modeling systems. The European FP7 project CLIMB aims to analyze climate-induced changes on the hydrology of Mediterranean basins by investigating 7 test sites located in Italy, France, Turkey, Tunisia, Gaza and Egypt. CLIMB employs a combination of novel geophysical field monitoring concepts, remote sensing techniques and integrated hydrologic modeling to improve process descriptions and understanding and to quantify existing uncertainties in climate change impact analysis. The Rio Mannu Basin, located in Sardinia, Italy, is one test site of the CLIMB project. The catchment has a size of 472.5 km2 and ranges from 62 to 946 meters in elevation; at a mean annual temperature of 16°C and precipitation of about 700 mm, the annual runoff volume is about 200 mm. The physically based Water Simulation Model WaSiM Vers. 2 (Schulla & Jasper (1999)) was set up to model current and projected future hydrological conditions. The availability of measured meteorological and hydrological data is poor, as is common in many Mediterranean catchments. The lack of available measured input data hampers the calibration of the model setup and the validation of model outputs. State-of-the-art remote sensing and field measuring techniques were applied to improve the quality of hydrological input parameters. In a field campaign about 250 soil samples were collected and lab-analyzed. Different geostatistical regionalization methods were tested to improve the model setup. The soil parameterization of the model was tested against publicly available soil data. Results show a significant improvement of modeled soil moisture outputs. To validate WaSiM's evapotranspiration (ETact) outputs, Landsat TM images were used to calculate the actual monthly mean ETact rates using the triangle method (Jiang and Islam, 1999). Simulated spatial ETact patterns and those derived from remote sensing show a good fit, especially for the growing season. WaSiM was driven with the meteorological forcing taken from 4 different ENSEMBLES climate projections for a reference (1971-2000) and a future (2041-2070) time series. Output results were analyzed for climate-induced changes on selected hydrological variables. While the climate projections reveal increased precipitation rates in the spring season, first simulation results show an earlier onset and an increased duration of the dry season, imposing an increased irrigation demand and a higher vulnerability of agricultural productivity.

  18. The Watershed Deposition Tool: A Tool for Incorporating Atmospheric Deposition in Watershed Analysis

    EPA Science Inventory

    The tool for providing the linkage between air and water quality modeling needed for determining the Total Maximum Daily Load (TMDL) and for analyzing related nonpoint-source impacts on watersheds has been developed. The Watershed Deposition Tool (WDT) takes gridded output of at...

  19. Analyzing the impact of the Firefly Trail on economic development in northeast Georgia : final report.

    DOT National Transportation Integrated Search

    2016-10-01

    This research report contains the findings of the analysis undertaken to measure the economic impact of the proposed Firefly Trail on the local economy. An input-output model was constructed to study the economic impact of the project on the local ec...

  20. Information visualization of the minority game

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Herbert, R. D.; Webber, R.

    2008-02-01

    Many dynamical systems produce large quantities of data. How can the system be understood from the output data? Often people are simply overwhelmed by the data. Traditional tools such as tables and plots are often not adequate, and new techniques are needed to help people to analyze the system. In this paper, we propose the use of two spacefilling visualization tools to examine the output from a complex agent-based financial model. We measure the effectiveness and performance of these tools through usability experiments. Based on the experimental results, we develop two new visualization techniques that combine the advantages and discard the disadvantages of the information visualization tools. The model we use is an evolutionary version of the Minority Game which simulates a financial market.

  1. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.

  2. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2018-05-01

    The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, simultaneously achieving satisfactory values of all these statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained from a simple model average, as well as to any single GCM output. The improvements in the statistics vary across climatic regions of China. Future projection (2006-2100) with the proposed ensemble method indicates that the largest (smallest) temperature changes will happen in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China) under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.

  3. Current Situation Survey of Garbage Management in rural areas of Heilongjiang province

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Wang, Shuai; Zhao, Yufeng

    2018-03-01

    This paper surveys 120 administrative villages and counts the output, distribution characteristics, composition and treatment models of rural garbage at the present stage. The research shows that the composition of rural garbage is very complicated; the total annual output of rural garbage is 5 295 600 tonnes, and the daily per-capita output of household garbage is 0.8925 kg. Based on the situation of Heilongjiang Province, this paper analyzes the main problems in garbage disposal and presents some control measures; this research provides basic data and a reference for subsequent treatment. A significant new finding of the research is that the rational governance path for garbage is first classification, then recycling, and finally harmless treatment.

  4. Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.

    PubMed

    Okubo, T; Shibata, H; Takishima, T

    1983-07-01

    By means of a mathematical model, we have studied a way to correct for the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns, which were constant, exponential and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. the lag time only or the lag time plus the 50% response time, gives an overestimation, and a correction larger than this results in underestimation. The magnitude of the error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curves do not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at relatively fast flow rates.

  5. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
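    A small snippet showing the NRMSE metric used to compare forecast and observed power; normalizing by the 51-kW plant capacity is an assumption, and the hourly values are made up.

    ```python
    import numpy as np

    def nrmse(forecast, observed, norm):
        """Root-mean-square error normalized by `norm` (here, plant capacity)."""
        forecast, observed = np.asarray(forecast, float), np.asarray(observed, float)
        return np.sqrt(np.mean((forecast - observed) ** 2)) / norm

    capacity_kw = 51.0
    observed = np.array([0.0, 5.2, 18.4, 33.1, 41.0, 37.5, 22.8, 6.1])   # hourly kW (made up)
    forecast = np.array([0.0, 6.0, 16.9, 35.0, 43.2, 35.1, 24.5, 5.0])

    print(f"NRMSE = {nrmse(forecast, observed, capacity_kw):.2%} of plant capacity")
    ```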

  6. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.

  7. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
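    A toy leaky integrate-and-fire simulation with Poisson excitatory and inhibitory inputs, computing the coefficient of variation (CV) of the interspike intervals, the variability statistic discussed above; the parameters are illustrative and this is not one of the renewal- or fractional-Gaussian-noise-driven models analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt, T = 1e-4, 30.0                        # 0.1 ms steps, 30 s of simulated time
    tau, v_th, v_reset = 0.02, 1.0, 0.0       # membrane time constant (s), threshold, reset
    rate_e, rate_i = 3500.0, 1250.0           # total excitatory / inhibitory input rates (Hz)
    w_e, w_i = 0.02, -0.02                    # synaptic weights (in units of threshold)

    v, isis, last = 0.0, [], 0.0
    for step in range(int(T / dt)):
        # Poisson input counts this step, leaky integration, threshold-and-reset.
        n_e = rng.poisson(rate_e * dt)
        n_i = rng.poisson(rate_i * dt)
        v += -v * dt / tau + w_e * n_e + w_i * n_i
        if v >= v_th:
            t_now = step * dt
            isis.append(t_now - last)         # store the interspike interval
            last, v = t_now, v_reset

    isi = np.array(isis[1:])
    cv = isi.std() / isi.mean()
    print(f"{isi.size} ISIs, mean = {isi.mean() * 1e3:.1f} ms, CV = {cv:.2f}")
    ```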

  8. Evaluation of Oxygen Concentrators and Chemical Oxygen Generators at Altitude and Temperature Extremes

    DTIC Science & Technology

    2015-04-22

    ceased. Oxygen concentration was continuously measured with a fast laser diode oxygen analyzer (O2CAP, Oxigraf, Inc., Mountain View, CA) throughout the...duration of operation. The output generated from the COGs was analyzed by a gas mass spectrometer (QGA model HAS 301, Hiden Analytical, Livonia, MI...throughout the range of bolus volumes with each device at respiratory rates of 20 and 30 breaths /min with each bolus setting. Data were recorded every

  9. A general method to analyze the thermal performance of multi-cavity concentrating solar power receivers

    DOE PAGES

    Fleming, Austin; Folsom, Charles; Ban, Heng; ...

    2015-11-13

    Concentrating solar power (CSP) with thermal energy storage has the potential to provide grid-scale, on-demand, dispatchable renewable energy. As higher solar receiver output temperatures are necessary for higher thermal cycle efficiency, current CSP research is focused on high outlet temperature and high efficiency receivers. Here, the objective of this study is to provide a simplified model to analyze the thermal efficiency of multi-cavity concentrating solar power receivers.

  10. The future of climate science analysis in a coming era of exascale computing

    NASA Astrophysics Data System (ADS)

    Bates, S. C.; Strand, G.

    2013-12-01

    Projections of Community Earth System Model (CESM) output based on the growth of data archived over 2000-2012 at all of our computing sites (NCAR, NERSC, ORNL) show that we can expect to reach 1,000 PB (1 EB) sometime in the next decade or so. The current paradigms of using site-based archival systems to hold these data that are then accessed via portals or gateways, downloading the data to a local system, and then processing/analyzing the data will be irretrievably broken before then. From a climate modeling perspective, the expertise involved in making climate models themselves efficient on HPC systems will need to be applied to the data as well - providing fast parallel analysis tools co-resident in memory with the data, because the disk I/O bandwidth simply will not keep up with the expected arrival of exaflop systems. The ability of scientists, analysts, stakeholders and others to use climate model output to turn these data into understanding and knowledge will require significant advances in the current typical analysis tools and packages to enable these processes for these vast volumes of data. Allowing data users to enact their own analyses on model output is virtually a requirement as well - climate modelers cannot anticipate all the possibilities for analysis that users may want to do. In addition, the expertise of data scientists and their knowledge of the model output and of best practices in data management (metadata, curation, provenance and so on) will need to be rewarded and exploited to gain the most understanding possible from these volumes of data. In response to growing data size, demand, and future projections, the CESM output has undergone a structural evolution and the data management plan has been reevaluated and updated. The major evolution of the CESM data structure is presented here, along with the CESM experience and role within CMIP3 and CMIP5.

  11. On the neural modeling of some dynamic parameters of earthquakes and fire safety in high-rise construction

    NASA Astrophysics Data System (ADS)

    Haritonova, Larisa

    2018-03-01

    The paper presents the recent change in the relative numbers of man-made and natural catastrophes and proposes recommendations for increasing firefighting efficiency in high-rise buildings. The article analyzes the methodology of modeling seismic effects and demonstrates the promise of applying neural modeling and artificial neural networks to the analysis of such dynamic parameters of earthquake foci as the dislocation value (the average rupture slip). Two input signals were used: the power class and the number of earthquakes. A regression analysis was carried out for the predicted results and the target outputs. The regression equations for the outputs and targets are presented, along with the correlation coefficients for training, validation, testing, and the total (All) set for the 2-5-5-1 network structure used for the average rupture slip. Applying the results obtained in the article to the seismic design of newly constructed buildings and structures, together with the given recommendations, will provide additional protection from fire and earthquake risks and reduce their negative economic and environmental consequences.
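
    As a concrete illustration of the 2-5-5-1 network structure described above (two inputs, two hidden layers of five neurons, one output), the sketch below fits a small feed-forward regression network and reports the correlation coefficient R between outputs and targets. It uses scikit-learn, and the training data are synthetic placeholders, not the earthquake catalogue used in the paper.

```python
# Sketch of a 2-5-5-1 feed-forward regression network (two inputs: power
# class and number of earthquakes; one output: average rupture slip).
# The data here are synthetic placeholders, not the paper's catalogue.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 200
power_class = rng.uniform(8, 17, n)          # hypothetical energy class
n_events = rng.integers(1, 50, n)            # hypothetical event counts
X = np.column_stack([power_class, n_events])
# synthetic "average rupture slip" target with noise (illustrative only)
y = 0.05 * np.exp(0.3 * (power_class - 8)) + 0.01 * n_events + rng.normal(0, 0.1, n)

net = MLPRegressor(hidden_layer_sizes=(5, 5), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)
pred = net.predict(X)

# regression of outputs on targets, as in the paper's evaluation
r = np.corrcoef(pred, y)[0, 1]
slope, intercept = np.polyfit(y, pred, 1)
print(f"R = {r:.3f}, output = {slope:.2f}*target + {intercept:.2f}")
```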

  12. Complex Dynamics in a Triopoly Game with Multiple Delays in the Competition of Green Product Level

    NASA Astrophysics Data System (ADS)

    Si, Fengshan; Ma, Junhai

    Research on the output game behavior of oligopolies has greatly advanced in recent years. But many unknowns remain, particularly the influence of consumers’ willingness to buy green products on the oligopoly output game. This paper constructs a triopoly output game model with multiple delays in the competition of green products. The influence of the parameters on the stability and complexity of the system is studied by analyzing the existence and local asymptotic stability of the equilibrium point. It is found that the system loses stability and becomes more complex if the delay parameters exceed a certain range. In an unstable or chaotic game market, the decisions of the oligopolists become counterproductive. It is also observed that the influence of a firm's weight and output adjustment speed on the firm itself is clearly stronger than their influence on the other firms. In addition, weights and output adjustment speeds cannot be increased indefinitely, otherwise they bring unnecessary losses to the firm. Finally, chaos control is realized by using the variable feedback control method. The research results of this paper can serve as a reference for output decision-making by oligopolistic firms.
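
    The delayed triopoly model itself is not reproduced here, but the basic mechanism the abstract describes (stability lost as the output adjustment speed grows) can be illustrated with a generic bounded-rationality output-adjustment map under assumed linear demand and constant marginal cost. The demand and cost parameters and the adjustment speeds below are illustrative assumptions, not the paper's calibration.

```python
# Generic bounded-rationality output-adjustment map (not the paper's delayed
# triopoly model): q_i(t+1) = q_i(t) + v * q_i(t) * dPi_i/dq_i, with linear
# demand p = a - b*(q1 + q2 + q3) and constant marginal cost c.
import numpy as np

a, b, c = 10.0, 1.0, 2.0

def step(q, v):
    q = np.asarray(q, dtype=float)
    total = q.sum()
    dpi = a - c - b * total - b * q      # marginal profit of each firm
    return np.maximum(q + v * q * dpi, 0.0)

def tail_spread(v, n_iter=500, keep=50):
    q = np.array([1.0, 1.1, 0.9])
    for _ in range(n_iter - keep):
        q = step(q, v)
    traj = np.array([q := step(q, v) for _ in range(keep)])
    return traj[:, 0].max() - traj[:, 0].min()   # firm-1 output range, last 50 steps

for v in (0.2, 0.3, 0.36):                # stable -> periodic -> chaotic regimes
    print(f"adjustment speed v = {v}: firm-1 output spread = {tail_spread(v):.3f}")
```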

  13. A framework to analyze emissions implications of ...

    EPA Pesticide Factsheets

    Future year emissions depend highly on the evolution of the economy, technology, and current and future regulatory drivers. A scenario framework was adopted to analyze various technology development pathways and societal changes while considering existing regulations and future regulatory uncertainty, and to evaluate the resulting emissions growth patterns. The framework integrates EPA’s energy systems model with an economic Input-Output (I/O) Life Cycle Assessment model. The EPAUS9r MARKAL database is assembled from a set of technologies to represent the U.S. energy system within the MARKAL bottom-up, technology-rich energy modeling framework. The general state of the economy and the consequent demands for goods and services from these sectors are taken exogenously in MARKAL. It is important to characterize exogenous inputs about the economy to appropriately represent the industrial sector outlook for each of the scenarios and case studies evaluated. An economic input-output (I/O) model of the US economy is constructed to link up with MARKAL. The I/O model enables users to change input requirements (e.g. energy intensity) for different sectors or the share of consumer income expended on a given good. This gives end-users a mechanism for modeling change in the two dimensions of technological progress and consumer preferences that define the future scenarios. The framework will then be extended to include an environmental I/O framework to track life cycle emissions associated
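
    For readers unfamiliar with the input-output formalism mentioned above, the sketch below shows the core Leontief calculation such a model rests on: total sectoral output x solves x = Ax + d, i.e. x = (I - A)^-1 d, so changing a technical coefficient (such as an energy intensity) or final demand propagates through all sectors. The three-sector coefficients are toy values, not the EPAUS9r/MARKAL data.

```python
# Toy Leontief input-output calculation: x = (I - A)^-1 * d.
# The 3-sector technical-coefficient matrix A and final demand d are
# illustrative placeholders, not the EPAUS9r/MARKAL data.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # inputs required per unit of output
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
d = np.array([100.0, 50.0, 80.0])   # final demand by sector

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
x = L @ d
print("total sectoral output:", np.round(x, 1))

# e.g., lowering sector 1's energy intensity (coefficient A[2, 0]) by 20%
A_low = A.copy()
A_low[2, 0] *= 0.8
x_low = np.linalg.inv(np.eye(3) - A_low) @ d
print("output after intensity change:", np.round(x_low, 1))
```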

  14. Efficiency measurement and the operationalization of hospital production.

    PubMed Central

    Magnussen, J

    1996-01-01

    OBJECTIVE. To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. STUDY SETTING. Norwegian hospitals over the three-year period 1989-1991. STUDY DESIGN. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals are compared across models using various distribution-free tests. DATA COLLECTION. Input and output data are collected by the Norwegian Central Bureau of Statistics. PRINCIPAL FINDINGS. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. CONCLUSION. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile. PMID:8617607
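
    For readers who have not used DEA, the sketch below solves the standard input-oriented, constant-returns-to-scale (CCR) envelopment linear program for each unit with scipy; a score of 1 marks the efficient frontier. The hospital input/output figures are synthetic placeholders, and the two-input, two-output specification is an illustrative assumption rather than the study's model.

```python
# Input-oriented, constant-returns-to-scale DEA (CCR envelopment form),
# solved as a linear program for each unit; data are synthetic placeholders.
import numpy as np
from scipy.optimize import linprog

# rows = units (e.g., hospitals); columns = inputs / outputs
X = np.array([[20., 150.], [30., 200.], [25., 160.], [40., 300.]])  # inputs
Y = np.array([[500., 30.], [700., 35.], [550., 40.], [800., 50.]])  # outputs
n, m = X.shape
s = Y.shape[1]

def efficiency(o):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):                       # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                       # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"unit {o}: technical efficiency = {efficiency(o):.3f}")
```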

  15. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  16. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.

  17. Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.

    PubMed

    Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen

    2013-01-01

    We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
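
    The published model is not reproduced here, but a schematic three-ODE sketch in the spirit of the description above (resting cells, cycling cells, and IL-7 availability, with decaying thymic output and an IL-7 signal that gates apoptosis and cycle entry) may help make the structure concrete. All functional forms and parameter values below are loose assumptions for illustration only.

```python
# Schematic three-ODE naive T cell / IL-7 system in the spirit of the model
# described above; functional forms and parameters are illustrative guesses,
# not the published formulation.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    R, C, I = state                    # resting cells, cycling cells, IL-7
    theta = 10.0 * np.exp(-t / 20.0)   # decaying thymic output
    signal = I / (I + R + C + 1e-9)    # crude per-cell IL-7 signal strength
    death = 0.2 * np.exp(-5.0 * signal)        # apoptosis when signal is low
    prolif = 0.5 * max(signal - 0.3, 0.0)      # cycle entry above a threshold
    dR = theta - death * R - prolif * R + 2.0 * C   # cycling cells return as two
    dC = prolif * R - 1.0 * C
    dI = 1.0 - 0.05 * I - 0.01 * I * (R + C)   # IL-7 production minus consumption
    return [dR, dC, dI]

sol = solve_ivp(rhs, (0.0, 80.0), [1.0, 0.0, 20.0])
R, C, I = sol.y[:, -1]
print(f"after 80 time units: resting = {R:.2f}, cycling = {C:.2f}, IL-7 = {I:.2f}")
```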

  18. An optimal design of magnetostrictive material (MsM) based energy harvester

    NASA Astrophysics Data System (ADS)

    Hu, Jingzhen; Yuan, Fuh-Gwo; Xu, Fujun; Huang, Alex Q.

    2010-04-01

    In this study, an optimal vibration-based energy harvesting system using magnetostrictive material (MsM) has been designed to power the Wireless Intelligent Sensor Platform (WISP), developed at North Carolina State University. A linear MsM energy harvesting device has been modeled and optimized to maximize the power output. The effects of the number of MsM layers and glue layers, and of load matching, on the output power of the MsM energy harvester have been analyzed. From the measurement, the open circuit voltage can reach 1.5 V when the MsM cantilever beam operates at the 2nd natural frequency 324 Hz. The AC output power is 0.97 mW, giving a power density of 279 μW/cm3. Since the MsM device has low open circuit output voltage characteristics, a full-wave quadrupler has been designed to boost the rectified output voltage. To deliver the maximum output power to the load, a complex conjugate impedance matching between the load and the MsM device has been implemented using a discontinuous conduction mode (DCM) buck-boost converter. The maximum output power after the voltage quadrupler is now 705 μW and the power density reduces to 202.4 μW/cm3, which is comparable to the piezoelectric energy harvesters given in the literature. The output power delivered to a lithium rechargeable battery is around 630 μW, independent of the load resistance.
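
    The complex-conjugate matching rule mentioned above can be illustrated in a few lines: delivered power is maximized when the load impedance equals the conjugate of the source impedance, giving P_max = |V|^2 / (8 Re(Z_s)) for a source of open-circuit amplitude V. The source impedance value below is an assumed illustration, not the MsM device's measured impedance.

```python
# Complex-conjugate matching: delivered power peaks when Z_load = conj(Z_source).
# Source values here are illustrative, not the MsM device's measured impedance.
import numpy as np

V = 1.5                      # open-circuit voltage amplitude (V)
Zs = 200.0 - 1j * 900.0      # assumed source impedance (ohms)

def delivered_power(Zl):
    I = V / (Zs + Zl)                      # current phasor amplitude
    return 0.5 * abs(I) ** 2 * Zl.real     # average power into the load

p_match = delivered_power(np.conj(Zs))            # equals V**2 / (8 * Zs.real)
p_resistive = delivered_power(complex(abs(Zs), 0.0))
print(f"conjugate-matched load:          {p_match * 1e3:.2f} mW")
print(f"magnitude-matched resistive load: {p_resistive * 1e3:.2f} mW")
```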

  19. Economic effects of state park recreation in Pennsylvania

    Treesearch

    Charles H. Strauss; Bruce E. Lord

    1992-01-01

    The economic effects resulting from the use and operation of Pennsylvania's state park system were analyzed with an input-output model of the state's economy. Direct expenditures by park users and park operations were estimated at $263 million for the 1987 study year. Secondary effects, stemming from interindustry trade and recreation-related employment,...

  20. Modeling and advanced sliding mode controls of crawler cranes considering wire rope elasticity and complicated operations

    NASA Astrophysics Data System (ADS)

    Tuan, Le Anh; Lee, Soon-Geul

    2018-03-01

    In this study, a new mathematical model of crawler cranes is developed for heavy working conditions, with payload-lifting and boom-hoisting motions simultaneously activated. The system model is built with full consideration of wind disturbances, geometrical nonlinearities, and cable elasticities of cargo lifting and boom luffing. On the basis of this dynamic model, three versions of sliding mode control are analyzed and designed to control five system outputs with only two inputs. When used in complicated operations, the effectiveness of the controllers is analyzed using analytical investigation and numerical simulation. Results indicate the effectiveness of the control algorithms and the proposed dynamic model. The control algorithms asymptotically stabilize the system with finite-time convergences, remaining robust amid disturbances and parametric uncertainties.
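
    The paper's two-input, five-output crane controllers are not reproduced here; as a minimal illustration of the sliding mode idea they build on, the sketch below drives a disturbed double integrator onto the sliding surface s = e_dot + lam*e with the switching law u = -k*sign(s). The plant, gains, and disturbance are illustrative assumptions.

```python
# Generic sliding mode control of a disturbed double integrator (x'' = u + d),
# not the paper's crane model: s = e_dot + lam*e, u = -k*sign(s) - lam*e_dot.
import numpy as np

lam, k = 2.0, 5.0
dt, T = 1e-3, 10.0
x, v = 1.0, 0.0            # initial position error and velocity
x_ref = 0.0

for i in range(int(T / dt)):
    t = i * dt
    e, e_dot = x - x_ref, v
    s = e_dot + lam * e                    # sliding variable
    u = -k * np.sign(s) - lam * e_dot      # switching + equivalent-control term
    d = 0.5 * np.sin(2.0 * t)              # bounded matched disturbance (|d| < k)
    v += (u + d) * dt
    x += v * dt

print(f"final tracking error = {x - x_ref:.4f}")
```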

  1. Measuring Efficiency of Knowledge Production in Health Research Centers Using Data Envelopment Analysis (DEA): A Case Study in Iran.

    PubMed

    Amiri, Mohammad Meskarpour; Nasiri, Taha; Saadat, Seyed Hassan; Anabad, Hosein Amini; Ardakan, Payman Mahboobi

    2016-11-01

    Efficiency analysis is necessary in order to avoid waste of materials, energy, effort, money, and time during scientific research. Therefore, analyzing efficiency of knowledge production in health areas is necessary, especially for developing and in-transition countries. As the first step in this field, the aim of this study was the analysis of selected health research center efficiency using data envelopment analysis (DEA). This retrospective and applied study was conducted in 2015 using input and output data of 16 health research centers affiliated with a health sciences university in Iran during 2010-2014. The technical efficiency of health research centers was evaluated based on three basic data envelopment analysis (DEA) models: input-oriented, output-oriented, and hyperbolic-oriented. The input and output data of each health research center for years 2010-2014 were collected from the Iran Ministry of Health and Medical Education (MOHE) profile and analyzed by R software. The mean efficiency score in input-oriented, output-oriented, and hyperbolic-oriented models was 0.781, 0.671, and 0.798, respectively. Based on results of the study, half of the health research centers are operating below full efficiency, and about one-third of them are operating under the average efficiency level. There is also a large gap between health research center efficiency relative to each other. It is necessary for health research centers to improve their efficiency in knowledge production through better management of available resources. The higher level of efficiency in a significant number of health research centers is achievable through more efficient management of human resources and capital. Further research is needed to measure and follow the efficiency of knowledge production by health research centers around the world and over a period of time.

  2. Diagnosis and Quantification of Climatic Sensitivity of Carbon Fluxes in Ensemble Global Ecosystem Models

    NASA Astrophysics Data System (ADS)

    Wang, W.; Hashimoto, H.; Milesi, C.; Nemani, R. R.; Myneni, R.

    2011-12-01

    Terrestrial ecosystem models are primary scientific tools to extrapolate our understanding of ecosystem functioning from point observations to global scales as well as from past climatic conditions into the future. However, no model is nearly perfect and there are often considerable structural uncertainties between different models. Ensemble model experiments have thus become a mainstream approach in evaluating the current status of the global carbon cycle and predicting its future changes. A key task in such applications is to quantify the sensitivity of the simulated carbon fluxes to climate variations and changes. Here we develop a systematic framework to address this question solely by analyzing the inputs and the outputs from the models. The principle of our approach is to treat the long-term (~30 years) average of the inputs/outputs as a quasi-equilibrium of the climate-vegetation system while treating the anomalies of carbon fluxes as responses to climatic disturbances. In this way, the corresponding relationships can be largely linearized and analyzed using conventional time-series techniques. This method is used to characterize three major aspects of the vegetation models that are most important to the global carbon cycle, namely the primary production, the biomass dynamics, and the ecosystem respiration. We apply this analytical framework to quantify the climatic sensitivity of an ensemble of models including CASA, Biome-BGC, LPJ as well as several other DGVMs from previous studies, all driven by the CRU-NCEP climate dataset. The detailed analysis results are reported in this study.
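
    A minimal sketch of the linearization step described above: subtract the long-term mean to form anomalies, then regress carbon-flux anomalies on climate-forcing anomalies by ordinary least squares to obtain sensitivity coefficients. The GPP, temperature, and precipitation series below are synthetic stand-ins, not output from CASA, Biome-BGC, or LPJ.

```python
# Linearized climate sensitivity of a model's carbon flux: regress flux
# anomalies on temperature and precipitation anomalies. Data are synthetic
# stand-ins for model output driven by CRU-NCEP-like forcing.
import numpy as np

rng = np.random.default_rng(2)
n_years = 30
temp = 15 + rng.normal(0, 1.0, n_years)          # deg C
precip = 800 + rng.normal(0, 80.0, n_years)      # mm/yr
gpp = 1200 + 25 * (temp - 15) + 0.8 * (precip - 800) + rng.normal(0, 30, n_years)

# anomalies relative to the ~30-year quasi-equilibrium mean
dT, dP, dF = temp - temp.mean(), precip - precip.mean(), gpp - gpp.mean()

A = np.column_stack([dT, dP])
coef, *_ = np.linalg.lstsq(A, dF, rcond=None)
print(f"dGPP/dT = {coef[0]:.1f} gC m-2 yr-1 K-1, "
      f"dGPP/dP = {coef[1]:.2f} gC m-2 yr-1 mm-1")
```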

  3. Quantum Otto heat engine with three-qubit XXZ model as working substance

    NASA Astrophysics Data System (ADS)

    Huang, X. L.; Sun, Qi; Guo, D. Y.; Yu, Qian

    2018-02-01

    A quantum Otto heat engine is established with a three-qubit Heisenberg XXZ model with Dzyaloshinskii-Moriya (DM) interaction under a homogeneous magnetic field as the working substance. The quantum Otto engine is composed of two quantum isochoric processes and two quantum adiabatic processes. Here we have restricted Bc/Bh = Jc/Jh = r in the two adiabatic processes, where r is the adiabatic compression ratio. The work output and efficiency are calculated for our cycle. The possible adiabatic compression ratios and the ratios of work output between our working substance and a single spin under the same external conditions in the Otto cycle are analyzed with different DM interaction parameters and anisotropic parameters. The effects of pairwise entanglements on the heat engine efficiency are discussed.

  4. Design and test hardware for a solar array switching unit

    NASA Technical Reports Server (NTRS)

    Patil, A. R.; Cho, B. H.; Sable, D.; Lee, F. C.

    1992-01-01

    This paper describes the control of a pulse width modulated (PWM) type sequential shunt switching unit (SSU) for spacecraft applications. It is found that the solar cell output capacitance has a significant impact on SSU design. Shorting of this cell capacitance by the PWM switch causes input current surges. These surges are minimized by the use of a series filter inductor. The system with a filter is analyzed for ripple and the control to output-voltage transfer function. Stable closed loop design considerations are discussed. The results are supported by modeling and measurements of loop gain and of closed-loop bus impedance on test hardware for NASA's 120 V Earth Observation System (EOS). The analysis and modeling are also applicable to NASA's 160 V Space Station power system.

  5. On cup anemometer rotor aerodynamics.

    PubMed

    Pindado, Santiago; Pérez, Javier; Avila-Sanchez, Sergio

    2012-01-01

    The influence of anemometer rotor shape parameters, such as the cups' front area or their center rotation radius, on the anemometer's performance was analyzed. This analysis was based on calibrations performed on two different anemometers (one based on a magnet system output signal, and the other one based on an opto-electronic system output signal), tested with 21 different rotors. The results were compared to the ones resulting from classical analytical models. The results clearly showed a linear dependency of both calibration constants, the slope and the offset, on the cups' center rotation radius, the influence of the front area of the cups also being observed. The analytical model of Kondo et al. was proved to be accurate if it is based on precise data related to the aerodynamic behavior of a rotor's cup.

  6. Spartan Release Engagement Mechanism (REM) stress and fracture analysis

    NASA Technical Reports Server (NTRS)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    The revised stress and fracture analysis of the Spartan REM hardware for current load conditions and mass properties is presented. The stress analysis was performed using a NASTRAN math model of the Spartan REM adapter, base, and payload. Appendix A contains the material properties, loads, and stress analysis of the hardware. The computer output and model description are in Appendix B. Factors of safety used in the stress analysis were 1.4 on tested items and 2.0 on all other items. Fracture analysis of the items considered fracture critical was accomplished using the MSFC Crack Growth Analysis code. Loads and stresses were obtained from the stress analysis. The fracture analysis notes are located in Appendix A and the computer output in Appendix B. All items analyzed met design and fracture criteria.

  7. Evaluation of a Postdischarge Call System Using the Logic Model.

    PubMed

    Frye, Timothy C; Poe, Terri L; Wilson, Marisa L; Milligan, Gary

    2018-02-01

    This mixed-method study was conducted to evaluate a postdischarge call program for congestive heart failure patients at a major teaching hospital in the southeastern United States. The program was implemented based on the premise that it would improve patient outcomes and overall quality of life, but it had never been evaluated for effectiveness. The Logic Model was used to evaluate the input of key staff members to determine whether the outputs and results of the program matched the expectations of the organization. Interviews, online surveys, reviews of existing patient outcome data, and reviews of publicly available program marketing materials were used to ascertain current program output. After analyzing both qualitative and quantitative data from the evaluation, recommendations were made to the organization to improve the effectiveness of the program.

  8. Theoretical evaluation of a continuous-wave Ho3+:BaY2F8 laser with mid-infrared emission

    NASA Astrophysics Data System (ADS)

    Rong, Kepeng; Cai, He; An, Guofei; Han, Juhong; Yu, Hang; Wang, Shunyan; Yu, Qiang; Wu, Peng; Zhang, Wei; Wang, Hongyuan; Wang, You

    2018-01-01

    In this paper, we build a theoretical model to study a continuous-wave (CW) Ho3+:BaY2F8 laser by considering both energy transfer up-conversion (ETU) and cross relaxation (CR) processes. The influences of the pump power, reflectance of an output coupler (OC), and crystal length on the output features are systematically analyzed for an end-pumped configuration. We also investigate how the processes of ETU and CR in the energy-level system affect the output of a Ho3+:BaY2F8 laser by use of the kinetic evaluation. The simulation results show that the optical-to-optical efficiency can be promoted by adjusting parameters such as the reflectance of the output coupler, crystal length, and pump power. It has been theoretically demonstrated that the threshold of a Ho3+:BaY2F8 laser is very high for lasing operation in the CW mode.

  9. ICAN: Integrated composites analyzer

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Chamis, C. C.

    1984-01-01

    The ICAN computer program performs all the essential aspects of mechanics/analysis/design of multilayered fiber composites. Modular, open-ended and user friendly, the program can handle a variety of composite systems having one type of fiber and one matrix as constituents as well as intraply and interply hybrid composite systems. It can also simulate isotropic layers by considering a primary composite system with negligible fiber volume content. This feature is specifically useful in modeling thin interply matrix layers. Hygrothermal conditions and various combinations of in-plane and bending loads can also be considered. Usage of this code is illustrated with a sample input and the generated output. Some key features of output are stress concentration factors around a circular hole, locations of probable delamination, a summary of the laminate failure stress analysis, free edge stresses, microstresses and ply stress/strain influence coefficients. These features make ICAN a powerful, cost-effective tool to analyze/design fiber composite structures and components.

  10. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
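
    The error-compensation effect described above can be reproduced in a few lines of Monte Carlo: classify pixels with an assumed per-pixel error rate, aggregate 25 pixels into each cell as a class proportion, and compare the cell-level error with the per-pixel rate. Under the assumptions below the cell-level error comes out roughly half the per-pixel rate, loosely consistent with the reported reduction; the error model is an illustration, not the study's.

```python
# Monte Carlo sketch of error compensation by aggregating classified pixels
# into larger GIS cells (25 pixels per cell). The per-pixel misclassification
# rate is an assumed illustrative value.
import numpy as np

rng = np.random.default_rng(3)
n_cells, pixels_per_cell = 10_000, 25
p_error = 0.15                      # per-pixel misclassification probability

true_frac = rng.uniform(0, 1, n_cells)             # true class proportion per cell
truth = rng.random((n_cells, pixels_per_cell)) < true_frac[:, None]
flip = rng.random((n_cells, pixels_per_cell)) < p_error
classified = np.where(flip, ~truth, truth)

# cell-level attribute: proportion of pixels labeled as the class
est_frac = classified.mean(axis=1)
cell_err = np.abs(est_frac - true_frac).mean()
pixel_err = (classified != truth).mean()
print(f"per-pixel error rate: {pixel_err:.3f}")
print(f"mean cell-proportion error after 25-pixel aggregation: {cell_err:.3f}")
```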

  11. Jupyter meets Earth: Creating Comprehensible and Reproducible Scientific Workflows with Jupyter Notebooks and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Erickson, T.

    2016-12-01

    Deriving actionable information from Earth observation data obtained from sensors or models can be quite complicated, and sharing those insights with others in a form that they can understand, reproduce, and improve upon is equally difficult. Journal articles, even if digital, commonly present just a summary of an analysis that cannot be understood in depth or reproduced without major effort on the part of the reader. Here we show a method of improving scientific literacy by pairing a recently developed scientific presentation technology (Jupyter Notebooks) with a petabyte-scale platform for accessing and analyzing Earth observation and model data (Google Earth Engine). Jupyter Notebooks are interactive web documents that mix live code with annotations such as rich-text markup, equations, images, videos, hyperlinks and dynamic output. Notebooks were first introduced as part of the IPython project in 2011, and have since gained wide acceptance in the scientific programming community, initially among Python programmers but later by a wide range of scientific programming languages. While Jupyter Notebooks have been widely adopted for general data analysis, data visualization, and machine learning, to date there have been relatively few examples of using Jupyter Notebooks to analyze geospatial datasets. Google Earth Engine is a cloud-based platform for analyzing geospatial data, such as satellite remote sensing imagery and/or Earth system model output. Through its Python API, Earth Engine makes petabytes of Earth observation data accessible, and provides hundreds of algorithmic building blocks that can be chained together to produce high-level algorithms and outputs in real-time. We anticipate that this technology pairing will facilitate a better way of creating, documenting, and sharing complex analyses that derive information on our Earth that can be used to promote broader understanding of the complex issues that it faces. http://jupyter.org https://earthengine.google.com
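
    The call pattern of the Earth Engine Python API, as it might appear in a notebook cell, is sketched below. The chaining style and method names are real, but the dataset ID, band, dates, and location are placeholder assumptions, and running it requires an authenticated Earth Engine account.

```python
# Minimal Earth Engine Python API pattern for a notebook cell; dataset ID,
# band, dates, and location are placeholders, and an authenticated Earth
# Engine account is required to run it.
import ee

ee.Initialize()

point = ee.Geometry.Point([-122.26, 37.87])             # hypothetical location
collection = (ee.ImageCollection("MODIS/061/MOD11A2")   # assumed dataset ID
              .filterDate("2020-01-01", "2020-12-31")
              .select("LST_Day_1km"))                   # assumed band name

mean_image = collection.mean()
value = mean_image.reduceRegion(reducer=ee.Reducer.mean(),
                                geometry=point, scale=1000)
print(value.getInfo())   # computation runs server-side; getInfo pulls the result
```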

  12. Experimental research on the stability and the multilongitudinal mode interference of bidirectional outputs of LD-pumped solid state ring laser

    NASA Astrophysics Data System (ADS)

    Wan, Shunping; Tian, Qian; Sun, Liqun; Yao, Minyan; Mao, Xianhui; Qiu, Hongyun

    2004-05-01

    This paper reports experimental research on the stability of the bidirectional outputs and the multi-longitudinal-mode interference of a laser-diode end-pumped Nd:YVO4 solid-state ring laser (DPSSL). Bidirectional, multi-longitudinal, TEM00-mode continuous-wave outputs are obtained; the output powers are measured and their stabilities analyzed. The spectral characteristic of the outputs is measured. The interference pattern of the bidirectional longitudinal-mode outputs is obtained and analyzed for the ring cavity under rotation. The movement of the interference fringes of the multi-longitudinal modes is very sensitive to deformation of the setup base and fluctuation of the intracavity air, but is stationary or dithers randomly when the stage is rotating.

  13. Atmospheric model development in support of SEASAT. Volume 2: Analysis models

    NASA Technical Reports Server (NTRS)

    Langland, R. A.

    1977-01-01

    As part of the SEASAT program of NASA, two sets of analysis programs were developed for the Jet Propulsion Laboratory. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure and twelve levels of upper air temperature, height and wind analyses. The analysis output is used to initialize the primitive equation forecast models.

  14. User's guide for a computer program to analyze the LRC 16 ft transonic dynamics tunnel cable mount system

    NASA Technical Reports Server (NTRS)

    Barbero, P.; Chin, J.

    1973-01-01

    The theoretical derivation of the set of equations applicable to modeling the dynamic characteristics of aeroelastically scaled models flown on the two-cable mount system in the 16 ft transonic dynamics tunnel is discussed. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3 DOF longitudinal and lateral/directional dynamic conditions for various flying cable and snubber cable configurations. Sample input and output are included.

  15. On the Predictability of Northeast Monsoon Rainfall over South Peninsular India in General Circulation Models

    NASA Astrophysics Data System (ADS)

    Nair, Archana; Acharya, Nachiketa; Singh, Ankita; Mohanty, U. C.; Panda, T. C.

    2013-11-01

    In this study the predictability of northeast monsoon (Oct-Nov-Dec) rainfall over peninsular India by eight general circulation model (GCM) outputs was analyzed. These GCM outputs (forecasts for the whole season issued in September) were compared with high-resolution observed gridded rainfall data obtained from the India Meteorological Department for the period 1982-2010. Rainfall, interannual variability (IAV), correlation coefficients, and index of agreement were examined for the outputs of eight GCMs and compared with observation. It was found that the models are able to reproduce rainfall and IAV to different extents. The predictive power of GCMs was also judged by determining the signal-to-noise ratio and the external error variance; it was noted that the predictive power of the models was usually very low. To examine dominant modes of interannual variability, empirical orthogonal function (EOF) analysis was also conducted. EOF analysis of the models revealed they were capable of representing the observed precipitation variability to some extent. The teleconnection between the sea surface temperature (SST) and northeast monsoon rainfall was also investigated and results suggest that during OND the SST over the equatorial Indian Ocean, the Bay of Bengal, the central Pacific Ocean (over Nino3 region), and the north and south Atlantic Ocean enhances northeast monsoon rainfall. This observed phenomenon is only predicted by the CCM3v6 model.
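
    The evaluation statistics named above (interannual variability, the correlation coefficient, and the index of agreement) are straightforward to compute once model and observed seasonal series are aligned; the sketch below uses Willmott's index of agreement and synthetic stand-ins for the 1982-2010 OND rainfall series rather than the IMD gridded data or the GCM hindcasts.

```python
# Skill statistics for a seasonal rainfall hindcast: interannual variability,
# correlation, and Willmott's index of agreement. Series are synthetic
# stand-ins for 1982-2010 OND rainfall.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1982, 2011)
obs = 300 + rng.normal(0, 60, years.size)                 # mm per OND season
model = 0.6 * (obs - obs.mean()) + obs.mean() + rng.normal(0, 50, years.size)

iav_obs, iav_model = obs.std(ddof=1), model.std(ddof=1)   # interannual variability
r = np.corrcoef(model, obs)[0, 1]
d = 1 - np.sum((model - obs) ** 2) / np.sum(
        (np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2)

print(f"IAV obs = {iav_obs:.1f} mm, IAV model = {iav_model:.1f} mm")
print(f"correlation r = {r:.2f}, index of agreement d = {d:.2f}")
```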

  16. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
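
    The Jones-matrix calculation at the heart of such a model is compact enough to sketch: for a single birefringent plate with retardance delta = 2*pi*dn*L/lambda oriented at angle theta between parallel polarizers, the transmission is T = 1 - sin^2(2*theta)*sin^2(delta/2). The plate thickness, birefringence, axis angle, and wavelengths below are illustrative values, unrelated to the program's input groups.

```python
# Single-stage birefringent (Lyot-type) filter element between parallel
# polarizers, via Jones matrices. Plate thickness, birefringence, and axis
# angle are illustrative values, not outputs of the program described above.
import numpy as np

def rot(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

def retarder(delta, theta):
    D = np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])
    return rot(-theta) @ D @ rot(theta)

polarizer_x = np.array([[1, 0], [0, 0]], dtype=complex)

dn, L, theta = 0.0091, 2e-3, np.pi / 4      # quartz-like birefringence, 2 mm plate, 45 deg
for wl in np.linspace(1.04e-6, 1.08e-6, 5):
    delta = 2 * np.pi * dn * L / wl
    J = polarizer_x @ retarder(delta, theta) @ polarizer_x
    E_out = J @ np.array([1.0, 0.0])
    T = np.abs(E_out[0]) ** 2               # equals 1 - sin^2(2*theta)*sin^2(delta/2)
    print(f"lambda = {wl * 1e9:.1f} nm   T = {T:.3f}")
```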

  17. A fault injection experiment using the AIRLAB Diagnostic Emulation Facility

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Mangum, Scott; Scheper, Charlotte

    1988-01-01

    The preparation for, conduct of, and results of a simulation based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities is described. An objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of the test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggest fault latency reduction opportunities.

  18. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise.

    PubMed

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
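
    The quantity of interest above, the serial correlation coefficient of interspike intervals at lag k, can be estimated from any ISI sequence as shown below. The surrogate intervals are generated from an AR(1) latent process purely for illustration; they are not drawn from the dichotomous-noise-driven model the paper analyzes.

```python
# Serial correlation coefficient of interspike intervals at lag k, estimated
# from a surrogate ISI sequence (AR(1)-correlated, not the dichotomous-noise
# model analyzed in the paper).
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
eta = rng.normal(0, 1, n)
z = np.empty(n)
z[0] = eta[0]
for i in range(1, n):                     # AR(1) latent process => correlated ISIs
    z[i] = 0.5 * z[i - 1] + eta[i]
isi = np.exp(0.2 * z) * 10e-3             # positive intervals around ~10 ms

def scc(isi, k):
    x, y = isi[:-k], isi[k:]
    return np.mean((x - x.mean()) * (y - y.mean())) / isi.var()

for k in (1, 2, 3, 5):
    print(f"rho_{k} = {scc(isi, k):+.3f}")
```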

  19. Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise

    NASA Astrophysics Data System (ADS)

    Mankin, Romi; Rekker, Astrid

    2016-12-01

    The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.

  20. A Novel Estimator for the Rate of Information Transfer by Continuous Signals

    PubMed Central

    Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko

    2011-01-01

    The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562
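
    For context, the classical Gaussian-channel estimator that such methods are usually benchmarked against can be written in a few lines: estimate the magnitude-squared coherence C(f) between input and output and integrate -log2(1 - C(f)) over frequency. The signals below are synthetic, and this is the conventional baseline, not the novel estimator the paper introduces.

```python
# Classical Gaussian-channel information rate baseline (not the paper's new
# estimator): R = -integral log2(1 - C(f)) df, with coherence C(f) estimated
# by Welch's method on synthetic input/output signals.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs, n = 1000.0, 2 ** 16
x = rng.normal(0, 1, n)                                          # input signal
y = np.convolve(x, np.ones(5) / 5, mode="same") + 0.5 * rng.normal(0, 1, n)

f, C = coherence(x, y, fs=fs, nperseg=1024)
C = np.clip(C, 0, 1 - 1e-12)              # avoid log of zero at perfect coherence
rate = -np.trapz(np.log2(1 - C), f)       # bits per second
print(f"estimated information rate = {rate:.1f} bits/s")
```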

  1. Simulation of a Radio-Frequency Photogun for the Generation of Ultrashort Beams

    NASA Astrophysics Data System (ADS)

    Nikiforov, D. A.; Levichev, A. E.; Barnyakov, A. M.; Andrianov, A. V.; Samoilov, S. L.

    2018-04-01

    A radio-frequency photogun for the generation of ultrashort electron beams to be used in fast electron diffractoscopy, wakefield acceleration experiments, and the design of accelerating structures of the millimeter range is modeled. The beam parameters at the photogun output needed for each type of experiment are determined. The general outline of the photogun is given, its electrodynamic parameters are calculated, and the accelerating field distribution is obtained. The particle dynamics is analyzed in the context of the required output beam parameters. The optimal initial beam characteristics and field amplitudes are chosen. A conclusion is made regarding the obtained beam parameters.

  2. Modal Parameter Identification of a Flexible Arm System

    NASA Technical Reports Server (NTRS)

    Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard

    1998-01-01

    In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. Then a least-squares technique is used to analyze the experimental input/output data to obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.

  3. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  4. Volterra-series-based nonlinear system modeling and its engineering applications: A state-of-the-art review

    NASA Astrophysics Data System (ADS)

    Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.

    2017-03-01

    Nonlinear problems have drawn great interest and extensive attention from engineers, physicists and mathematicians and many other scientists because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency domain concepts which are derived from the Volterra series, including the general frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationship between the Volterra series and other nonlinear system models and nonlinear problem solving methods is discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems and the state of the art in series convergence and kernel identification studies are comprehensively introduced. In addition, a detailed review is given of the applications of the Volterra series in mechanical engineering, aeroelasticity, control engineering, and electronic and electrical engineering.
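
    As a minimal, concrete instance of the formalism reviewed above, the sketch below evaluates a discrete-time Volterra series truncated at second order, y(n) = sum_k h1(k) x(n-k) + sum_{k1,k2} h2(k1,k2) x(n-k1) x(n-k2), for arbitrary illustrative kernels; it is not tied to any of the identification methods surveyed in the paper.

```python
# Evaluating a truncated (second-order) discrete Volterra series:
# y(n) = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2].
# The kernels h1 and h2 are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(7)
M = 8                                          # kernel memory length
h1 = 0.8 ** np.arange(M)                       # first-order (linear) kernel
h2 = 0.05 * np.outer(0.6 ** np.arange(M), 0.6 ** np.arange(M))  # second-order kernel

def volterra(x, h1, h2):
    M = len(h1)
    y = np.zeros_like(x)
    for n in range(M - 1, len(x)):
        window = x[n - M + 1:n + 1][::-1]      # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ window + window @ h2 @ window
    return y

x = rng.normal(0, 1, 2000)
y = volterra(x, h1, h2)
y_lin = volterra(x, h1, np.zeros_like(h2))
print("output variance:", round(y.var(), 3), "| linear-only variance:", round(y_lin.var(), 3))
```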

  5. Analysis of current density and specific absorption rate in biological tissue surrounding transcutaneous transformer for an artificial heart.

    PubMed

    Shiba, Kenji; Nukaya, Masayuki; Tsuji, Toshio; Koshiji, Kohji

    2008-01-01

    This paper reports on the current density and specific absorption rate (SAR) analysis of biological tissue surrounding an air-core transcutaneous transformer for an artificial heart. The electromagnetic field in the biological tissue is analyzed by the transmission line modeling method, and the current density and SAR as functions of frequency, output voltage, output power, and coil dimension are calculated. The biological tissue of the model has three layers: skin, fat, and muscle. The results of the simulation analysis show the SARs to be very small at any given transmission condition, about 2-14 mW/kg, compared to the basic restrictions of the International Commission on Non-Ionizing Radiation Protection (ICNIRP; 2 W/kg), while the ratio of the current density to the ICNIRP basic restrictions becomes smaller as the frequency rises and the output voltage falls. It is possible to transfer energy below the ICNIRP basic restrictions when the frequency is over 250 kHz and the output voltage is under 24 V. Also, the part of the biological tissue in which the current density is maximized differs with frequency: at low frequencies it is muscle, and at high frequencies it is skin. The boundary is in the vicinity of 600-1000 kHz.

  6. Design and experiment of vehicular charger AC/DC system based on predictive control algorithm

    NASA Astrophysics Data System (ADS)

    He, Guangbi; Quan, Shuhai; Lu, Yuzhang

    2018-06-01

    For the uncontrolled rectifier stage of a vehicular charger, this paper proposes a predictive control algorithm for the DC/DC converter. The prediction model is established by the state-space averaging method, its optimal mathematical description is obtained through calculation, and the prediction algorithm is analyzed by Simulink simulation. The structure of the vehicular charger is designed to meet the rated output power and an adjustable output voltage: the first stage is a three-phase uncontrolled rectifier whose DC voltage Ud is smoothed by a filter capacitor, followed by a two-phase interleaved buck-boost circuit that provides the required wide-range output voltage; its working principle is analyzed and the parameters for component design and selection are determined. The analysis of the current ripple shows that the two-phase interleaved parallel connection reduces both the output current ripple and the losses. The simulation of the complete charging circuit is carried out in software, and the results meet the design requirements of the system. Finally, software and hardware are combined to implement charging according to the requirements; the experimental platform demonstrates the feasibility and effectiveness of the proposed predictive control algorithm for the vehicular charger, consistent with the simulation results.

  7. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of three-dimensional multiple-actuated-wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on the unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  8. Electrode Coverage Optimization for Piezoelectric Energy Harvesting from Tip Excitation

    PubMed Central

    Chen, Guangzhu; Bai, Nan

    2018-01-01

    Piezoelectric energy harvesting using cantilever-type structures has been extensively investigated due to its potential application in providing power supplies for wireless sensor networks, but the low output power has been a bottleneck for its further commercialization. To improve the power conversion capability, a piezoelectric beam with different electrode coverage ratios is studied theoretically and experimentally in this paper. A distributed-parameter theoretical model is established for a bimorph piezoelectric beam with the consideration of the electrode coverage area. The impact of the electrode coverage on the capacitance, the output power and the optimal load resistance are analyzed, showing that the piezoelectric beam has the best performance with an electrode coverage of 66.1%. An experimental study was then carried out to validate the theoretical results using a piezoelectric beam fabricated with segmented electrodes. The experimental results fit well with the theoretical model. A 12% improvement on the Root-Mean-Square (RMS) output power was achieved with the optimized electrode converge ratio (66.1%). This work provides a simple approach to utilizing piezoelectric beams in a more efficient way. PMID:29518934

  9. Validation of Simplified Load Equations through Loads Measurement and Modeling of a Small Horizontal-Axis Wind Turbine Tower; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana, S.; Damiani, R.; vanDam, J.

    As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, NREL tested a small horizontal axis wind turbine in the field at the National Wind Technology Center (NWTC). The test turbine was a 2.1-kW downwind machine mounted on an 18-meter multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads. In this project, we compared fatigue loads as measured in the field, as predicted by the aeroelastic model, and as calculated using the simplified design equations.

  10. Noninvasive measurement of pulsatile intracranial pressure using ultrasound

    NASA Technical Reports Server (NTRS)

    Ueno, T.; Ballard, R. E.; Shuer, L. M.; Cantrell, J. H.; Yost, W. T.; Hargens, A. R.

    1998-01-01

    The present study was designed to validate our noninvasive ultrasonic technique (pulse phase locked loop: PPLL) for measuring intracranial pressure (ICP) waveforms. The technique is based upon detecting skull movements which are known to occur in conjunction with altered intracranial pressure. In bench model studies, PPLL output was highly correlated with changes in the distance between a transducer and a reflecting target (R2 = 0.977). In cadaver studies, transcranial distance was measured while pulsations of ICP (amplitudes of zero to 10 mmHg) were generated by rhythmic injections of saline. Frequency analyses (fast Fourier transformation) clearly demonstrate the correspondence between the PPLL output and ICP pulse cycles. Although theoretically there is a slight possibility that changes in the PPLL output are caused by changes in the ultrasonic velocity of brain tissue, the decreased amplitudes of the PPLL output as the external compression of the head was increased indicates that the PPLL output represents substantial skull movement associated with altered ICP. In conclusion, the ultrasound device has sufficient sensitivity to detect transcranial pulsations which occur in association with the cardiac cycle. Our technique makes it possible to analyze ICP waveforms noninvasively and will be helpful for understanding intracranial compliance and cerebrovascular circulation.

  11. A wide-band, high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Wilck, H. C.; Quirk, M. P.; Grimm, M. J.

    1985-01-01

    A million-channel, 20 MHz-bandwidth, digital spectrum analyzer under development for use in the SETI Sky Survey and other applications in the Deep Space Network is described. The analyzer digitizes an analog input, performs a 2^20-point radix-2 Fast Fourier Transform, accumulates the output power, and normalizes the output to remove frequency-dependent gain. The effective speed of the real-time hardware is 2.2 GigaFLOPS.
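
    The processing chain described above (FFT channelization, power accumulation, gain normalization) is easy to mimic at reduced scale; the sketch below uses a 2^14-point FFT instead of 2^20 and an assumed passband gain shape, and recovers a weak CW tone buried in noise after accumulating 64 frames.

```python
# Sketch of the spectrum-analyzer processing chain: FFT channelization,
# power accumulation, and gain normalization. Sizes and gain shape are
# scaled-down assumptions, not the instrument's actual design values.
import numpy as np

rng = np.random.default_rng(8)
n_fft, n_frames, fs = 2 ** 14, 64, 40e6

# assumed frequency-dependent power gain of the receiver passband
freq_gain = 1.0 + 0.5 * np.hanning(n_fft)

accum = np.zeros(n_fft)
t = np.arange(n_fft) / fs
for _ in range(n_frames):
    x = rng.normal(size=n_fft) + 0.3 * np.sin(2 * np.pi * 5.0e6 * t)  # noise + weak CW tone
    X = np.fft.fft(x)                          # channelize one frame
    accum += freq_gain * np.abs(X) ** 2        # gain applied per channel, power accumulated

spectrum = accum / (n_frames * freq_gain)      # normalize out the gain shape
peak_bin = np.argmax(spectrum[: n_fft // 2])
print(f"detected tone near {peak_bin * fs / n_fft / 1e6:.2f} MHz")
```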

  12. Performance and Feasibility Analysis of a Wind Turbine Power System for Use on Mars

    NASA Technical Reports Server (NTRS)

    Lichter, Matthew D.; Viterna, Larry

    1999-01-01

    A wind turbine power system for future missions to the Martian surface was studied for performance and feasibility. A C++ program was developed from existing FORTRAN code to analyze the power capabilities of wind turbines under different environments and design philosophies. Power output, efficiency, torque, thrust, and other performance criteria could be computed given design geometries, atmospheric conditions, and airfoil behavior. After reviewing performance of such a wind turbine, a conceptual system design was modeled to evaluate feasibility. More analysis code was developed to study and optimize the overall structural design. Findings of this preliminary study show that turbine power output on Mars could be as high as several hundred kilowatts. The optimized conceptual design examined here would have a power output of 104 kW, total mass of 1910 kg, and specific power of 54.6 W/kg.

  13. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and in general, the ith order statistic, are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
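
    To make the combining schemes concrete, the following minimal sketch (not the authors' code) implements the median, maximum, i-th order statistic, and trimmed combiners over a stack of classifier outputs; it assumes each base classifier returns class-posterior estimates, and the toy data are invented for illustration.

        # Order-statistic combiners for classifier outputs: median, maximum,
        # i-th order statistic, and a trimmed linear combination of the ordered outputs.
        import numpy as np

        def order_statistic_combine(posteriors, stat="median", i=None, trim=1):
            """posteriors: array of shape (n_classifiers, n_classes)."""
            ordered = np.sort(posteriors, axis=0)          # order statistics per class
            if stat == "median":
                combined = np.median(posteriors, axis=0)
            elif stat == "max":
                combined = ordered[-1]
            elif stat == "ith":
                combined = ordered[i]                      # i-th order statistic
            elif stat == "trim":
                combined = ordered[trim:len(ordered) - trim].mean(axis=0)
            else:
                raise ValueError(stat)
            return int(np.argmax(combined))                # predicted class label

        # Example: three classifiers, one of which is badly miscalibrated.
        p = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
        print(order_statistic_combine(p, "median"))        # robust to the outlier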

  14. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.

  15. Design and analysis of a MEMS-based bifurcate-shape piezoelectric energy harvester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yuan; Gan, Ruyi, E-mail: 2471390146@qq.com; Wan, Shalang

    This paper presents a novel piezoelectric energy harvester, which is a MEMS-based device. This piezoelectric energy harvester uses a bifurcate shape. The derivation of the mathematical modeling is based on the Euler-Bernoulli beam theory, and the main mechanical and electrical parameters of this energy harvester are analyzed and simulated. The experiment result shows that the maximum output voltage can achieve 3.3 V under an acceleration of 1 g at a frequency of 292.11 Hz, and the output power can be up to 0.155 mW under a load of 0.4 MΩ. The power density is calculated as 496.79 μW mm⁻³. In addition, the device demonstrates efficient output power and voltage and adapts well to practical vibration environments. This energy harvester could be used for low-power electronic devices.

  16. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same through web-browser interfaces. The summer school will serve as a valuable testbed for the tool development, preparing CMDA to serve its target community: Earth-science modeling and model-analysis community.
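
    As a minimal illustration of the wrapping approach (not the actual CMDA code), the sketch below exposes a placeholder analysis routine as a web service with Flask; the endpoint, query parameters, and compute_seasonal_mean function are hypothetical stand-ins for the wrapped science application.

        # Minimal sketch of turning an existing analysis routine into a web service.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def compute_seasonal_mean(dataset, variable, season):
            # placeholder for the wrapped science application code
            return {"dataset": dataset, "variable": variable, "season": season, "mean": 0.0}

        @app.route("/seasonal_mean")
        def seasonal_mean():
            # query parameters select the dataset, variable, and season to analyze
            result = compute_seasonal_mean(
                request.args.get("dataset", "Obs4MIPs"),
                request.args.get("variable", "ts"),
                request.args.get("season", "DJF"),
            )
            return jsonify(result)

        if __name__ == "__main__":
            app.run(port=8080)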

  17. An Application of Multiplier Analysis in Analyzing the Role of Mining Sectors on Indonesian National Economy

    NASA Astrophysics Data System (ADS)

    Subanti, S.; Hakim, A. R.; Hakim, I. M.

    2018-03-01

    The purpose of the current study is to analyze multipliers for the mining sector in Indonesia. The mining sectors are defined as coal and metal; crude oil, natural gas, and geothermal; and other mining and quarrying. The multiplier analysis is based on input-output analysis and is divided into income multipliers and output multipliers. The results show that (1) the Indonesian mining sectors ranked 6th, contributing 6.81% of national total output; (2) based on total gross value added, the sector ranked 4th with a contribution of 12.13%; (3) the income multiplier is 0.7062 and the output multiplier is 1.2426.
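
    As an illustration of the multiplier calculation, the sketch below derives output and income multipliers from the Leontief inverse of a tiny hypothetical three-sector coefficient matrix; the matrix and wage coefficients are invented for illustration and are not the Indonesian IO table.

        # Output and income multipliers from an input-output table.
        import numpy as np

        A = np.array([[0.10, 0.05, 0.02],    # technical coefficients a_ij
                      [0.20, 0.15, 0.10],
                      [0.05, 0.10, 0.08]])
        w = np.array([0.30, 0.25, 0.40])     # household income per unit output

        L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
        output_multipliers = L.sum(axis=0)   # column sums: total output per unit final demand
        income_multipliers = w @ L           # income generated per unit final demand

        print(output_multipliers, income_multipliers)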

  18. Toward an in-situ analytics and diagnostics framework for earth system models

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen

    2017-04-01

    The development roadmaps for many earth system models (ESM) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry, and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive valuable scientific results. We are already at this threshold with our current generation of ESMs at higher resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. If we look at the high frequency I/O requirements, it would require an additional 1 GB / simulated hour, translating to roughly 4 mins wallclock / simulated-day => 24.33 wallclock hours / simulated-model-year => 1,752,000 core-hours of charge per simulated-model-year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more volume of simulation output. Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations could be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP) and integrate these to allow for extreme performance IO, in situ workflow integration, science-driven analytics and visualization all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also need to consider the velocity of the data. The earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.
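
    The quoted I/O cost can be reproduced with a short back-of-the-envelope calculation; the 72,000-core allocation used below is an assumed figure chosen so the yearly charge lands near the quoted 1.75 million core-hours and is not stated in the abstract.

        # Back-of-the-envelope check of the I/O figures: ~1 GB of high-frequency
        # output per simulated hour written at ~100 MB/s.
        gb_per_sim_hour = 1.0
        io_rate_mb_per_s = 100.0

        io_seconds_per_sim_day = gb_per_sim_hour * 24 * 1024 / io_rate_mb_per_s
        io_minutes_per_sim_day = io_seconds_per_sim_day / 60                # ~4 min
        wallclock_hours_per_sim_year = io_minutes_per_sim_day * 365 / 60    # ~24-25 h

        assumed_cores = 72000    # hypothetical allocation size, not from the abstract
        core_hours_per_sim_year = wallclock_hours_per_sim_year * assumed_cores
        print(round(io_minutes_per_sim_day, 1),
              round(wallclock_hours_per_sim_year, 1),
              round(core_hours_per_sim_year))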

  19. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    NASA Astrophysics Data System (ADS)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.

    Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with a high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005 for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin valued currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation’s efforts in nuclear power, be it for electricity or nuclear weapons, are an enterprise with a large economic footprint, an effort so large that it should discernibly perturb coarse country-level economics data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country’s or region’s economy in terms of the requirements of industries to produce the current level of economic output. An IO table row shows the distribution of an industry’s output to the industrial sectors while a table column shows the input required of each industrial sector by a given industry.
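
    The classification step can be sketched as follows with scikit-learn; the flattened 48 x 48 table cells and labels below are synthetic stand-ins for the OECD data, so the numbers mean nothing beyond illustrating the workflow.

        # Random Forest classification of flattened IO tables (synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_tables, n_sectors = 93, 48
        X = rng.lognormal(size=(n_tables, n_sectors * n_sectors))   # flattened 48x48 IO cells
        y = rng.integers(0, 2, size=n_tables)                       # 1 = nuclear program, 0 = none

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())

        # Feature importances point to the small subset of IO cells driving the split.
        clf.fit(X, y)
        top_cells = np.argsort(clf.feature_importances_)[-10:]
        print(top_cells)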

  1. Potential of energy harvesting in barium titanate based laminates from room temperature to cryogenic/high temperatures: measurements and linking phase field and finite element simulations

    NASA Astrophysics Data System (ADS)

    Narita, Fumio; Fox, Marina; Mori, Kotaro; Takeuchi, Hiroki; Kobayashi, Takuya; Omote, Kenji

    2017-11-01

    This paper studies the energy harvesting characteristics of piezoelectric laminates consisting of barium titanate (BaTiO3) and copper (Cu) from room temperature to cryogenic/high temperatures both experimentally and numerically. First, the output voltages of the piezoelectric BaTiO3/Cu laminates were measured from room temperature to a cryogenic temperature (77 K). The output power was evaluated for various values of load resistance. The results showed that the maximum output power density is approximately 2240 nW cm-3. The output voltages of the BaTiO3/Cu laminates were also measured from room temperature to a higher temperature (333 K). To discuss the output voltages of the BaTiO3/Cu laminates due to temperature changes, phase field and finite element simulations were combined. A phase field model for grain growth was used to generate grain structures. The phase field model was then employed for BaTiO3 polycrystals, coupled with the time-dependent Ginzburg-Landau theory and the oxygen vacancies diffusion, to calculate the temperature-dependent piezoelectric coefficient and permittivity. Using these properties, the output voltages of the BaTiO3/Cu laminates from room temperature to both 77 K and 333 K were analyzed by three dimensional finite element methods, and the results are presented for several grain sizes and oxygen vacancy densities. It was found that electricity in the BaTiO3 ceramic layer is generated not only through the piezoelectric effect caused by a thermally induced bending stress but also by the temperature dependence of the BaTiO3 piezoelectric coefficient and permittivity.

  2. Grouping Influences Output Interference in Short-term Memory: A Mixture Modeling Study.

    PubMed

    Kang, Min-Suk; Oh, Byung-Il

    2016-01-01

    Output interference is a source of forgetting induced by recalling. We investigated how grouping influences output interference in short-term memory. In Experiment 1, the participants were asked to remember four colored items. Those items were grouped by temporal coincidence as well as spatial alignment: two items were presented in the first memory array and two were presented in the second, and the items in both arrays were either vertically or horizontally aligned as well. The participants then performed two recall tasks in sequence by selecting a color presented at a cued location from a color wheel. In the same-group condition, the participants reported both items from the same memory array; however, in the different-group condition, the participants reported one item from each memory array. We analyzed participant responses with a mixture model, which yielded two measures: guess rate and precision of recalled memories. The guess rate in the second recall was higher for the different-group condition than for the same-group condition; however, the memory precisions obtained for both conditions were similarly degraded in the second recall. In Experiment 2, we varied the probability of the same- and different-group conditions with a ratio of 3 to 7. We expected output interference to be higher in the same-group condition than in the different-group condition. This is because items of the other group are more likely to be probed in the second recall phase and, thus, protecting those items during the first recall phase leads to a better performance. Nevertheless, the same pattern of results was robustly reproduced, suggesting grouping shields the grouped items from output interference because of the secured accessibility. We discussed how grouping influences output interference.
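
    A minimal sketch of the standard uniform-plus-von-Mises mixture fit (not the authors' code) that yields the guess-rate and precision measures is shown below, using synthetic recall errors on a circular color space.

        # Two-component mixture model for continuous recall: a uniform "guess"
        # component with rate g and a von Mises "memory" component whose
        # concentration kappa measures precision; fitted by maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import vonmises

        rng = np.random.default_rng(1)
        errors = np.concatenate([vonmises.rvs(8.0, size=160),            # remembered trials
                                 rng.uniform(-np.pi, np.pi, size=40)])   # guesses

        def neg_log_likelihood(params):
            g, kappa = params
            like = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.sum(np.log(like))

        fit = minimize(neg_log_likelihood, x0=[0.2, 5.0],
                       bounds=[(1e-3, 0.999), (0.1, 100.0)])
        guess_rate, precision = fit.x
        print(guess_rate, precision)   # compare across recall order and grouping condition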

  3. Dynamic modeling and characteristics analysis of a modal-independent linear ultrasonic motor.

    PubMed

    Li, Xiang; Yao, Zhiyuan; Zhou, Shengli; Lv, Qibao; Liu, Zhen

    2016-12-01

    In this paper, an integrated model is developed to analyze the fundamental characteristics of a modal-independent linear ultrasonic motor with double piezoelectric vibrators. The energy method is used to model the dynamics of the two piezoelectric vibrators. The interface forces are coupled into the dynamic equations of the two vibrators and the moving platform, forming a whole machine model of the motor. The behavior of the force transmission of the motor is analyzed via the resulting model to understand the drive mechanism. In particular, the relative contact length is proposed to describe the intermittent contact characteristic between the stator and the mover, and its role in evaluating motor performance is discussed. The relations between the output speed and various inputs to the motor and the start-stop transients of the motor are analyzed by numerical simulations, which are validated by experiments. Furthermore, the dead-zone behavior is predicted and clarified analytically using the proposed model, which is also observed in experiments. These results are useful for designing a servo control scheme for the motor. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Machine learning of frustrated classical spin models. I. Principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
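
    The benchmark workflow can be sketched in a few lines: feed raw spin configurations (as cosines and sines of the spin angles) to principal component analysis and inspect the leading components across temperature. The configurations below are random stand-ins, not Monte Carlo samples of the XY model.

        # Principal component analysis of planar spin configurations.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_samples, n_sites = 2000, 36 * 36
        theta = rng.uniform(0, 2 * np.pi, size=(n_samples, n_sites))
        X = np.hstack([np.cos(theta), np.sin(theta)])   # feed (cos, sin) of each spin angle

        pca = PCA(n_components=4)
        scores = pca.fit_transform(X)                   # leading-component scores per sample
        print(pca.explained_variance_ratio_)            # jumps in leading components flag ordering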

  5. Theoretical and experimental analysis of injection seeding a Q-switched alexandrite laser

    NASA Technical Reports Server (NTRS)

    Prasad, C. R.; Lee, H. S.; Glesne, T. R.; Monosmith, B.; Schwemmer, G. K.

    1991-01-01

    Injection seeding is a method for achieving linewidths of less than 500 MHz in the output of broadband, tunable, solid state lasers. Dye lasers, CW and pulsed diode lasers, and other solid state lasers have been used as injection seeders. By optimizing the fundamental laser parameters of pump energy, Q-switched pulse build-up time, injection seed power and mode matching, one can achieve significant improvements in the spectral purity of the Q-switched output. These parameters are incorporated into a simple model for analyzing spectral purity and pulse build-up processes in a Q-switched, injection-seeded laser. Experiments to optimize the relevant parameters of an alexandrite laser show good agreement.

  6. Quality engineering tools focused on high power LED driver design using boost power stages in switch mode

    NASA Astrophysics Data System (ADS)

    Ileana, Ioan; Risteiu, Mircea; Marc, Gheorghe

    2016-12-01

    This paper is part of our research dedicated to the design of high-power LED lamps. The selected boost converter topology is intended to meet driver manufacturers' requirements for efficiency and disturbance constraints. In our work we used modeling and simulation tools to implement scenarios of driver operation while several control functions are executed (output voltage/current versus input voltage at a fixed switching frequency, input and output electric power transfer versus switching frequency, transient inductor voltage analysis, and transient output capacitor analysis). Some electrical and thermal stress conditions are also analyzed. Based on these aspects, a highly reliable power LED driver has been designed.

  7. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    NASA Astrophysics Data System (ADS)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

    We propose a statistical modeling method of wind power output for very short-term prediction. The modeling method with a nonlinear model has cascade structure composed of two parts. One is a linear dynamic part that is driven by a Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match with that of the wind power output. The constructed model is utilized for one-step ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
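
    The cascade idea can be sketched as follows, under the assumption that a Box-Cox map serves as the static, output-distribution-matching stage and a low-order autoregression as the linear dynamic stage driven by white noise; the series below is synthetic, not the authors' wind power records.

        # Cascade model: Box-Cox static transform + autoregressive dynamic part,
        # used for one-step-ahead prediction of a non-negative, skewed series.
        import numpy as np
        from scipy.stats import boxcox
        from scipy.special import inv_boxcox
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(0)
        power = np.abs(np.cumsum(rng.normal(size=500))) + 1.0   # stand-in wind power output

        z, lam = boxcox(power)                 # static nonlinearity (lambda estimated from data)
        ar_fit = AutoReg(z, lags=3).fit()      # linear dynamic part driven by Gaussian noise

        z_next = ar_fit.predict(start=len(z), end=len(z))[0]    # one-step-ahead prediction
        power_next = inv_boxcox(z_next, lam)   # back to the physical power scale
        print(power_next)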

  8. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    NASA Astrophysics Data System (ADS)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
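
    A hypothetical sketch of the PatchDB write path is shown below, using the Python Cassandra driver; the keyspace, table, columns, and cluster address are illustrative assumptions, not the actual RHESSys/PatchDB schema.

        # Asynchronous inserts of per-patch model output into an Apache Cassandra table.
        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])
        session = cluster.connect("rhessys")   # hypothetical keyspace name

        insert = session.prepare(
            "INSERT INTO patch_output (patch_id, ts, variable, value) VALUES (?, ?, ?, ?)"
        )

        def write_patch_value(patch_id, ts, variable, value):
            # fire-and-forget asynchronous insert, mimicking the message-queue consumer
            session.execute_async(insert, (patch_id, ts, variable, value))

        write_patch_value(42, "2010-06-01T00:00:00", "streamflow", 0.37)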

  9. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    Scheme for digital processing of video signals in acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. Spectrum analyzer described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092), related apparatus described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). Essence of correction is to average over digitized outputs of pixels in each CCD row and to subtract this average from the digitized output of each pixel in that row. Signal processed electro-optically with reference-function signals to form two-dimensional spectral image in CCD camera.
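
    The row-averaging correction can be illustrated in a few lines of numpy (synthetic CCD frame, not the flight code):

        # Subtract each CCD row's mean from every pixel in that row to remove
        # signal-dependent spectral bias.
        import numpy as np

        frame = np.random.default_rng(0).normal(100.0, 5.0, size=(256, 512))  # digitized CCD counts
        row_bias = frame.mean(axis=1, keepdims=True)   # per-row average
        corrected = frame - row_bias                   # bias-corrected spectral image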

  10. Theory of fiber-optic, evanescent-wave spectroscopy and sensors

    NASA Astrophysics Data System (ADS)

    Messica, A.; Greenstein, A.; Katzir, A.

    1996-05-01

    A general theory for fiber-optic, evanescent-wave spectroscopy and sensors is presented for straight, uncladded, step-index, multimode fibers. A three-dimensional model is formulated within the framework of geometric optics. The model includes various launching conditions, input and output end-face Fresnel transmission losses, multiple Fresnel reflections, bulk absorption, and evanescent-wave absorption. An evanescent-wave sensor response is analyzed as a function of externally controlled parameters such as coupling angle, f number, fiber length, and diameter. Conclusions are drawn for several experimental apparatuses.

  11. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM 2 simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives and an 8087 coprocessor chip. Using a time phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle) and ground facility databases, estimates for ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  12. On dynamics in a Keynesian model of monetary stabilization policy with debt effect

    NASA Astrophysics Data System (ADS)

    Asada, Toichiro; Demetrian, Michal; Zimka, Rudolf

    2018-05-01

    In this paper, a four-dimensional model of flexible prices with the central bank's stabilization policy, describing the development of the firms' private debt, the output, the expected rate of inflation, and the rate of interest, is analyzed. Questions concerning the existence of limit cycles around its normal equilibrium point are investigated. The bifurcation equation is found. The formulae for the calculation of its coefficients are derived. A numerical example is presented by means of numerical simulations.

  13. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.

  14. Asia-MIP: Multi Model-data Synthesis of Terrestrial Carbon Cycles in Asia

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Ito, A.; Kang, M.; Sasai, T.; SATO, H.; Ueyama, M.; Kobayashi, H.; Saigusa, N.; Kim, J.

    2013-12-01

    Asia, which is characterized by a monsoon climate and intense human activities, is one of the prominent understudied regions in terms of terrestrial carbon budgets and mechanisms of carbon exchange. To better understand the terrestrial carbon cycle in Asia, we initiated a multi-model and data intercomparison project in Asia (Asia-MIP). We analyzed outputs from multiple approaches: satellite-based observations (AVHRR and MODIS) and related products, empirically upscaled estimations (Support Vector Regression) using the eddy-covariance observation network in Asia (AsiaFlux, CarboEastAsia, FLUXNET), ~10 terrestrial biosphere models (e.g. BEAMS, Biome-BGC, LPJ, SEIB-DGVM, TRIFFID, VISIT models), and atmospheric inversion analysis (e.g. TransCom models). We focused on two temporal coverages: long-term (30 years; 1982-2011) and decadal (10 years; 2001-2010; data intensive period) scales. The region covering Siberia, Far East Asia, East Asia, Southeast Asia, and South Asia (60-80E, 10S-80N) was analyzed in this study for assessing the magnitudes, interannual variability, and key driving factors of carbon cycles. We will report the progress of this synthesis effort to quantify the terrestrial carbon budget in Asia. First, we analyzed the recent trends in Gross Primary Productivity (GPP) using satellite-based observations (AVHRR) and multiple terrestrial biosphere models. We found that both model outputs and satellite-based observations consistently show an increasing trend in GPP in most of the regions in Asia. Mechanisms of the GPP increase were analyzed using models, and changes in temperature and precipitation play dominant roles in GPP increase in boreal and temperate regions, whereas changes in atmospheric CO2 and precipitation are important in tropical regions. However, their relative contributions were different. Second, in the decadal analysis (2001-2010), we found that the negative GPP and carbon uptake anomalies in the summer of 2003 in Far East Asia are among the largest anomalies, with high consistency among methods, over the 2001-2010 period. The model analysis showed that these anomalies were produced by different climate factors among the models. Therefore, we conclude that inconsistency of model sensitivity to meteorological anomalies is an important issue to be improved in the future. Acknowledgement: The study is financially supported by the Environment Research and Technology Development Fund (RFa-1201) of the Ministry of the Environment of Japan and JSPS KAKENHI Grant Number 25281003.

  15. Elemental analysis using temporal gating of a pulsed neutron generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitra, Sudeep

    Technologies related to determining elemental composition of a sample that comprises fissile material are described herein. In a general embodiment, a pulsed neutron generator periodically emits bursts of neutrons, and is synchronized with an analyzer circuit. The bursts of neutrons are used to interrogate the sample, and the sample outputs gamma rays based upon the neutrons impacting the sample. A detector outputs pulses based upon the gamma rays impinging upon the material of the detector, and the analyzer circuit assigns the pulses to temporally-based bins based upon the analyzer circuit being synchronized with the pulsed neutron generator. A computing device outputs data that is indicative of elemental composition of the sample based upon the binned pulses.

  16. Methodologies for optimal resource allocation to the national space program and new space utilizations. Volume 1: Technical description

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in a realistic perspective, including the advantage gained from existing space program plant and ongoing programs such as the space transportation system.

  17. Asymmetry in Signal Oscillations Contributes to Efficiency of Periodic Systems.

    PubMed

    Bae, Seul-A; Acevedo, Alison; Androulakis, Ioannis P

    2016-01-01

    Oscillations are an important feature of cellular signaling that result from complex combinations of positive- and negative-feedback loops. The encoding and decoding mechanisms of oscillations based on amplitude and frequency have been extensively discussed in the literature in the context of intercellular and intracellular signaling. However, the fundamental questions of whether and how oscillatory signals offer any competitive advantages, and if so what those advantages are, have not been fully answered. We investigated established oscillatory mechanisms and designed a study to analyze the oscillatory characteristics of signaling molecules and system output in an effort to answer these questions. Two classic oscillators, Goodwin and PER, were selected as the model systems, and corresponding no-feedback models were created for each oscillator to discover the advantage of oscillating signals. Through simulating the original oscillators and the matching no-feedback models, we show that oscillating systems have the capability to achieve better resource-to-output efficiency, and we identify oscillatory characteristics that lead to improved efficiency.

  18. Channel characterization and empirical model for ergodic capacity of free-space optical communication link

    NASA Astrophysics Data System (ADS)

    Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António

    2017-05-01

    In this paper, we present experimental results on channel characterization of a single input single output (SISO) free-space optical (FSO) communication link, based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining ergodic channel capacity of multiple input multiple output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link have good fits with log-normal distributions and the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).

  19. The effect of output-input isolation on the scaling and energy consumption of all-spin logic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jiaxi; Haratipour, Nazila; Koester, Steven J., E-mail: skoester@umn.edu

    All-spin logic (ASL) is a novel approach for digital logic applications wherein spin is used as the state variable instead of charge. One of the challenges in realizing a practical ASL system is the need to ensure non-reciprocity, meaning the information flows from input to output, not vice versa. One approach described previously, is to introduce an asymmetric ground contact, and while this approach was shown to be effective, it remains unclear as to the optimal approach for achieving non-reciprocity in ASL. In this study, we quantitatively analyze techniques to achieve non-reciprocity in ASL devices, and we specifically compare the effect of using asymmetric ground position and dipole-coupled output/input isolation. For this analysis, we simulate the switching dynamics of multiple-stage logic devices with FePt and FePd perpendicular magnetic anisotropy materials using a combination of a matrix-based spin circuit model coupled to the Landau–Lifshitz–Gilbert equation. The dipole field is included in this model and can act as both a desirable means of coupling magnets and a source of noise. The dynamic energy consumption has been calculated for these schemes, as a function of input/output magnet separation, and the results show that using a scheme that electrically isolates logic stages produces superior non-reciprocity, thus allowing both improved scaling and reduced energy consumption.

  20. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    PubMed

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making.
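
    For readers unfamiliar with the embodied-energy calculation behind these figures, the sketch below propagates direct energy coefficients through the Leontief inverse of a tiny hypothetical three-sector economy; the numbers are invented for illustration and are not the Beijing tables.

        # Embodied (direct plus indirect) energy from an input-output table.
        import numpy as np

        A = np.array([[0.15, 0.10, 0.05],     # technical coefficients
                      [0.10, 0.20, 0.10],
                      [0.05, 0.05, 0.10]])
        e_direct = np.array([0.8, 0.3, 0.1])  # direct energy use per unit output
        final_demand = np.array([100.0, 200.0, 300.0])

        L = np.linalg.inv(np.eye(3) - A)
        e_total = e_direct @ L                # embodied energy intensities per unit final demand
        embodied = e_total * final_demand     # embodied energy by final-demand sector

        # Indirect (supply-chain) share of total embodied energy.
        indirect_share = 1 - (e_direct * final_demand).sum() / embodied.sum()
        print(embodied, indirect_share)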

  1. Input-Output Modeling for Urban Energy Consumption in Beijing: Dynamics and Comparison

    PubMed Central

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making. PMID:24595199

  2. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  3. Comparison of P&O and INC Methods in Maximum Power Point Tracker for PV Systems

    NASA Astrophysics Data System (ADS)

    Chen, Hesheng; Cui, Yuanhui; Zhao, Yue; Wang, Zhisen

    2018-03-01

    In the context of renewable energy, the maximum power point tracker (MPPT) is often used to increase the solar power efficiency, taking into account the randomness and volatility of solar energy due to changes in temperature and irradiance. Among all MPPT techniques, perturb & observe and incremental conductance are the most widely used in MPPT controllers because of their simplicity and ease of operation. According to the internal structure of the photovoltaic cell and the output volt-ampere characteristic, this paper establishes the circuit model and builds the dynamic simulation model in Matlab/Simulink using an S-function. The perturb & observe MPPT method and the incremental conductance MPPT method were analyzed and compared through theoretical analysis and digital simulation. The simulation results show that the system with the INC MPPT method has better dynamic performance and improves the output power of photovoltaic power generation.
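
    A compact sketch of the perturb & observe loop compared in the paper is given below; the PV curve is a toy stand-in for the Simulink model, and the step size and voltages are illustrative.

        # Perturb & observe MPPT: perturb the operating voltage, observe the change
        # in power, and keep moving in the direction that increases power.
        def pv_power(v):
            # toy PV curve with a single maximum near 27 V (not a real module model)
            i = max(0.0, 8.0 * (1.0 - (v / 36.0) ** 7))
            return v * i

        def perturb_and_observe(v0=20.0, step=0.5, iterations=60):
            v, p_prev = v0, pv_power(v0)
            direction = +1
            for _ in range(iterations):
                v += direction * step          # perturb
                p = pv_power(v)                # observe
                if p < p_prev:                 # power fell: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v, p_prev

        print(perturb_and_observe())           # settles (and oscillates) near the MPP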

  4. A Global Repository for Planet-Sized Experiments and Observations

    NASA Technical Reports Server (NTRS)

    Williams, Dean; Balaji, V.; Cinquini, Luca; Denvil, Sebastien; Duffy, Daniel; Evans, Ben; Ferraro, Robert D.; Hansen, Rose; Lautenschlager, Michael; Trenham, Claire

    2016-01-01

    Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) allows users to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP) output used by the Intergovernmental Panel on Climate Change assessment reports. Data served by ESGF not only include model output (i.e., CMIP simulation runs) but also include observational data from satellites and instruments, reanalyses, and generated images. Metadata summarize basic information about the data for fast and easy data discovery.

  5. Analysis of information systems for hydropower operations

    NASA Technical Reports Server (NTRS)

    Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W. W. G.

    1976-01-01

    The operations of hydropower systems were analyzed with emphasis on water resource management, to determine how aerospace derived information system technologies can increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept outlined.

  6. A parametric investigation on a solar dish-Stirling system

    NASA Astrophysics Data System (ADS)

    Gholamalizadeh, Ehsan; Chung, Jae Dong

    2018-06-01

    The aim of this study is to analyze the performance of a solar dish-Stirling system. A mathematical model for the overall thermal efficiency of the solar-powered high-temperature-differential dish-Stirling engine is described. This model takes into account pressure losses due to fluid friction which is internal to the engine, mechanical friction between the moving parts, actual heat transfer includes the effects of both internal and external irreversibilities of the cycle and finite regeneration processes time. Validation was done through comparison with the actual power output of the "EuroDish" system. Moreover, the effects of dish diameter and working fluid on the performance of the system were studied. An increase of about 7.2% was observed for the power output using hydrogen as the working fluid rather than helium. Also, the focal distance for any diameter of dish was calculated.

  7. Evaluation of the performance of a passive-active vibration isolation system

    NASA Astrophysics Data System (ADS)

    Sun, L. L.; Hansen, C. H.; Doolan, C.

    2015-01-01

    The behavior of a feedforward active isolation system subjected to actuator output constraints is investigated. Distributed parameter models are developed to analyze the system response, and to produce a transfer matrix for the design of an integrated passive-active isolation system. Cost functions considered here comprise a combination of the vibration transmission energy and the sum of the squared control forces. The example system considered is a rigid body connected to a simply supported plate via two isolation mounts. The overall isolation performance is evaluated by numerical simulation. The results show that the control strategies which rely on unconstrained actuator outputs may give substantial power transmission reductions over a wide frequency range, but also require large control force amplitudes to control excited vibration modes of the system. Expected power transmission reductions for modified control strategies that incorporate constrained actuator outputs are considerably less than typical reductions with unconstrained actuator outputs. The active system with constrained control force outputs is shown to be more effective at the resonance frequencies of the supporting plate. However, in the frequency range in which rigid body modes are present, the control strategies employed using constrained actuator outputs can only achieve 5-10 dB power transmission reduction, while at off-resonance frequencies, little or no power transmission reduction can be obtained with realistic control forces. Analysis of the wave effects in the passive mounts is also presented.

  8. Mid-Piacensian mean annual sea surface temperature: an analysis for data-model comparisons

    USGS Publications Warehouse

    Dowsett, Harry J.; Robinson, Marci M.; Foley, Kevin M.; Stoll, Danielle K.

    2010-01-01

    Numerical models of the global climate system are the primary tools used to understand and project climate disruptions in the form of future global warming. The Pliocene has been identified as the closest, albeit imperfect, analog to climate conditions expected for the end of this century, making an independent data set of Pliocene conditions necessary for ground truthing model results. Because most climate model output is produced in the form of mean annual conditions, we present a derivative of the USGS PRISM3 Global Climate Reconstruction which integrates multiple proxies of sea surface temperature (SST) into single surface temperature anomalies. We analyze temperature estimates from faunal and floral assemblage data, Mg/Ca values and alkenone unsaturation indices to arrive at a single mean annual SST anomaly (Pliocene minus modern) best describing each PRISM site, understanding that multiple proxies should not necessarily show concordance. The power of the multiple proxy approach lies within its diversity, as no two proxies measure the same environmental variable. This data set can be used to verify climate model output, to serve as a starting point for model inter-comparisons, and for quantifying uncertainty in Pliocene model prediction in perturbed physics ensembles.

  9. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
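
    The Dice similarity coefficient quoted above is straightforward to compute from binary masks; a short illustrative sketch (synthetic masks, not the patient data) follows.

        # Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
        import numpy as np

        def dice(mask_a, mask_b):
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / (a.sum() + b.sum())

        algo = np.zeros((64, 64), dtype=bool); algo[20:40, 20:40] = True   # algorithm mask
        ref = np.zeros((64, 64), dtype=bool);  ref[22:42, 22:42] = True    # reference mask
        print(dice(algo, ref))   # ~0.8, the same range as the overlaps reported above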

  10. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by a LDS and use the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  11. Semen molecular and cellular features: these parameters can reliably predict subsequent ART outcome in a goat model

    PubMed Central

    Berlinguer, Fiammetta; Madeddu, Manuela; Pasciu, Valeria; Succu, Sara; Spezzigu, Antonio; Satta, Valentina; Mereu, Paolo; Leoni, Giovanni G; Naitana, Salvatore

    2009-01-01

    Currently, the assessment of sperm function in a raw or processed semen sample is not able to reliably predict sperm ability to withstand freezing and thawing procedures and in vivo fertility and/or assisted reproductive biotechnologies (ART) outcome. The aim of the present study was to investigate which parameters among a battery of analyses could predict subsequent spermatozoa in vitro fertilization ability and hence blastocyst output in a goat model. Ejaculates were obtained by artificial vagina from 3 adult goats (Capra hircus) aged 2 years (A, B and C). In order to assess the predictive value of viability, computer assisted sperm analyzer (CASA) motility parameters and ATP intracellular concentration before and after thawing and of DNA integrity after thawing on subsequent embryo output after an in vitro fertility test, a logistic regression analysis was used. Individual differences in semen parameters were evident for semen viability after thawing and DNA integrity. Results of the IVF test showed that spermatozoa collected from A and B led to higher cleavage rates (p < 0.01) and blastocyst output (p < 0.05) compared with C. The logistic regression model explained a deviance of 72% (p < 0.0001), directly related to the mean percentage of rapid spermatozoa in fresh semen (p < 0.01), semen viability after thawing (p < 0.01), and two of the three comet parameters considered, i.e., tail DNA percentage and comet length (p < 0.0001). DNA integrity alone had a high predictive value on IVF outcome with frozen/thawed semen (deviance explained: 57%). The model proposed here represents one of the many possible ways to explain differences found in embryo output following IVF with different semen donors and may represent a useful tool to select the most suitable donors for semen cryopreservation. PMID:19900288

  12. Chemistry of Volatile Organic Compounds in the Los Angeles Basin: Formation of Oxygenated Compounds and Determination of Emission Ratios

    NASA Astrophysics Data System (ADS)

    de Gouw, J. A.; Gilman, J. B.; Kim, S.-W.; Alvarez, S. L.; Dusanter, S.; Graus, M.; Griffith, S. M.; Isaacman-VanWertz, G.; Kuster, W. C.; Lefer, B. L.; Lerner, B. M.; McDonald, B. C.; Rappenglück, B.; Roberts, J. M.; Stevens, P. S.; Stutz, J.; Thalman, R.; Veres, P. R.; Volkamer, R.; Warneke, C.; Washenfelder, R. A.; Young, C. J.

    2018-02-01

    We analyze an expanded data set of oxygenated volatile organic compounds (OVOCs) in air measured by several instruments at a surface site in Pasadena near Los Angeles during the National Oceanic and Atmospheric Administration California Nexus study in 2010. The contributions of emissions, chemical formation, and removal are quantified for each OVOC using CO as a tracer of emissions and the OH exposure of the sampled air masses calculated from hydrocarbon ratios. The method for separating emissions from chemical formation is evaluated using output for Pasadena from the Weather Research and Forecasting-Chemistry model. The model is analyzed by the same method as the measurement data, and the emission ratios versus CO calculated from the model output agree with the inventory used in the model for ketones but overestimate those for aldehydes by 70%. In contrast with the measurements, nighttime formation of OVOCs is significant in the model and is attributed to overestimated precursor emissions and overestimated rate coefficients for the reactions of the precursors with ozone and NO3. Most measured aldehydes correlated strongly with CO at night, suggesting a contribution from motor vehicle emissions. However, the emission ratios of most aldehydes versus CO are higher than those reported in motor vehicle emissions and the aldehyde sources remain unclear. Formation of several OVOCs is investigated in terms of the removal of specific precursors. Direct emissions of alcohols and aldehydes contribute significantly to OH reactivity throughout the day, and these emissions should be accurately represented in models describing ozone formation.

  13. An effective drift correction for dynamical downscaling of decadal global climate predictions

    NASA Astrophysics Data System (ADS)

    Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen

    2018-04-01

    Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem remains for multivariate applications and, especially, when the output of the global prediction system is to be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.
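
    As a rough illustration of the idea of removing a leading mode from a multivariate field (not the authors' EOF implementation, which is adapted to a very large data matrix), one can take an SVD of the anomaly matrix and reconstruct the data without the first mode:

```python
import numpy as np

def remove_leading_mode(data: np.ndarray, n_remove: int = 1) -> np.ndarray:
    """Remove the leading EOF mode(s) from a (time x space) data matrix."""
    mean = data.mean(axis=0, keepdims=True)
    anomalies = data - mean
    # Economy-size SVD: U holds principal components, Vt the EOF patterns.
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:n_remove] = 0.0            # zero out the drift mode(s)
    return (u * s_filtered) @ vt + mean

# Hypothetical example: 120 monthly fields flattened to 500 grid points,
# contaminated by a spatially coherent trend.
rng = np.random.default_rng(0)
field = rng.standard_normal((120, 500)) + np.linspace(0.0, 3.0, 120)[:, None]
print(remove_leading_mode(field).shape)
```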

  14. Measuring Efficiency of Knowledge Production in Health Research Centers Using Data Envelopment Analysis (DEA): A Case Study in Iran

    PubMed Central

    Amiri, Mohammad Meskarpour; Nasiri, Taha; Saadat, Seyed Hassan; Anabad, Hosein Amini; Ardakan, Payman Mahboobi

    2016-01-01

    Introduction: Efficiency analysis is necessary in order to avoid waste of materials, energy, effort, money, and time during scientific research. Therefore, analyzing the efficiency of knowledge production in health areas is necessary, especially for developing and in-transition countries. As a first step in this field, the aim of this study was to analyze the efficiency of selected health research centers using data envelopment analysis (DEA). Methods: This retrospective and applied study was conducted in 2015 using input and output data of 16 health research centers affiliated with a health sciences university in Iran during 2010–2014. The technical efficiency of the health research centers was evaluated based on three basic DEA models: input-oriented, output-oriented, and hyperbolic-oriented. The input and output data of each health research center for the years 2010–2014 were collected from the Iran Ministry of Health and Medical Education (MOHE) profile and analyzed with R software. Results: The mean efficiency score in the input-oriented, output-oriented, and hyperbolic-oriented models was 0.781, 0.671, and 0.798, respectively. Based on the results of the study, half of the health research centers are operating below full efficiency, and about one-third of them are operating under the average efficiency level. There is also a large gap in efficiency among the health research centers. Conclusion: It is necessary for health research centers to improve their efficiency in knowledge production through better management of available resources. A higher level of efficiency in a significant number of health research centers is achievable through more efficient management of human resources and capital. Further research is needed to measure and follow the efficiency of knowledge production by health research centers around the world and over a period of time. PMID:28344756
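
    For readers unfamiliar with DEA, the sketch below poses an input-oriented, constant-returns-to-scale efficiency score for a single decision-making unit as a linear program. It is illustrative only; the study itself used R and also output-oriented and hyperbolic-oriented models, and the input/output data here are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(inputs: np.ndarray, outputs: np.ndarray, unit: int) -> float:
    """CRS input-oriented DEA efficiency of one DMU.

    inputs:  (n_dmus, n_inputs) matrix
    outputs: (n_dmus, n_outputs) matrix
    """
    n, m = inputs.shape
    _, s = outputs.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):       # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(s):       # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[unit, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return float(res.x[0])

# Hypothetical centers: inputs = (staff, budget), output = (publications,).
x = np.array([[5.0, 10.0], [8.0, 12.0], [4.0, 6.0]])
y = np.array([[20.0], [25.0], [10.0]])
print([round(dea_input_oriented(x, y, j), 3) for j in range(len(x))])
```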

  15. Modeling and control of non-square MIMO system using relay feedback.

    PubMed

    Kalpana, D; Thyagarajan, T; Gokulraj, N

    2015-11-01

    This paper proposes a systematic approach for the modeling and control of non-square MIMO systems in the time domain using relay feedback. Conventionally, modeling, selection of the control configuration, and controller design of non-square MIMO systems are performed using input/output information of the direct loops, while the undesired responses, which bear valuable information on the interaction among the loops, are not considered. However, in this paper, the undesired response obtained from the relay feedback test is also taken into consideration to extract information about the interaction between the loops. The studies are performed on an Air Path Scheme of Turbocharged Diesel Engine (APSTDE) model, which is a typical non-square MIMO system with three input variables and two output variables. From the relay test response, generalized analytical expressions are derived; these analytical expressions are used to estimate unknown system parameters and also to evaluate interaction measures. The interaction is analyzed by using the Block Relative Gain (BRG) method. The model thus identified is later used to design an appropriate controller to carry out closed-loop studies. Closed-loop simulation studies were performed for both servo and regulatory operations. The Integral of Squared Error (ISE) performance criterion is employed to quantitatively evaluate the performance of the proposed scheme. The usefulness of the proposed method is demonstrated on a lab-scale Two-Tank Cylindrical Interacting System (TTCIS), which is configured as a non-square system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
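
    The ISE criterion mentioned above is the time integral of the squared deviation between setpoint and controlled output. A minimal sketch with a hypothetical closed-loop response (not the paper's simulation):

```python
import numpy as np

def integral_squared_error(t: np.ndarray, setpoint: np.ndarray, output: np.ndarray) -> float:
    """Integral of Squared Error (ISE) evaluated by trapezoidal integration."""
    e2 = (setpoint - output) ** 2
    return float(np.sum(0.5 * (e2[:-1] + e2[1:]) * np.diff(t)))

# Hypothetical closed-loop step response settling toward a unit setpoint.
t = np.linspace(0.0, 20.0, 2001)
y = 1.0 - np.exp(-0.6 * t) * np.cos(1.5 * t)
print(f"ISE = {integral_squared_error(t, np.ones_like(t), y):.4f}")
```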

  16. Assessing economic impacts to coastal recreation and tourism from oil and gas development in the Oregon and Washington Outer Continental Shelf. Inventory and evaluation of Washington and Oregon coastal recreation resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, G.M.; Johnson, N.S.; Chapman, D.

    The purpose of the three-part study was to assist Minerals Management Service (MMS) planners in evaluation of the anticipated social impact of proposed oil and gas development on the environment. The purpose of the report is primarily to analyze the econometric models of the Dornbusch study. The authors examine, in detail, key aspects of the gravity, consumer surplus, and economic effects (input-output) models. The purpose is two-fold. First, the authors evaluate the performance of the model in satisfying the objective for which it was developed: analyzing economic impacts of OCS oil and gas development in California. Second, the authors evaluate the applicability of the modeling approach employed in the Dornbusch study for analyzing potential OCS development impacts in Washington and Oregon. At the end of the report, the authors offer suggestions for any future study of economic impacts of OCS development in Washington and Oregon. The recommendations concern future data gathering procedures and alternative modeling approaches for measuring economic impacts.

  17. General Circulation Model Output for Forest Climate Change Research and Applications

    Treesearch

    Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi

    1993-01-01

    This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.

  18. A wideband, high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Quirk, M. P.; Wilck, H. C.; Garyantes, M. F.; Grimm, M. J.

    1988-01-01

    A two-million-channel, 40 MHz bandwidth, digital spectrum analyzer under development at the Jet Propulsion Laboratory is described. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple signal detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis.
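
    The processing chain described above (digitize, transform, accumulate power, normalize gain, detect) can be illustrated at toy scale, here with a 2^14-point transform instead of 2^21 and hypothetical signal and threshold parameters:

```python
import numpy as np

def analyze(samples: np.ndarray, n_fft: int, gain: np.ndarray, threshold_sigma: float = 5.0):
    """Toy version of the analyzer chain: block DFTs, power accumulation,
    gain normalization, and a simple threshold detector."""
    n_blocks = len(samples) // n_fft
    power = np.zeros(n_fft)
    for k in range(n_blocks):                       # accumulate output power over blocks
        spectrum = np.fft.fft(samples[k * n_fft:(k + 1) * n_fft])
        power += np.abs(spectrum) ** 2
    power /= gain                                   # remove frequency-dependent gain
    noise_floor = np.median(power)
    detections = np.flatnonzero(power > noise_floor + threshold_sigma * power.std())
    return power, detections

# Hypothetical input: white noise plus a weak CW tone, 8 blocks of 2**14 samples.
n_fft = 2 ** 14
rng = np.random.default_rng(0)
t = np.arange(8 * n_fft)
x = rng.standard_normal(8 * n_fft) + 0.05 * np.cos(2 * np.pi * 0.123 * t)
_, hits = analyze(x, n_fft, gain=np.ones(n_fft))
print("detected bins:", hits)
```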

  19. A wide-band high-resolution spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Quirk, Maureen P.; Garyantes, Michael F.; Wilck, Helmut C.; Grimm, Michael J.

    1988-01-01

    A two-million-channel, 40 MHz bandwidth, digital spectrum analyzer under development at the Jet Propulsion Laboratory is described. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis.

  20. A wide-band high-resolution spectrum analyzer.

    PubMed

    Quirk, M P; Garyantes, M F; Wilck, H C; Grimm, M J

    1988-12-01

    This paper describes a two-million-channel, 40-MHz-bandwidth digital spectrum analyzer under development at the Jet Propulsion Laboratory. The analyzer system will serve as a prototype processor for the sky survey portion of NASA's Search for Extraterrestrial Intelligence program and for other applications in the Deep Space Network. The analyzer digitizes an analog input, performs a 2^21-point Discrete Fourier Transform, accumulates the output power, normalizes the output to remove frequency-dependent gain, and automates simple signal detection algorithms. Due to its built-in frequency-domain processing functions and configuration flexibility, the analyzer is a very powerful tool for real-time signal analysis and detection.

  1. Timber net value and physical output changes following wildfire in the northern Rocky Mountains: estimates for specific fire situations

    Treesearch

    Patrick J. Flowers; Patricia B. Shinkle; Daria A. Cain; Thomas J. Mills

    1985-01-01

    In the last decade, the fire management program of the Forest Service, U.S. Department of Agriculture, has come under closer scrutiny because of ever-rising program costs. The Forest Service has responded by conducting several studies analyzing the economic efficiency of its fire management program. Some components of the analytical models have been difficult to...

  2. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  3. Moving-window dynamic optimization: design of stimulation profiles for walking.

    PubMed

    Dosen, Strahinja; Popović, Dejan B

    2009-05-01

    The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of the tracking errors from desired trajectories with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing the potential hemiplegic user. The outputs of the simulation are piecewise-constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
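
    The optimization criterion described above, squared tracking errors plus a penalty on total muscle effort, can be written as a small cost function. The sketch below uses assumed variable shapes and a hypothetical effort weight; it is not the authors' MATLAB implementation:

```python
import numpy as np

def mwdo_cost(trajectory: np.ndarray, desired: np.ndarray,
              stimulation: np.ndarray, effort_weight: float = 0.01) -> float:
    """Cost = sum of squared tracking errors + penalty on total muscle effort.

    trajectory, desired: (n_steps, n_joints) joint trajectories
    stimulation:         (n_steps, n_muscles) piecewise-constant intensities
    """
    tracking_term = np.sum((trajectory - desired) ** 2)
    effort_term = effort_weight * np.sum(stimulation ** 2)
    return float(tracking_term + effort_term)

# Hypothetical window evaluation: 50 time steps, 2 joints, 4 muscles.
rng = np.random.default_rng(1)
desired = np.sin(np.linspace(0, np.pi, 50))[:, None] * np.ones((50, 2))
simulated = desired + 0.05 * rng.standard_normal((50, 2))
stim = rng.uniform(0.0, 1.0, (50, 4))
print(f"cost = {mwdo_cost(simulated, desired, stim):.3f}")
```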

  4. Poster — Thur Eve — 43: Monte Carlo Modeling of Flattening Filter Free Beams and Studies of Relative Output Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.

    2014-08-15

    Flattening filter free (FFF) beams have been adopted by many clinics and used for patient treatment. However, compared to the traditional flattened beams, we have limited knowledge of FFF beams. In this study, we successfully modeled the 6 MV FFF beam of the Varian TrueBeam accelerator with the Monte Carlo (MC) method. Both the percentage depth dose and the profiles match the Golden Beam Data (GBD) from Varian well. MC simulations were then performed to predict the relative output factors. The in-water output ratio, Scp, was simulated in a water phantom and the data obtained agree well with the GBD. The in-air output ratio, Sc, was obtained by analyzing the phase space placed at the isocenter, in air, and computing the ratio of water kerma rates for different field sizes. The phantom scattering factor, Sp, can then be obtained in the traditional way, by taking the ratio of Scp and Sc. We also simulated Sp using a recently proposed method based on only the primary beam dose delivery in a water phantom. Because there is no concern of lateral electronic disequilibrium, this method is more suitable for small fields. The results from both methods agree well with each other. The flattened 6 MV beam was simulated and compared to the 6 MV FFF beam. The comparison confirms that the 6 MV FFF beam has less scattering from the linac head and less phantom scattering contribution to the central axis dose, which will be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems.

  5. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the proposed filter output for detecting faults in the gearbox. The filter parameters are estimated by using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of the simulated gear faults. In addition, the method is used for quality inspection of the produced Nissan-Junior vehicle gearbox by gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with the previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the newly proposed fault detection method.

  6. A numerical model on thermodynamic analysis of free piston Stirling engines

    NASA Astrophysics Data System (ADS)

    Mou, Jian; Hong, Guotong

    2017-02-01

    In this paper, a new numerical thermodynamic model based on the energy conservation law has been used to analyze the free piston Stirling engine. In the model, all data were taken from a real free piston Stirling engine built in our laboratory. The energy conservation equations have been applied to the expansion space and the compression space of the engine. The equations include internal energy, input power, output power, enthalpy, and the heat losses. The heat losses include the regenerative heat conduction loss, shuttle heat loss, seal leakage loss, and the cavity wall heat conduction loss. The numerical results show that the temperatures of the expansion space and the compression space vary with time. The higher the regeneration effectiveness, the higher the efficiency and the larger the output work. It is also found that, under different initial pressures, the heat source temperature, phase angle, and engine work frequency have different effects on the engine’s efficiency and power. As a result, the model is expected to be a useful tool for the simulation, design, and optimization of Stirling engines.

  7. Methods for Quantitative Interpretation of Retarding Field Analyzer Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.

    2011-03-28

    Over the course of the CesrTA program at Cornell, over 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best fit values for important simulation parameters with a chi-square minimization method.
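
    A generic illustration of the workflow described above, postprocess simulated detector output and then chi-square minimize over simulation parameters, is sketched below with a hypothetical two-parameter model and made-up measurements (this is not the CesrTA postprocessing code):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured RFA collector currents at several retarding voltages.
retarding_v = np.array([0.0, 20.0, 50.0, 100.0, 200.0])
measured = np.array([1.00, 0.62, 0.35, 0.18, 0.05])
sigma = 0.03 * np.ones_like(measured)

def simulated_rfa(params, v):
    """Stand-in for postprocessed cloud-simulation output: a two-parameter
    exponential energy spectrum integrated above the retarding voltage."""
    amplitude, e_scale = params
    return amplitude * np.exp(-v / e_scale)

def chi_square(params):
    residual = (measured - simulated_rfa(params, retarding_v)) / sigma
    return np.sum(residual ** 2)

best = minimize(chi_square, x0=[1.0, 50.0], method="Nelder-Mead")
print("best-fit parameters:", best.x, "chi2 =", round(best.fun, 2))
```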

  8. Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.

    PubMed

    Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin

    2016-09-20

    Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder is capable of providing the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position non-continuous rotation gyro north finding scheme. Our new generalized mathematical model analyzes the elements that affect the azimuth measurement precision and can thus provide high precision azimuth reference information. Based on the gyro's principle of detecting a projection of the earth rotation rate on its sensitive axis and the proposed north finding scheme, we are able to deduce an accurate mathematical model of the gyro outputs against azimuth with the gyro and shaft misalignments. Combining the gyro output model and the theory of propagation of uncertainty, some approaches to optimize north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Accordingly, a north finder setup was built and an azimuth uncertainty of 18" was obtained. This paper provides systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high precision, robust north finders based on different types of rate gyroscopes.
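
    The multi-position principle, in which a rate gyro at rotation angle theta_i senses the horizontal projection of the earth rotation rate plus a bias, can be illustrated with a linear least-squares azimuth fit. The snippet below uses assumed symbols and omits the misalignment terms of the paper's full model:

```python
import numpy as np

OMEGA_E = 7.292115e-5          # earth rotation rate, rad/s
LATITUDE = np.deg2rad(45.0)    # assumed site latitude

def north_azimuth(theta: np.ndarray, gyro_rate: np.ndarray) -> float:
    """Least-squares azimuth estimate from multi-position gyro readings.

    Model: w_i = K*cos(A + theta_i) + bias, with K = OMEGA_E*cos(latitude).
    Expanding the cosine makes the model linear in (K*cos(A), -K*sin(A), bias).
    """
    design = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    p, q, _bias = np.linalg.lstsq(design, gyro_rate, rcond=None)[0]
    return np.arctan2(-q, p)   # azimuth A of the gyro axis with respect to north

# Hypothetical test: true azimuth 30 deg, small bias and measurement noise.
rng = np.random.default_rng(2)
theta = np.deg2rad(np.arange(0, 360, 45))
true_a = np.deg2rad(30.0)
k = OMEGA_E * np.cos(LATITUDE)
rates = k * np.cos(true_a + theta) + 2e-8 + 1e-9 * rng.standard_normal(theta.size)
print(f"estimated azimuth: {np.rad2deg(north_azimuth(theta, rates)):.3f} deg")
```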

  9. Influence of thermal deformation in cavity mirrors on beam propagation characteristics of high-power slab lasers

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Xiao, Longsheng; Wang, Wei; Wu, Chao; Tang, Xiahui

    2018-01-01

    Owing to their good diffusion cooling and low sensitivity to misalignment, slab-shape negative-branch unstable-waveguide resonators are widely used for high-power lasers in industry. As the output beam of the resonator is astigmatic, an external beam shaping system is required. However, the transverse dimension of the cavity mirrors in the resonator is large. For long-time operation, the heating of the cavity mirrors can be non-uniform. This results in micro-deformation and a change in the radius of curvature of the cavity mirrors, and leads to an output beam whose optical axis is offset from that of the resonator. It was found that a change in the radius of curvature of 0.1% (1 mm) caused by thermal deformation generates a transverse displacement of 1.65 mm at the spatial filter of the external beam shaping system, and an output power loss of more than 80%. This can potentially burn out the spatial filter. In order to analyze the effect of the offset optical axis of the beam on the external optical path, we analyzed the transverse displacement and rotational misalignments of the spatial filter. For instance, if the transverse displacement was 0.3 mm, the loss in the output power was 9.6% and a sidelobe appeared in the unstable direction. If the angle of rotation was 5°, the loss in the output power was 2%, and the poles were in the direction of the waveguide. Based on these results, the deviation angle of the output beam of the resonator cavity was corrected by adjusting the bending mirror, in order to obtain maximum output power and optimal beam quality. Finally, the propagation characteristics of the corrected output beam were analyzed.

  10. (abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, A. E.

    1994-01-01

    Self consistent circuit analog thermal models, that can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
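
    The circuit-analog idea, temperature nodes connected by conduction and radiation paths and marched in time, can be sketched outside a spreadsheet as well. The explicit-Euler toy below uses assumed (hypothetical) parameter values for a single cryogen-cooled stage:

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def cooldown(t_stage0=300.0, t_bath=77.0, c_stage=500.0,
             g_cond=0.05, rad_area=0.1, emissivity=0.05,
             dt=1.0, hours=24.0):
    """Explicit-Euler cooldown of one lumped stage cooled by a cryogen bath.

    Heat flows: conduction to the bath, G*(T - T_bath), and radiation
    absorbed from a 300 K shroud (temperature dependent, as in the models).
    """
    steps = int(hours * 3600 / dt)
    t = t_stage0
    history = np.empty(steps)
    for i in range(steps):
        q_cond = g_cond * (t - t_bath)                              # W, to the bath
        q_rad = emissivity * SIGMA * rad_area * (300.0**4 - t**4)   # W, from shroud
        t += dt * (q_rad - q_cond) / c_stage                        # lumped heat capacity
        history[i] = t
    return history

temps = cooldown()
print(f"stage temperature after 24 h: {temps[-1]:.1f} K")
```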

  11. Simple Spreadsheet Thermal Models for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Nash, Alfred

    1995-01-01

    Self consistent circuit analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and actual performance of this facility will be presented.

  12. Risk factors for loss of residual renal function in children treated with chronic peritoneal dialysis.

    PubMed

    Ha, Il-Soo; Yap, Hui K; Munarriz, Reyner L; Zambrano, Pedro H; Flynn, Joseph T; Bilge, Ilmay; Szczepanska, Maria; Lai, Wai-Ming; Antonio, Zenaida L; Gulati, Ashima; Hooman, Nakysa; van Hoeck, Koen; Higuita, Lina M S; Verrina, Enrico; Klaus, Günter; Fischbach, Michel; Riyami, Mohammed A; Sahpazova, Emilja; Sander, Anja; Warady, Bradley A; Schaefer, Franz

    2015-09-01

    In dialyzed patients, preservation of residual renal function is associated with better survival, lower morbidity, and greater quality of life. To analyze the evolution of residual diuresis over time, we prospectively monitored urine output in 401 pediatric patients in the global IPPN registry who commenced peritoneal dialysis (PD) with significant residual renal function. Associations of patient characteristics and time-variant covariates with daily urine output and the risk of developing oligoanuria (under 100 ml/m(2)/day) were analyzed by mixed linear modeling and Cox regression analysis including time-varying covariates. With an average loss of daily urine volume of 130 ml/m(2) per year, median time to oligoanuria was 48 months. Residual diuresis subsided significantly more rapidly in children with glomerulopathies, lower diuresis at the start of PD, high ultrafiltration volume, and icodextrin use. Administration of diuretics significantly reduced oligoanuria risk, whereas the prescription of renin-angiotensin system antagonists significantly increased the risk of oligoanuria. Urine output on PD was significantly associated in a negative manner with glomerulopathies (-584 ml/m(2)) and marginally with the use of icodextrin (-179 ml/m(2)) but positively associated with the use of biocompatible PD fluid (+111 ml/m(2)). Children in both Asia and North America had consistently lower urine output compared with those in Europe, perhaps due to regional variances in therapy. Thus, in children undergoing PD, residual renal function depends strongly on the cause of the underlying kidney disease and may be modifiable by diuretic therapy, peritoneal ultrafiltration, and choice of PD fluid.

  13. Risk factors for loss of residual renal function in children treated with chronic peritoneal dialysis

    PubMed Central

    Ha, Il-Soo; Yap, Hui K; Munarriz, Reyner L; Zambrano, Pedro H; Flynn, Joseph T; Bilge, Ilmay; Szczepanska, Maria; Lai, Wai-Ming; Antonio, Zenaida L; Gulati, Ashima; Hooman, Nakysa; van Hoeck, Koen; Higuita, Lina M S; Verrina, Enrico; Klaus, Günter; Fischbach, Michel; Riyami, Mohammed A; Sahpazova, Emilja; Sander, Anja; Warady, Bradley A; Schaefer, Franz

    2015-01-01

    In dialyzed patients, preservation of residual renal function is associated with better survival, lower morbidity, and greater quality of life. To analyze the evolution of residual diuresis over time, we prospectively monitored urine output in 401 pediatric patients in the global IPPN registry who commenced peritoneal dialysis (PD) with significant residual renal function. Associations of patient characteristics and time-variant covariates with daily urine output and the risk of developing oligoanuria (under 100 ml/m2/day) were analyzed by mixed linear modeling and Cox regression analysis including time-varying covariates. With an average loss of daily urine volume of 130 ml/m2 per year, median time to oligoanuria was 48 months. Residual diuresis subsided significantly more rapidly in children with glomerulopathies, lower diuresis at the start of PD, high ultrafiltration volume, and icodextrin use. Administration of diuretics significantly reduced oligoanuria risk, whereas the prescription of renin–angiotensin system antagonists significantly increased the risk of oligoanuria. Urine output on PD was significantly associated in a negative manner with glomerulopathies (−584 ml/m2) and marginally with the use of icodextrin (−179 ml/m2) but positively associated with the use of biocompatible PD fluid (+111 ml/m2). Children in both Asia and North America had consistently lower urine output compared with those in Europe, perhaps due to regional variances in therapy. Thus, in children undergoing PD, residual renal function depends strongly on the cause of the underlying kidney disease and may be modifiable by diuretic therapy, peritoneal ultrafiltration, and choice of PD fluid. PMID:25874598

  14. Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)

    DOE Data Explorer

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.

    2015-01-01

    These datasets comprise the model output from phase 1 of the FACE-MDS. They include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are the modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.

  15. Effectiveness of a passive-active vibration isolation system with actuator constraints

    NASA Astrophysics Data System (ADS)

    Sun, Lingling; Sun, Wei; Song, Kongjie; Hansen, Colin H.

    2014-05-01

    In the prediction of active vibration isolation performance, control force requirements were ignored in previous work. This may limit the realization of theoretically predicted isolation performance if control forces of large magnitude cannot be supplied by the actuators. The behavior of a feed-forward active isolation system subjected to actuator output constraints is investigated. Distributed parameter models are developed to analyze the system response, and to produce a transfer matrix for the design of an integrated passive-active isolation system. Cost functions comprising a combination of the vibration transmission energy and the sum of the squared control forces are proposed. The example system considered is a rigid body connected to a simply supported plate via two passive-active isolation mounts. Vertical and transverse forces as well as a rotational moment are applied at the rigid body, and resonances excited in the elastic mounts and the supporting plate are analyzed. The overall isolation performance is evaluated by numerical simulation. The simulation results are then compared with those obtained using unconstrained control strategies. In addition, the effects of waves in the elastic mounts are analyzed. It is shown that control strategies which rely on unconstrained actuator outputs may give substantial power transmission reductions over a wide frequency range, but also require large control force amplitudes to control the excited vibration modes of the system. Expected power transmission reductions for modified control strategies that incorporate constrained actuator outputs are considerably less than the typical reductions with unconstrained actuator outputs. In the frequency range in which rigid body modes are present, the control strategies can only achieve 5-10 dB of power transmission reduction when the control forces are constrained to be of the same order of magnitude as the primary vertical force. The resonances of the elastic mounts result in a notable increase of power transmission in the high-frequency range and cannot be attenuated by active control. The investigation provides a guideline for the design and evaluation of active vibration isolation systems.

  16. Selective current collecting design for spring-type energy harvesters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dongjin; Roh, Hee Seok; Kim, Yeontae

    2015-01-01

    Here we present a high performance spring-type piezoelectric energy harvester that selectively collects current from the inner part of a spring shell. We analyzed the main reason behind the low efficiency of the initial design using finite element models and proposed a selective current collecting design that can considerably improve the electrical conversion efficiency of the energy harvester. We found that the newly designed energy harvester increases the output voltage by 8 times leading to an output power of 2.21 mW under an impulsive load of 2.18 N when compared with the conventional design. We envision that selective current collecting design will be used in spring-based self-powered active sensors and energy scavenging devices.

  17. Study on the optimization allocation of wind-solar in power system based on multi-region production simulation

    NASA Astrophysics Data System (ADS)

    Xu, Zhicheng; Yuan, Bo; Zhang, Fuqiang

    2018-06-01

    In this paper, a power supply optimization model is proposed. The model takes minimum fossil energy consumption as its objective, considering the output characteristics of both conventional and renewable power supplies. The optimal capacity ratio of wind and solar power in the power supply under various constraints is calculated, and the interrelation between conventional power supplies and renewable energy in a system with a high proportion of renewable energy integration is analyzed. Using the model, we can provide scientific guidance for the coordinated and orderly development of renewable energy and conventional power sources.

  18. Vibration analysis and experiment of giant magnetostrictive force sensor

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwen; Liu, Fang; Zhu, Xingqiao; Wang, Haibo; Xu, Jia

    2017-12-01

    In this paper, a kind of giant magnetostrictive force sensor is proposed, and its magneto-mechanical coupled model is developed. The relationship between the output voltage of the giant magnetostrictive force sensor and the input excitation force is obtained. The phenomena of accuracy degradation at high frequency and of delay in the giant magnetostrictive sensor are explained. The experimental results show that the model can describe the actual response of the giant magnetostrictive force sensor. The new model of the giant magnetostrictive sensor has a simple form and is easy to analyze in theory, which is helpful for its application in measurement and control fields.

  19. Assessing Ecosystem Model Performance in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
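
    The benchmark statistics mentioned above, root mean square error and correlation against observed NEE, are straightforward to compute once model output and flux-tower observations are aligned; a minimal sketch with hypothetical time series:

```python
import numpy as np

def benchmark_stats(observed: np.ndarray, modeled: np.ndarray):
    """RMSE and Pearson correlation between observed and modeled NEE."""
    rmse = float(np.sqrt(np.mean((modeled - observed) ** 2)))
    corr = float(np.corrcoef(observed, modeled)[0, 1])
    return rmse, corr

# Hypothetical daily NEE (umol CO2 m-2 s-1): observations vs. a biased model.
rng = np.random.default_rng(3)
days = np.arange(365)
obs = -2.0 * np.sin(2 * np.pi * days / 365) + 0.5 * rng.standard_normal(365)
mod = -1.0 * np.sin(2 * np.pi * (days - 40) / 365) + 1.0   # weak, lagged, source-biased
rmse, corr = benchmark_stats(obs, mod)
print(f"RMSE = {rmse:.2f}, r = {corr:.2f}")
```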

  20. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved some good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, that is, that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.

  1. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing the resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  2. Performance Analysis of the Automotive TEG with Respect to the Geometry of the Modules

    NASA Astrophysics Data System (ADS)

    Yu, C. G.; Zheng, S. J.; Deng, Y. D.; Su, C. Q.; Wang, Y. P.

    2017-05-01

    Recently there has been increasing interest in applying thermoelectric technology to recover waste heat in automotive exhaust gas. Due to the limited space in the vehicle, it is meaningful to improve the TEG (thermoelectric generator) performance by optimizing the module geometry. This paper analyzes the performance of bismuth telluride modules for two criteria (power density and power output per area) and investigates the relationship between the performance and the geometry of the modules. A geometry factor is defined for the thermoelectric element to describe the module geometry, and a mathematical model is set up to study the effects of the module geometry on its performance. It was found that the optimal geometry factors for maximum output power, power density, and power output per unit area are different, and that the values of the optimal geometry factors are affected by the volume of the thermoelectric material and the thermal input. The results can serve as a basis for optimizing the performance of thermoelectric modules.

  3. CMOS output buffer wave shaper

    NASA Technical Reports Server (NTRS)

    Albertson, L.; Whitaker, S.; Merrell, R.

    1990-01-01

    As the switching speeds and densities of digital CMOS integrated circuits continue to increase, output switching noise becomes more of a problem. A design technique which aids in the reduction of switching noise is reported. The output driver stage is analyzed through the use of an equivalent RLC circuit. The results of the analysis are used in the design of an output driver stage. A test circuit based on these techniques is being submitted to MOSIS for fabrication.
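
    The equivalent-RLC view of the output driver determines how much the output rings: the natural frequency and damping ratio follow directly from the parasitics. A sketch with assumed (hypothetical) package values:

```python
import numpy as np

def rlc_ringing(r_ohm: float, l_henry: float, c_farad: float):
    """Natural frequency and damping ratio of a series-RLC output stage model."""
    omega0 = 1.0 / np.sqrt(l_henry * c_farad)           # rad/s
    zeta = (r_ohm / 2.0) * np.sqrt(c_farad / l_henry)   # dimensionless
    return omega0 / (2 * np.pi), zeta

# Hypothetical bond-wire/package parasitics: 5 nH, 10 pF load, 10 ohm driver.
f0, zeta = rlc_ringing(10.0, 5e-9, 10e-12)
print(f"f0 = {f0 / 1e6:.1f} MHz, damping ratio = {zeta:.2f}")
# zeta < 1 indicates an underdamped output that will ring after switching.
```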

  4. ALEC (Aggregate Lifecycle Effectiveness and Cost): A Model for Analyzing the Cost-Effectiveness of Air Force Enlisted Personnel Policies. Volume 2. Documentation and User’s Guide.

    DTIC Science & Technology

    1987-08-01


  5. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    PubMed

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.

  6. A first approach to the distortion analysis of nonlinear analog circuits utilizing X-parameters

    NASA Astrophysics Data System (ADS)

    Weber, H.; Widemann, C.; Mathis, W.

    2013-07-01

    In this contribution, a first approach to the distortion analysis of nonlinear 2-port networks with X-parameters (a registered trademark of Agilent Technologies, Inc.) is presented. The X-parameters introduced by Verspecht and Root (2006) offer the possibility of describing nonlinear microwave 2-port networks under large-signal conditions. On the basis of X-parameter measurements with a nonlinear network analyzer (NVNA), behavioral models can be extracted for the networks. These models can be used to consider the nonlinear behavior during the design process of microwave circuits. The idea of the present work is to extract the behavioral models in order to describe the influence of interfering signals on the output behavior of the nonlinear circuits. Here, a simulator is used instead of an NVNA to extract the X-parameters. Assuming that the interfering signals are relatively small compared to the nominal input signal, the output signal can be described as a superposition of the effects of each input signal. In order to determine the functional correlation between the scattering variables, a polynomial dependency is assumed. The required datasets for the approximation of the describing functions are simulated by a directional coupler model in Cadence Design Framework. The polynomial coefficients are obtained by a least-squares method. The resulting describing functions can be used to predict the system's behavior under certain conditions as well as the effects of the interfering signal on the output signal.
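
    The "assume a polynomial dependency, fit by least squares" step described above can be sketched with hypothetical simulated scattering-variable data (this is not the Cadence/NVNA workflow itself):

```python
import numpy as np

# Hypothetical dataset: output wave b2 sampled versus incident wave amplitude a1.
rng = np.random.default_rng(4)
a1 = np.linspace(0.0, 1.0, 40)
b2_true = 0.8 * a1 - 0.15 * a1 ** 3                 # weakly compressive response
b2_meas = b2_true + 0.005 * rng.standard_normal(a1.size)

# Fit an odd-order polynomial describing function: b2 ~ c1*a1 + c3*a1^3.
design = np.column_stack([a1, a1 ** 3])
coeffs, *_ = np.linalg.lstsq(design, b2_meas, rcond=None)
print("fitted coefficients:", np.round(coeffs, 3))

# The fitted describing function can then predict the output at new drive levels.
a1_new = np.array([0.25, 0.5, 0.75])
pred = np.column_stack([a1_new, a1_new ** 3]) @ coeffs
print("predicted b2:", np.round(pred, 3))
```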

  7. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  8. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.

  9. A heterogenous Cournot duopoly with delay dynamics: Hopf bifurcations and stability switching curves

    NASA Astrophysics Data System (ADS)

    Pecora, Nicolò; Sodini, Mauro

    2018-05-01

    This article considers a Cournot duopoly model in a continuous-time framework and analyzes its dynamic behavior when the competitors are heterogeneous in determining their output decisions. Specifically, the model is expressed in the form of differential equations with discrete delays. The stability conditions of the unique Nash equilibrium of the system are determined and the emergence of Hopf bifurcations is shown. Applying some recent mathematical techniques (stability switching curves) and performing numerical simulations, the paper confirms how different time delays affect the stability of the economy.
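
    A generic form of such a heterogeneous duopoly with discrete delays, a sketch of the model class rather than necessarily the exact equations of the article, is the pair of delay differential equations

```latex
\dot{q}_1(t) = k_1 \left[ R_1\!\left(q_2(t-\tau_2)\right) - q_1(t) \right], \qquad
\dot{q}_2(t) = k_2 \left[ R_2\!\left(q_1(t-\tau_1)\right) - q_2(t) \right]
```

    where the R_i are the firms' adjustment or best-reply rules and the tau_i are information delays; the stability switching curves trace how the Nash equilibrium loses stability through a Hopf bifurcation as (tau_1, tau_2) vary.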

  10. Modeling and Dynamic Analysis of Paralleled dc/dc Converters With Master-Slave Current Sharing Control

    NASA Technical Reports Server (NTRS)

    Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce

    1996-01-01

    A simple, application-oriented, transfer function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics into current-controlled current sources. Five distinct loop gains to assess system stability and performance are identified and their physical significance is described. A design methodology for the current share compensator is presented. The effect of this current sharing scheme on 'system output impedance' is analyzed.

  11. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real-time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  12. Development of an advanced system identification technique for comparing ADAMS analytical results with modal test data for a MICON 65/13 wind turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialasiewicz, J.T.

    1995-07-01

    This work uses the theory developed in NREL/TP--442-7110 to analyze simulated data from an ADAMS (Automated Dynamic Analysis of Mechanical Systems) model of the MICON 65/13 wind turbine. The Observer/Kalman Filter identification approach is expanded to use input-output time histories from ADAMS simulations or structural test data. A step-by-step outline is offered on how the tools developed in this research can be used for validation of the ADAMS model.

  13. Integrated built-in-test false and missed alarms reduction based on forward infinite impulse response & recurrent finite impulse response dynamic neural networks

    NASA Astrophysics Data System (ADS)

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2017-11-01

    Built-in tests (BITs) are widely used in mechanical systems to perform state identification, but BIT false and missed alarms make it difficult for operators or beneficiaries to reach correct judgments. Artificial neural networks (ANNs), which offer features such as self-organization and self-learning, have previously been used for false and missed alarm identification. However, these ANN models generally do not incorporate the temporal effect of the bottom-level threshold comparison outputs, and the historical temporal features are not fully considered. To improve the situation, this paper proposes a new integrated BIT design methodology by incorporating a novel type of dynamic neural network (DNN) model. The new DNN model is termed the Forward IIR & Recurrent FIR DNN (FIRF-DNN), and its component neurons, network structures, and input/output relationships are discussed. The condition monitoring false and missed alarm reduction implementation scheme based on the FIRF-DNN model is also illustrated, which is composed of three stages: model training, false and missed alarm detection, and false and missed alarm suppression. Finally, the proposed methodology is demonstrated in an application study and the experimental results are analyzed.

  14. High-resolution Doppler model of the human gait

    NASA Astrophysics Data System (ADS)

    Geisheimer, Jonathan L.; Greneker, Eugene F., III; Marshall, William S.

    2002-07-01

    A high-resolution Doppler model of the walking human was developed for analyzing the continuous wave (CW) radar gait signature. Data for twenty subjects were collected simultaneously using an infrared motion capture system along with a two-channel 10.525 GHz CW radar. The motion capture system recorded three-dimensional coordinates of infrared markers placed on the body. These body marker coordinates were used as inputs to create the theoretical Doppler output using a model constructed in MATLAB. The outputs of the model are the simulated Doppler signals due to each of the major limbs and the thorax. An estimated radar cross section for each part of the body was assigned using the Lund & Browder chart of estimated body surface area. The resultant Doppler model was then compared with the actual recorded Doppler gait signature in the frequency domain using the spectrogram. Comparison of the two sets of data revealed several identifiable biomechanical features in the radar gait signature due to leg and body motion. The results show that a wealth of information can be unlocked from the radar gait signature, which may be useful in security and biometric applications.
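
    The record above describes summing radar-cross-section-weighted returns from body segments and comparing spectrograms. The sketch below is a minimal, self-contained illustration of that idea in Python, not the authors' MATLAB model: the scatterer list, weights, gait parameters and sampling rate are all assumed values chosen only to produce a plausible micro-Doppler signature.

        import numpy as np
        from scipy.signal import spectrogram

        # Minimal sketch: sum of point-scatterer returns for a CW radar at
        # 10.525 GHz, using synthetic "marker" range histories.
        c = 3e8
        fc = 10.525e9            # carrier frequency (Hz)
        lam = c / fc             # wavelength (m)
        fs = 2000.0              # sample rate of the simulated baseband signal (Hz)
        t = np.arange(0, 4.0, 1.0 / fs)

        # Hypothetical scatterers: (weight, range offset, oscillation amp, gait freq);
        # the weights stand in for Lund & Browder body-surface-area fractions.
        scatterers = [
            (0.36, 5.0, 0.05, 0.9),   # torso
            (0.18, 5.0, 0.60, 0.9),   # leg
            (0.09, 5.0, 0.30, 1.8),   # arm
        ]

        walk_speed = 1.3  # m/s toward the radar
        sig = np.zeros_like(t, dtype=complex)
        for w, r0, amp, f_gait in scatterers:
            # Range history: bulk approach plus a periodic limb/torso oscillation
            r = r0 - walk_speed * t + amp * np.sin(2 * np.pi * f_gait * t)
            # CW return: phase is 4*pi*r/lambda; Doppler is its time derivative
            sig += np.sqrt(w) * np.exp(-1j * 4 * np.pi * r / lam)

        # Micro-Doppler signature via the spectrogram, as in the paper's comparison
        f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192,
                                 return_onesided=False)
        print("spectrogram shape:", Sxx.shape)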

  15. Development of the Complex General Linear Model in the Fourier Domain: Application to fMRI Multiple Input-Output Evoked Responses for Single Subjects

    PubMed Central

    Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281
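
    The abstract's key idea is a voxel-specific, nonparametric estimate of the hemodynamic transfer function in the Fourier domain. A minimal sketch of such an estimate, using synthetic data rather than BOLD time series and the common cross-spectrum ratio H(f) = S_xy(f)/S_xx(f), might look as follows; the kernel shape, sampling rate and noise level are assumptions for illustration only.

        import numpy as np
        from scipy.signal import csd, welch

        # Nonparametric transfer-function estimate in the Fourier domain,
        # H(f) = S_xy(f) / S_xx(f), on synthetic stimulus/response data.
        rng = np.random.default_rng(0)
        fs = 0.5                      # assumed sampling rate (Hz), e.g. TR = 2 s
        n = 1024
        x = (rng.random(n) < 0.05).astype(float)          # sparse stimulus input
        # Toy "hemodynamic" impulse response (gamma-like kernel), plus noise
        kern = np.exp(-np.arange(30) / 4.0) * (np.arange(30) / 4.0) ** 2
        y = np.convolve(x, kern)[:n] + 0.1 * rng.standard_normal(n)

        f, Sxy = csd(x, y, fs=fs, nperseg=256)            # cross-spectrum
        _, Sxx = welch(x, fs=fs, nperseg=256)             # input power spectrum
        H = Sxy / Sxx                                     # transfer-function estimate
        print(H[:5])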

  16. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.

  17. Output power fluctuations due to different weights of macro particles used in particle-in-cell simulations of Cerenkov devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Rong; Li, Yongdong; Liu, Chunliang

    2016-07-15

    The output power fluctuations caused by the weights of macro particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons that have passed through a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in a PIC simulation with a relatively small weight of macro particles is taken as an initial distribution. By analyzing this initial distribution with a statistical method, estimates of the output power fluctuations caused by different weights of macro particles are obtained. The statistical method is verified by comparing these estimates with the simulation results. The fluctuations become stronger with increasing weight of macro particles, which can also be determined inversely from the estimates of the output power fluctuations. With the weights of macro particles optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.

  18. Changing R&D models in research-based pharmaceutical companies.

    PubMed

    Schuhmacher, Alexander; Gassmann, Oliver; Hinder, Markus

    2016-04-27

    New drugs serving unmet medical needs are one of the key value drivers of research-based pharmaceutical companies. The efficiency of research and development (R&D), defined as the successful approval and launch of new medicines (output) relative to the monetary investment required for R&D (input), has been declining for decades. We aimed to identify, analyze and describe the factors that impact R&D efficiency. Based on publicly available information, we reviewed the R&D models of major research-based pharmaceutical companies and analyzed the key challenges and success factors of a sustainable R&D output. We calculated that the R&D efficiencies of major research-based pharmaceutical companies were in the range of USD 3.2-32.3 billion (2006-2014). As these numbers challenge the model of an innovation-driven pharmaceutical industry, we analyzed the concepts that companies are following to increase their R&D efficiencies: (A) activities to reduce portfolio and project risk, (B) activities to reduce R&D costs, and (C) activities to increase the innovation potential. While category A comprises measures such as portfolio management and licensing, measures grouped in category B include outsourcing and risk-sharing in late-stage development. Companies have taken diverse steps to increase their innovation potential, and open innovation, exemplified by open source, innovation centers, or crowdsourcing, plays a key role in doing so. In conclusion, research-based pharmaceutical companies need to be aware of the key factors that impact the rate of innovation, R&D cost and probability of success. Depending on their company strategy and their R&D set-up, they can opt for one of the following open-innovator roles: knowledge creator, knowledge integrator or knowledge leverager.
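
    As a purely illustrative aid to the efficiency definition above (output relative to monetary input), the snippet below computes the inverse ratio, R&D spend per approved medicine; the numbers are hypothetical and are not taken from the paper.

        # Illustrative arithmetic only (hypothetical numbers, not from the paper):
        # spend-per-approval is the inverse of the output/input efficiency ratio.
        rnd_spend_busd = 60.0     # cumulative R&D expenditure, billion USD
        new_approvals = 12        # new medicines launched in the same window
        cost_per_new_drug = rnd_spend_busd / new_approvals
        print(f"~USD {cost_per_new_drug:.1f} billion per approved new medicine")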

  19. Updated Model of the Solar Energetic Proton Environment in Space

    NASA Astrophysics Data System (ADS)

    Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami

    2018-05-01

    The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).

  20. Spatial taxation effects on regional coal economic activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, C.W.; Labys, W.C.

    1982-01-01

    Taxation effects on resource production, consumption and prices are seldom evaluated, especially in the field of spatial commodity modeling. The most commonly employed linear programming model has fixed-point estimated demands and capacity constraints, which makes taxation effects difficult to model. The second type of resource allocation model, the interregional input-output model, does not include a direct and explicit price mechanism and is therefore not suitable for analyzing taxation effects. The third type of spatial commodity model is econometric in nature. While such an approach has a good deal of flexibility in modeling political and non-economic variables, it treats taxation (or tariff) effects loosely using only dummy variables and, in many cases, must sacrifice the consistency criterion important for spatial commodity modeling. This leaves model builders only one legitimate choice for analyzing taxation effects: the quadratic programming model, which explicitly allows the interplay of regional demand and supply relations via a continuous spatial price mechanism. Such a model, constructed by the authors, relates the regional demand for and supply of coal from Appalachian markets.

  1. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
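
    As a concrete, if much simplified, illustration of the BC family of methods discussed above, the sketch below applies empirical quantile mapping to synthetic daily precipitation; it is not one of the specific state-of-the-art methods evaluated in the paper, and all distributions are invented.

        import numpy as np

        # Empirical quantile mapping: map each raw forecast value to the observed
        # value at the same quantile of the model's own climatology.
        rng = np.random.default_rng(1)
        obs = rng.gamma(shape=0.8, scale=6.0, size=3000)        # "observed" climatology
        mod_hist = rng.gamma(shape=0.8, scale=9.0, size=3000)   # biased model climatology
        mod_fcst = rng.gamma(shape=0.8, scale=9.0, size=90)     # raw forecast to correct

        def quantile_map(fcst, model_clim, obs_clim):
            """Replace forecast values by observed values at the same quantile."""
            ranks = np.searchsorted(np.sort(model_clim), fcst) / len(model_clim)
            ranks = np.clip(ranks, 0.0, 1.0)
            return np.quantile(obs_clim, ranks)

        corrected = quantile_map(mod_fcst, mod_hist, obs)
        print(mod_fcst.mean(), corrected.mean(), obs.mean())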

  2. Analyses of power output of piezoelectric energy-harvesting devices directly connected to a load resistor using a coupled piezoelectric-circuit finite element method.

    PubMed

    Zhu, Meiling; Worthington, Emma; Njuguna, James

    2009-07-01

    This paper presents, for the first time, a coupled piezoelectric-circuit finite element model (CPC-FEM) to analyze the power output of a vibration-based piezoelectric energy-harvesting device (EHD) when it is connected to a load resistor. Special focus is given to the effect of the load resistor value on the vibrational amplitude of the piezoelectric EHD, and thus on the current, voltage, and power generated by the device, which are normally assumed to be independent of the load resistor value to reduce the complexity of modeling and simulation. The presented CPC-FEM uses a cantilever with a sandwich structure and a seismic mass attached to the tip to study the following characteristics of the EHD as a result of changing the load resistor value: 1) the electric outputs: the current through and voltage across the load resistor; 2) the power dissipated by the load resistor; 3) the displacement amplitude of the tip of the cantilever; and 4) the shift in the resonant frequency of the device. It is found that these characteristics of the EHD have a significant dependence on the load resistor value, rather than being independent of it as is assumed in most literature. The CPC-FEM is capable of predicting the generated output power of the EHD with different load resistor values while simultaneously calculating the effect of the load resistor value on the displacement amplitude of the tip of the cantilever. This makes the CPC-FEM invaluable for validating the performance of a designed EHD before it is fabricated and tested, thereby reducing the recurring costs associated with repeat fabrication and trials. In addition, the proposed CPC-FEM can also be used for producing an optimized design for maximum power output.
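
    The load-resistor dependence highlighted above can be illustrated with a deliberately simplified, uncoupled circuit sketch (piezo element treated as an ideal current source in parallel with its clamped capacitance and the load). This ignores the electromechanical back-coupling that the CPC-FEM captures, and every parameter value below is an assumption, but it shows why the generated power depends strongly on the load resistor value.

        import numpy as np

        # Simplified uncoupled circuit: sinusoidal current source I0 shared between
        # the clamped capacitance Cp and a load resistor R.
        f = 100.0                    # assumed excitation frequency (Hz)
        w = 2 * np.pi * f
        Cp = 50e-9                   # assumed clamped capacitance (F)
        I0 = 20e-6                   # assumed current amplitude (A)

        R = np.logspace(3, 7, 200)   # load resistor sweep, 1 kOhm to 10 MOhm
        # Average power dissipated in R for this parallel R-C circuit
        P = 0.5 * I0**2 * R / (1.0 + (w * R * Cp) ** 2)

        R_opt = R[np.argmax(P)]
        print(f"optimal load ~ {R_opt:.3g} ohm (theory: 1/(w*Cp) = {1/(w*Cp):.3g} ohm)")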

  3. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization and analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
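
    The co-scheduling idea described above can be caricatured as a polling loop that watches the simulation's output directory and submits a rendering job whenever a new history file appears. In the sketch below the directory path, the sbatch submission and render_plots.sh are hypothetical stand-ins; Bellerophon's actual workflow is not reproduced.

        import subprocess
        import time
        from pathlib import Path

        # Watch for new model output and launch a co-scheduled analytics job for
        # each new file; the simulation itself keeps running unaffected.
        OUTPUT_DIR = Path("/lustre/scratch/climate_run/output")   # hypothetical path
        seen = set()

        while True:
            for f in sorted(OUTPUT_DIR.glob("*.nc")):
                if f.name not in seen:
                    seen.add(f.name)
                    # Hypothetical batch submission of a rendering script
                    subprocess.run(["sbatch", "render_plots.sh", str(f)], check=False)
            time.sleep(60)   # poll once a minute; stop manually when the run ends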

  4. Simulation modeling of high-throughput cryopreservation of aquatic germplasm: a case study of blue catfish sperm processing

    PubMed Central

    Hu, E; Liao, T. W.; Tiersch, T. R.

    2013-01-01

    Emerging commercial-level technology for aquatic sperm cryopreservation has not been modeled by computer simulation. Commercially available software (ARENA, Rockwell Automation, Inc., Milwaukee, WI) was applied to simulate high-throughput sperm cryopreservation of blue catfish (Ictalurus furcatus) based on existing processing capabilities. The goal was to develop a simulation model suitable for production planning and decision making. The objectives were to: 1) predict the maximum output for an 8-hr workday; 2) analyze the bottlenecks within the process; and 3) estimate operational costs when run for daily maximum output. High-throughput cryopreservation was divided into six major steps modeled with time, resources and logic structures. The modeled production processed 18 fish and produced 1164 ± 33 (mean ± SD) 0.5-ml straws containing one billion cryopreserved sperm. Two such production lines could support all hybrid catfish production in the US, and 15 such lines could support the entire channel catfish industry if it were to adopt artificial spawning techniques. Evaluations were made to improve efficiency, such as increasing scale, optimizing resources, and eliminating underutilized equipment. This model can serve as a template for other aquatic species and assist decision making in industrial application of aquatic germplasm in aquaculture, stock enhancement, conservation, and biomedical model fishes. PMID:25580079

  5. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  6. Passivity/Lyapunov based controller design for trajectory tracking of flexible joint manipulators

    NASA Technical Reports Server (NTRS)

    Sicard, Pierre; Wen, John T.; Lanari, Leonardo

    1992-01-01

    A passivity- and Lyapunov-based approach to control design for the trajectory tracking problem of flexible joint robots is presented. The basic structure of the proposed controller is the sum of a model-based feedforward and a model-independent feedback. Feedforward selection and solution are analyzed for a general model of flexible joints, and for more specific and practical model structures. Passivity theory is used to design a motor-state-based controller that input-output stabilizes the error system formed by the feedforward. Observability conditions for asymptotic stability are stated and verified. To accommodate modeling uncertainties and to allow for the implementation of a simplified feedforward compensation, the stability of the system is analyzed in the presence of approximations in the feedforward using a Lyapunov-based robustness analysis. It is shown that under certain conditions, e.g., when the desired trajectory varies slowly enough, stability is maintained for various approximations of a canonical feedforward.

  7. A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators

    NASA Astrophysics Data System (ADS)

    Singh, Gaurav; Krishnan, Girish

    2017-06-01

    Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus-of-variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape of FREEs with any general fiber angle orientation, and its relation to actuation pressure, material properties and applied load. First, the accuracy of the model is verified against existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirements.

  8. Endoreversible quantum heat engines in the linear response regime.

    PubMed

    Wang, Honghui; He, Jizhou; Wang, Jianhui

    2017-07-01

    We analyze general models of quantum heat engines operating a cycle of two adiabatic and two isothermal processes. We use the quantum master equation for a system to describe heat transfer current during a thermodynamic process in contact with a heat reservoir, with no use of phenomenological thermal conduction. We apply the endoreversibility description to such engine models working in the linear response regime and derive expressions of the efficiency and the power. By analyzing the entropy production rate along a single cycle, we identify the thermodynamic flux and force that a linear relation connects. From maximizing the power output, we find that such heat engines satisfy the tight-coupling condition and the efficiency at maximum power agrees with the Curzon-Ahlborn efficiency known as the upper bound in the linear response regime.
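
    For reference, the Curzon-Ahlborn bound quoted above is eta_CA = 1 - sqrt(Tc/Th); the short snippet below evaluates it for an illustrative (assumed) pair of reservoir temperatures and compares it with the Carnot limit.

        import math

        # Efficiency at maximum power (Curzon-Ahlborn) vs. the Carnot limit,
        # for hypothetical reservoir temperatures.
        Th, Tc = 600.0, 300.0                 # hot / cold reservoir temperatures (K)
        eta_carnot = 1.0 - Tc / Th
        eta_ca = 1.0 - math.sqrt(Tc / Th)
        print(f"Carnot: {eta_carnot:.3f}, Curzon-Ahlborn: {eta_ca:.3f}")  # 0.500, 0.293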

  9. Solar UV-B irradiance and total ozone in Italy: Fluctuations and trends

    NASA Astrophysics Data System (ADS)

    Casale, G. R.; Meloni, D.; Miano, S.; Palmieri, S.; Siani, A. M.; Cappellani, F.

    2000-02-01

    Solar UV irradiance spectra (290-325 nm) together with daily total ozone column observations have been collected since 1992 by means of Brewer spectrophotometers at two Italian stations (Rome and Ispra). The available Brewer irradiance data, recorded around noon and at fixed solar zenith angles, together with the output of a radiative transfer model (the STAR model), are presented and analyzed. The Brewer irradiance measurements and total ozone fluctuations and anomalies are investigated, pointing out the correlation between the high-frequency O3 components and irradiance at 305 nm. In addition, the long total ozone time series of Arosa (170 km from Ispra) and Vigna di Valle (very close to Rome) are analyzed to provide evidence of temporal variations and a possible trend.

  10. Logic-Based Models for the Analysis of Cell Signaling Networks†

    PubMed Central

    2010-01-01

    Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
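
    To make the logic-based formalism concrete, the toy model below updates a three-node ON/OFF cascade with simple Boolean rules; the network and its rules are illustrative inventions, not a model from the reviewed case studies.

        # Toy Boolean (logic-based) signaling model: each node is ON/OFF and its
        # update rule is a logic function of its inputs.
        rules = {
            "receptor": lambda s: s["ligand"],                       # input layer
            "kinase":   lambda s: s["receptor"] and not s["phosphatase"],
            "output":   lambda s: s["kinase"],                       # phenotypic readout
        }

        def simulate(ligand, phosphatase, steps=5):
            state = {"ligand": ligand, "phosphatase": phosphatase,
                     "receptor": False, "kinase": False, "output": False}
            for _ in range(steps):
                # Synchronous update: all rules evaluated on the previous state
                state.update({n: f(state) for n, f in rules.items()})
            return state["output"]

        print(simulate(ligand=True, phosphatase=False))   # True
        print(simulate(ligand=True, phosphatase=True))    # False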

  11. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE PAGES

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...

    2016-01-01

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
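
    The statistical scores mentioned above are typically simple aggregate measures of forecast-minus-observation differences. A minimal sketch with synthetic numbers (not the study's data) is shown below.

        import numpy as np

        # Common verification scores for a forecast vs. station observations.
        obs = np.array([1.2, -0.5, 3.4, 2.1, 0.0])           # observed 2 m T (deg C)
        fcst = np.array([2.0,  0.3, 4.1, 2.9, 0.8])           # forecast 2 m T (deg C)

        bias = np.mean(fcst - obs)                             # mean error
        rmse = np.sqrt(np.mean((fcst - obs) ** 2))             # root-mean-square error
        mae = np.mean(np.abs(fcst - obs))                      # mean absolute error
        print(f"bias={bias:.2f}  rmse={rmse:.2f}  mae={mae:.2f}")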

  12. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.

  13. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

    This paper describes three different computer codes that have been written to model village power applications. The development of these codes has been driven by several factors: limited field data exist; diverse applications can be modeled; models allow cost and performance comparisons; and simulations generate insights into cost structures. The models discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  14. Design of a miniature wind turbine for powering wireless sensors

    NASA Astrophysics Data System (ADS)

    Xu, F. J.; Yuan, F. G.; Hu, J. Z.; Qiu, Y. P.

    2010-04-01

    In this paper, a miniature wind turbine (MWT) system composed of commercially available off-the-shelf components was designed and tested for harvesting energy from ambient airflow to power wireless sensors. To allow the MWT to operate at very low air flow rates, a 7.6 cm Thorgren plastic propeller was adopted as the wind turbine blade. A sub-watt brushless DC motor was used as the generator. To predict the performance of the MWT, an equivalent circuit model was employed to analyze the output power and the net efficiency of the MWT system. In theory, a maximum net efficiency of 14.8% was predicted for the MWT system. Experimental output power of the MWT versus resistive loads ranging from 5 ohms to 500 ohms under wind speeds from 3 m/s to 4.5 m/s correlates well with the model predictions, which means that the equivalent circuit model provides a guideline for optimizing the performance of the MWT and can be used to meet design requirements by selecting specific components for powering wireless sensors.
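
    The equivalent-circuit reasoning above can be sketched, in highly simplified form, as an EMF source with an internal winding resistance feeding the resistive load; the values of the EMF, internal resistance and captured wind power below are assumptions, not the authors' measurements.

        import numpy as np

        # Simplified generator/load stage: EMF source E with internal resistance
        # R_int driving a resistive load R_load (all values assumed).
        E = 1.5          # open-circuit EMF at a given wind speed (V)
        R_int = 30.0     # internal resistance of the DC generator (ohm)
        P_wind = 0.15    # aerodynamic power captured by the rotor (W)

        R_load = np.linspace(5.0, 500.0, 100)              # load sweep, as tested
        I = E / (R_int + R_load)
        P_load = I**2 * R_load                             # power delivered to the load
        eta = P_load / P_wind                              # crude "net" efficiency

        print(f"max load power {P_load.max()*1e3:.1f} mW "
              f"at R = {R_load[np.argmax(P_load)]:.0f} ohm, eta = {eta.max():.1%}")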

  15. High temperature composite analyzer (HITCAN) user's manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Lackney, J. J.; Singhal, S. N.; Murthy, P. L. N.; Gotsis, P.

    1993-01-01

    This manual describes how to use the computer code HITCAN (HIgh Temperature Composite ANalyzer). HITCAN is a general-purpose computer program for predicting the nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. This code combines composite mechanics and laminate theory with an internal data base for material properties of the constituents (matrix, fiber and interphase). The thermo-mechanical properties of the constituents are considered to be nonlinearly dependent on several parameters, including temperature, stress and stress rate. The computation procedure for the analysis of the composite structures uses the finite element method. HITCAN is written in the FORTRAN 77 computer language and at present has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. This manual describes HITCAN's capabilities and limitations, followed by input/execution/output descriptions and example problems. The input is described in detail, including (1) geometry modeling, (2) types of finite elements, (3) types of analysis, (4) material data, (5) types of loading, (6) boundary conditions, (7) output control, (8) program options, and (9) data bank.

  16. Visualization techniques for computer network defense

    NASA Astrophysics Data System (ADS)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  17. Robot trajectory tracking with self-tuning predicted control

    NASA Technical Reports Server (NTRS)

    Cui, Xianzhong; Shin, Kang G.

    1988-01-01

    A controller that combines self-tuning prediction and control is proposed for robot trajectory tracking. The controller has two feedback loops: one is used to minimize the prediction error, and the other is designed to make the system output track the set point input. Because the velocity and position along the desired trajectory are given and the future output of the system is predictable, a feedforward loop can be designed for robot trajectory tracking with self-tuning predicted control (STPC). Parameters are estimated online to account for the model uncertainty and the time-varying property of the system. The authors describe the principle of STPC, analyze the system performance, and discuss the simplification of the robot dynamic equations. To demonstrate its utility and power, the controller is simulated for a Stanford arm.

  18. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the kinematic pairs of a planar 2-DOF five-bar linkage, the joint clearance of each kinematic pair is modeled as an equivalent virtual link. A structural error model for revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, which provides a new way to analyze the error of planar parallel mechanisms caused by joint clearance.

  19. Measured radiofrequency exposure during various mobile-phone use scenarios.

    PubMed

    Kelsh, Michael A; Shum, Mona; Sheppard, Asher R; McNeely, Mark; Kuster, Niels; Lau, Edmund; Weidling, Ryan; Fordyce, Tiffani; Kühn, Sven; Sulser, Christof

    2011-01-01

    Epidemiologic studies of mobile phone users have relied on self reporting or billing records to assess exposure. Herein, we report quantitative measurements of mobile-phone power output as a function of phone technology, environmental terrain, and handset design. Radiofrequency (RF) output data were collected using software-modified phones that recorded power control settings, coupled with a mobile system that recorded and analyzed RF fields measured in a phantom head placed in a vehicle. Data collected from three distinct routes (urban, suburban, and rural) were summarized as averages of peak levels and overall averages of RF power output, and were analyzed using analysis of variance methods. Technology was the strongest predictor of RF power output. The older analog technology produced the highest RF levels, whereas CDMA had the lowest, with GSM and TDMA showing similar intermediate levels. We observed generally higher RF power output in rural areas. There was good correlation between average power control settings in the software-modified phones and power measurements in the phantoms. Our findings suggest that phone technology, and to a lesser extent, degree of urbanization, are the two stronger influences on RF power output. Software-modified phones should be useful for improving epidemiologic exposure assessment.

  20. Sensitivity analysis of Repast computational ecology models with R/Repast.

    PubMed

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms that generate observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be incorporated into every work based on an in silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
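
    As a bare-bones stand-in for the global sensitivity analyses that R/Repast automates, the sketch below perturbs each input of a toy model one at a time and reports normalized sensitivities; the model function and its baseline parameters are invented for illustration.

        import numpy as np

        # One-at-a-time (OAT) sensitivity sketch on a toy stand-in "model",
        # not a Repast Simphony simulation.
        def model(growth_rate, mortality, carrying_capacity):
            # Toy equilibrium-population output of a hypothetical IBM
            return carrying_capacity * max(0.0, 1.0 - mortality / growth_rate)

        baseline = {"growth_rate": 0.5, "mortality": 0.1, "carrying_capacity": 1000.0}
        y0 = model(**baseline)

        for name, value in baseline.items():
            perturbed = dict(baseline)
            perturbed[name] = value * 1.10                  # +10% perturbation
            dy = model(**perturbed) - y0
            # Normalized sensitivity: relative output change per relative input change
            print(f"{name:18s} S = {(dy / y0) / 0.10:+.2f}")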

  1. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information for the input data; that is, the variation of the uncertain parameters will decrease and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
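
    A minimal sketch of the Bayesian estimation step might use a random-walk Metropolis sampler against a cheap surrogate model, since a full Delft3D run is far too expensive to call inside a naive MCMC loop; the surrogate, noise level and flat prior below are all assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def forward(theta, x):
            # Cheap quadratic surrogate standing in for the expensive model
            return theta[0] * x + theta[1] * x**2

        x = np.linspace(0.0, 1.0, 20)
        theta_true = np.array([1.5, -0.7])
        obs = forward(theta_true, x) + 0.05 * rng.standard_normal(x.size)
        sigma = 0.05                                        # assumed observation error

        def log_post(theta):
            resid = obs - forward(theta, x)
            return -0.5 * np.sum((resid / sigma) ** 2)      # Gaussian likelihood, flat prior

        theta = np.zeros(2)
        samples = []
        for _ in range(20000):
            prop = theta + 0.05 * rng.standard_normal(2)    # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                                # Metropolis accept
            samples.append(theta.copy())

        print("posterior mean:", np.mean(samples[5000:], axis=0))  # ~ theta_true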

  2. Integrated Flood Forecast and Virtual Dam Operation System for Water Resources and Flood Risk Management

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Ikoma, Eiji; Lawford, Peter; Oyanagi, Misa; Kanauchi, Shizu; Koudelova, Petra; Kitsuregawa, Masaru; Koike, Toshio

    2014-05-01

    While the availability of hydrological and hydrometeorological data continues to grow and advanced modeling techniques are emerging, such newly available data and advanced models may not always be applied in the field of decision making. In this study we present an integrated system of ensemble streamflow forecast (ESP) and a virtual dam simulator, which is designed to support river and dam managers' decision making. The system consists of three main functions: a real-time hydrological model, an ESP model, and a dam simulator model. In the real-time model, the system simulates current conditions of river basins, such as soil moisture and river discharges, using an LSM-coupled distributed hydrological model. The ESP model takes its initial conditions from the real-time model's output and generates an ESP based on numerical weather prediction. The dam simulator model provides virtual dam operation, so users can examine the impact of dam control on remaining reservoir volume and downstream flooding under the anticipated flood forecast. Thus river and dam managers are able to evaluate the benefit of prior dam release and flood risk reduction at the same time, on a real-time basis. Furthermore, the system has been developed under the concept of data and model integration, and it is coupled with the Data Integration and Analysis System (DIAS), a Japanese national project for integrating and analyzing massive amounts of observational and model data. It therefore has the advantage of direct use of miscellaneous data, from point/radar-derived observations and numerical weather prediction output to satellite imagery stored in the data archive. Output of the system is accessible over a web interface, making information available with relative ease, e.g., from ordinary PCs to mobile devices. We have been applying the system to the Upper Tone region, located northwest of the Tokyo metropolitan area, and we show application examples of the system in recent flood events caused by typhoons.

  3. Timeline Analysis Program (TLA-1)

    NASA Technical Reports Server (NTRS)

    Miller, K. H.

    1976-01-01

    The Timeline Analysis Program (TLA-1), a crew workload analysis computer program, is described. The program was developed and expanded from previous workload analysis programs and is designed to be used on the NASA terminal controlled vehicle program. The derivation of the input data, the processing of the data, and the form of the output data are described. Eight scenarios that were created, programmed, and analyzed as verification of this model are also described.

  4. Modeling of four-terminal solar photovoltaic systems for field application

    NASA Astrophysics Data System (ADS)

    Vahanka, Harikrushna; Purohit, Zeel; Tripathi, Brijesh

    2018-05-01

    In this article, a theoretical framework for a mechanically stacked four-terminal solar photovoltaic (FTSPV) system is proposed. In the mechanical stack arrangement, a semitransparent CdTe panel is used as the top sub-module, whereas a μc-Si solar panel is used as the bottom sub-module. Theoretical modeling has been done to analyze the physical processes in the system and to obtain reliable predictions of its performance. To incorporate the effect of the materials, the band gap and absorption coefficient data for the CdTe and μc-Si panels have been considered. The electrical performance of the top and bottom panels operated in a mechanical stack has been obtained experimentally for inter-panel separations in the range of 0-3 cm. Maximum output power density was obtained for a separation of 0.75 cm. The mean output power density from the CdTe top panel was calculated as 32.3 W m-2, and the mean output power density from the μc-Si bottom panel of the four-terminal photovoltaic system was calculated as ~3.5 W m-2. Results reported in this study reveal the potential of the mechanically stacked four-terminal tandem solar photovoltaic system as an energy-efficient configuration.

  5. BMDExpress Data Viewer: A Visualization Tool to Analyze ...

    EPA Pesticide Factsheets

    Regulatory agencies increasingly apply benchmark dose (BMD) modeling to determine points of departure in human risk assessments. BMDExpress applies BMD modeling to transcriptomics datasets and groups genes to biological processes and pathways for rapid assessment of doses at which biological perturbations occur. However, graphing and analytical capabilities within BMDExpress are limited, and the analysis of output files is challenging. We developed a web-based application, BMDExpress Data Viewer, for visualization and graphical analyses of BMDExpress output files. The software application consists of two main components: ‘Summary Visualization Tools’ and ‘Dataset Exploratory Tools’. We demonstrate through two case studies that the ‘Summary Visualization Tools’ can be used to examine and assess the distributions of probe and pathway BMD outputs, as well as derive a potential regulatory BMD through the modes or means of the distributions. The ‘Functional Enrichment Analysis’ tool presents biological processes in a two-dimensional bubble chart view. By applying filters of pathway enrichment p-value and minimum number of significant genes, we showed that the Functional Enrichment Analysis tool can be applied to select pathways that are potentially sensitive to chemical perturbations. The ‘Multiple Dataset Comparison’ tool enables comparison of BMDs across multiple experiments (e.g., across time points, tissues, or organisms, etc.). The ‘BMDL-BM

  6. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
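
    The detection step described above, comparing measured outputs with the outputs expected from a healthy-engine model and thresholding the residuals, can be sketched as follows; the sensor names, values and thresholds are synthetic placeholders, not T700 data.

        import numpy as np

        # Residual-threshold fault detection on hypothetical engine sensors.
        names = ["T4.5", "P3", "Q_shaft"]
        expected = np.array([850.0, 101.3, 44.0])     # healthy-model outputs (assumed)
        threshold = np.array([15.0, 2.0, 1.5])        # per-sensor thresholds (assumed)

        measured = np.array([851.2, 101.0, 49.7])     # torque sensor reading drifted high

        residual = np.abs(measured - expected)
        for name, r, th in zip(names, residual, threshold):
            status = "FAULT" if r > th else "ok"
            print(f"{name:8s} residual={r:5.2f}  threshold={th:4.1f}  -> {status}")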

  7. Multivariable control of vapor compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X.D.; Liu, S.; Asada, H.H.

    1999-07-01

    This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low-order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low-order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.

  8. MULTICHANNEL ANALYZER

    DOEpatents

    Kelley, G.G.

    1959-11-10

    A multichannel pulse analyzer having several window amplifiers, each amplifier serving one group of channels, with a single fast pulse-lengthener and a single novel interrogation circuit serving all channels is described. A pulse followed too closely timewise by another pulse is disregarded by the interrogation circuit to prevent errors due to pulse pileup. The window amplifiers are connected to the pulse lengthener output, rather than the linear amplifier output, so need not have the fast response characteristic formerly required.

  9. Solid wastes integrated management in Rio de Janeiro: input-output analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pimenteira, C.A.P.; Carpio, L.G.T.; Rosa, L.P.

    2005-07-01

    This paper analyzes the socioeconomic aspects of solid waste management in Rio de Janeiro. An input-output methodology was used to examine how the secondary products resulting from recycling are re-introduced into the productive process. A comparative profile was developed of the state of recycling and the various other aspects of solid waste management, both from the perspective of its economic feasibility and from the social aspects involved. This was done by analyzing the greenhouse gas emissions and the decreased energy consumption. The effects of re-introducing recycled raw materials into the matrix and the ensuing reduction of the demand for virgin raw materials were assessed based on the input-output matrix for the State of Rio de Janeiro. This paper also analyzes the energy savings obtained from recycling and measures the avoided emissions of greenhouse gases.
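
    The input-output reasoning above rests on the Leontief relation x = (I - A)^-1 d, where A holds the technical coefficients and d the final demand. The sketch below illustrates it with a small invented three-sector matrix rather than the Rio de Janeiro matrix used in the paper.

        import numpy as np

        # Leontief input-output sketch: total output x required to meet final demand d.
        A = np.array([[0.10, 0.05, 0.02],      # technical coefficients (illustrative)
                      [0.20, 0.15, 0.10],
                      [0.05, 0.10, 0.08]])
        d = np.array([100.0, 50.0, 30.0])      # final demand by sector

        x = np.linalg.solve(np.eye(3) - A, d)  # total sectoral output
        print("required output per sector:", np.round(x, 1))

        # Re-introducing recycled inputs can be explored by lowering the coefficients
        # that draw on virgin raw materials and re-solving for x.
        A_recycled = A.copy()
        A_recycled[2, :] *= 0.8                # assume 20% less virgin-material input
        x_recycled = np.linalg.solve(np.eye(3) - A_recycled, d)
        print("with recycling:", np.round(x_recycled, 1))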

  10. Climate Science's Globally Distributed Infrastructure

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of the integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF include not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  11. Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns.

    PubMed

    Baldini, Gianmarco; Dimc, Franc; Kamnik, Roman; Steri, Gary; Giuliani, Raimondo; Gentile, Claudio

    2017-04-06

    We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass-market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or their RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models or of the same model were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification across different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones of the same model with different serial numbers) is more challenging, the resulting accuracy being just slightly above random choice.
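
    The classification stage described above (statistical features of the magnetometer output fed to an SVM) can be sketched with scikit-learn as follows; the synthetic traces, feature set and SVM settings are assumptions standing in for the study's recorded data.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)

        def features(trace):
            # Simple statistical fingerprint features of one recorded pass
            return [trace.mean(), trace.std(), trace.min(), trace.max(),
                    np.percentile(trace, 25), np.percentile(trace, 75)]

        X, y = [], []
        for phone_id in range(10):                      # 10 phones, as in the study
            offset, gain = rng.normal(0, 2), rng.normal(1, 0.05)   # device-specific quirks
            for _ in range(30):                         # repeated passes per phone
                trace = offset + gain * np.sin(np.linspace(0, 6 * np.pi, 500))
                trace += 0.05 * rng.standard_normal(500)
                X.append(features(trace))
                y.append(phone_id)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
        print("cross-validated accuracy:", scores.mean().round(3))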

  12. Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns

    PubMed Central

    Baldini, Gianmarco; Dimc, Franc; Kamnik, Roman; Steri, Gary; Giuliani, Raimondo; Gentile, Claudio

    2017-01-01

    We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass-market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or their RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models or of the same model were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification across different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones of the same model with different serial numbers) is more challenging, the resulting accuracy being just slightly above random choice. PMID:28383482

  13. Importance of biometrics to addressing vulnerabilities of the U.S. infrastructure

    NASA Astrophysics Data System (ADS)

    Arndt, Craig M.; Hall, Nathaniel A.

    2004-08-01

    Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed, and modeled. Prior to the design and deployment of an effective countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometric systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise both from the newly introduced system complexities and from an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities and those introduced by its integration with the existing system are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. Infrastructure assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit analysis is then incorporated in the security technology deployment decision process. Overall security risk, based on system vulnerabilities and threat intelligence, determines areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.

  14. Parametric Optimization of Thermoelectric Generators for Waste Heat Recovery

    NASA Astrophysics Data System (ADS)

    Huang, Shouyuan; Xu, Xianfan

    2016-10-01

    This paper presents a methodology for design optimization of thermoelectric-based waste heat recovery systems called thermoelectric generators (TEGs). The aim is to maximize the power output from thermoelectrics which are used as add-on modules to an existing gas-phase heat exchanger, without negative impacts, e.g., maintaining a minimum heat dissipation rate from the hot side. A numerical model is proposed for TEG coupled heat transfer and electrical power output. This finite-volume-based model simulates different types of heat exchangers, i.e., counter-flow and cross-flow, for TEGs. Multiple-filled skutterudites and bismuth-telluride-based thermoelectric modules (TEMs) are applied, respectively, in higher and lower temperature regions. The response surface methodology is implemented to determine the optimized TEG size along and across the flow direction and the height of thermoelectric couple legs, and to analyze their covariance and relative sensitivity. A genetic algorithm is employed to verify the globality of the optimum. The presented method will be generally useful for optimizing heat-exchanger-based TEG performance.
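
    A minimal Python sketch of the optimization workflow described above: a quadratic response surface is fitted to sampled designs and then searched with a stochastic global optimizer (scipy's differential evolution standing in for the paper's genetic algorithm). The three design variables, their bounds, and the placeholder power function are assumptions, not the paper's finite-volume TEG model.

      # Hypothetical sketch: quadratic response surface over three TEG design variables,
      # then a global search for the power-maximizing design. The "true" objective below is a
      # placeholder for the coupled heat-transfer/electrical TEG simulation in the abstract.
      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from scipy.optimize import differential_evolution

      def teg_power(x):
          """Placeholder power output vs. (length along flow, length across flow, leg height)."""
          Lx, Ly, h = x
          return -(Lx - 0.4) ** 2 - (Ly - 0.6) ** 2 - 5.0 * (h - 0.003) ** 2 + 1.0

      bounds = [(0.1, 1.0), (0.1, 1.0), (0.001, 0.01)]
      rng = np.random.default_rng(1)
      samples = np.array([[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(60)])
      power = np.array([teg_power(s) for s in samples])

      surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surface.fit(samples, power)

      # Maximize the fitted surface (minimize its negative) with a stochastic global optimizer,
      # standing in for the genetic algorithm used to verify the globality of the optimum.
      result = differential_evolution(lambda x: -surface.predict([x])[0], bounds, seed=1)
      print("optimized design:", result.x, "predicted power:", -result.fun)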

  15. Photovoltaic cells for laser power beaming

    NASA Technical Reports Server (NTRS)

    Landis, Geoffrey A.; Jain, Raj K.

    1992-01-01

    To better understand cell response to pulsed illumination at high intensity, the PC-1DC finite-element computer model was used to analyze the response of solar cells to pulsed laser illumination. Over 50% efficiency was calculated for both InP and GaAs cells under steady-state illumination near the optimum wavelength. The time-dependent response of a high-efficiency GaAs concentrator cell to a laser pulse was modelled, and the effect of laser intensity, wavelength, and bias point was studied. Designing a cell to accommodate pulsed input can be done either by accepting the pulsed output and designing the cell to minimize adverse effects due to series resistance and inductance, or by designing a cell with a long enough minority carrier lifetime so that the output of the cell will not follow the pulse shape. Two such design possibilities are a monolithic, low-inductance voltage-adding GaAs cell, or a high-efficiency, light-trapping silicon cell. The advantages of each design will be discussed.

  16. Efficient EM Simulation of GCPW Structures Applied to a 200-GHz mHEMT Power Amplifier MMIC

    NASA Astrophysics Data System (ADS)

    Campos-Roca, Yolanda; Amado-Rey, Belén; Wagner, Sandrine; Leuther, Arnulf; Bangert, Axel; Gómez-Alcalá, Rafael; Tessmann, Axel

    2017-05-01

    The behaviour of grounded coplanar waveguide (GCPW) structures in the upper millimeter-wave range is analyzed by using full-wave electromagnetic (EM) simulations. A methodological approach to develop reliable and time-efficient simulations is proposed by investigating the impact of different simplifications in the EM modelling and simulation conditions. After experimental validation with measurements on test structures, this approach has been used to model the most critical passive structures involved in the layout of a state-of-the-art 200-GHz power amplifier based on metamorphic high electron mobility transistors (mHEMTs). This millimeter-wave monolithic integrated circuit (MMIC) has demonstrated a measured output power of 8.7 dBm for an input power of 0 dBm at 200 GHz. The measured output power density and power-added efficiency (PAE) are 46.3 mW/mm and 4.5 %, respectively. The peak measured small-signal gain is 12.7 dB (obtained at 196 GHz). A good agreement has been obtained between measurements and simulation results.

  17. TLIFE: a Program for Spur, Helical and Spiral Bevel Transmission Life and Reliability Modeling

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Rubadeux, K. L.

    1994-01-01

    This report describes a computer program, 'TLIFE', which models the service life of a transmission. The program is written in ANSI standard Fortran 77 and has an executable size of about 157 K bytes for use on a personal computer running DOS. It can also be compiled and executed in UNIX. The computer program can analyze any one of eleven unit transmissions either singly or in a series combination of up to twenty-five unit transmissions. Metric or English unit calculations are performed with the same routines using consistent input data and a units flag. Primary outputs are the dynamic capacity of the transmission and the mean lives of the transmission and of the sum of its components. The program uses a modular approach to separate the load analyses from the system life calculations. The program and its input and output data files are described herein. Three examples illustrate its use. A development of the theory behind the analysis in the program is included after the examples.

  18. Efficiency measures and output specification : the case of European railways

    DOT National Transportation Integrated Search

    2000-12-01

    This study analyzes the sensitivity of the efficiency indicators of a sample of European railway companies to different alternatives in output specification. The results vary according to the specification selected. Investigating the causes of these ...

  19. The Lightning Nitrogen Oxides Model (LNOM): Status and Recent Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Khan, Maudood; Peterson, Harold

    2011-01-01

    Improvements to the NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) are discussed. Recent results from an August 2006 run of the Community Multiscale Air Quality (CMAQ) modeling system that employs LNOM lightning NOx (= NO + NO2) estimates are provided. The LNOM analyzes Lightning Mapping Array (LMA) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning NOx. The latest LNOM estimates of (a) lightning channel length distributions, (b) lightning 1-m segment altitude distributions, and (c) the vertical profile of NOx are presented. The impact of including LNOM-estimates of lightning NOx on CMAQ output is discussed.

  20. Parametric Analysis of Airland Combat Model in High Resolution

    DTIC Science & Technology

    1988-09-01

    [Figure 10: Flow chart of the advanced model.] WAVE2 is a numeric value (1, 2, or 12) supplied by the model user: if WAVE2 = 1, the run is a BATTLE1 case and all Red forces on Avenue-2 attack node-28; if WAVE2 = 2, the run is also a BATTLE1 case, but all Red forces on Avenue-2 attack node-27; if WAVE2 = 12, the run is a BATTLE2 case. These outputs will be analyzed in more detail in the next section.

  1. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, Changquan

    2018-03-01

    Neural network models were constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using a uniform design. Discharge voltage, discharge gas gap, and treatment time were used as the neural network input-layer parameters. The measured values of contact angle were used as the output-layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based on the neural networks. The optimum model parameters were obtained through simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
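
    The sketch below illustrates the described architecture, a three-input (voltage, gap, time), single-output (contact angle) feed-forward network, using scikit-learn's MLPRegressor. The uniform-design table values, network size, and training settings are invented placeholders rather than the experimental data of the paper.

      # Hypothetical sketch: 3-input, 1-output neural network for contact angle prediction.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Placeholder uniform-design table: (discharge voltage [kV], gas gap [mm], treatment time [s]).
      X = np.array([[8, 2, 30], [10, 2, 60], [12, 3, 90], [8, 3, 120],
                    [10, 4, 30], [12, 4, 60], [8, 4, 90], [12, 2, 120]], dtype=float)
      contact_angle = np.array([72.0, 65.0, 58.0, 60.0, 68.0, 55.0, 63.0, 52.0])  # measured outputs

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0))
      model.fit(X, contact_angle)
      print("predicted contact angle:", model.predict([[10.0, 3.0, 75.0]])[0])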

  2. PARALYZER FOR PULSE HEIGHT DISTRIBUTION ANALYZER

    DOEpatents

    Fairstein, E.

    1960-01-19

    A paralyzer circuit is described for use with a pulse-height distribution analyzer to prevent the analyzer from counting overlapping pulses where they would provide a false indication. The paralyzer circuit comprises a pair of cathode-coupled amplifiers for amplifying pulses of opposite polarity. Diodes are provided with their anodes coupled to the separate outputs of the amplifiers to produce only positive signals, and a trigger circuit is coupled to the diodes for operation by input pulses of either polarity from the amplifiers. A delay network couples the output of the trigger circuit for delaying the pulses.

  3. Mathematical modeling and characteristic analysis for over-under turbine based combined cycle engine

    NASA Astrophysics Data System (ADS)

    Ma, Jingxue; Chang, Juntao; Ma, Jicheng; Bao, Wen; Yu, Daren

    2018-07-01

    The turbine based combined cycle engine has become the most promising hypersonic airbreathing propulsion system owing to its advantages of ground self-starting, a wide flight envelope, and reusability. The simulation model of the turbine based combined cycle engine plays an important role in performance analysis and control system design. In this paper, a turbine based combined cycle engine mathematical model is built on the Simulink platform, including a dual-channel air intake system, a turbojet engine, and a ramjet. It should be noted that the model of the air intake system is built from computational fluid dynamics calculations, which provide valuable raw data for modeling the turbine based combined cycle engine. The aerodynamic characteristics of the turbine based combined cycle engine in turbojet mode, ramjet mode, and the mode transition process are studied with the mathematical model, and the influence of the dominant variables on the performance and safety of the engine is analyzed. According to the stability requirement on thrust output and the safety requirements of the working process, a control law is proposed that guarantees steady thrust output by adjusting the control variables of the turbine based combined cycle engine over the whole working process.

  4. The role of blood flow distribution in the regulation of cerebral oxygen availability in fetal growth restriction.

    PubMed

    Luria, Oded; Bar, Jacob; Kovo, Michal; Malinger, Gustavo; Golan, Abraham; Barnea, Ofer

    2012-04-01

    Fetal growth restriction (FGR) elicits hemodynamic compensatory mechanisms in the fetal circulation. These mechanisms are complex and their effect on the cerebral oxygen availability is not fully understood. To quantify the contribution of each compensatory mechanism to the fetal cerebral oxygen availability, a mathematical model of the fetal circulation was developed. The model was based on cardiac-output distribution in the fetal circulation. The compensatory mechanisms of FGR were simulated and their effects on cerebral oxygen availability were analyzed. The mathematical analysis included the effects of cerebral vasodilation, placental resistance to blood flow, degree of blood shunting by the ductus venosus and the effect of maternal-originated placental insufficiency. The model indicated a unimodal dependency between placental blood flow and cerebral oxygen availability. Optimal cerebral oxygen availability was achieved when the placental blood flow was mildly reduced compared to the normal flow. This optimal ratio was found to increase as the hypoxic state of FGR worsens. The model indicated that cerebral oxygen availability is increasingly dependent on the cardiac output distribution as the fetus gains weight. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Sensor trustworthiness in uncertain time varying stochastic environments

    NASA Astrophysics Data System (ADS)

    Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan

    2011-06-01

    Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulus of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may be disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theory-based metric to determine sensor trustworthiness from the sensor data in an uncertain, time-varying stochastic environment. We show an information-theory-based determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks sensor performance for the time-varying physical feature and provides a baseline model used to compare and analyze the observed sensor output. We present an approach in which relative entropy is used for reference-model adaptation and for determining the divergence of the sensor signal from the estimated reference baseline. We show that the KL divergence is a useful metric that can be successfully applied to the detection of sensor failures or sensor malice of various types.
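
    A minimal sketch of the relative-entropy check described above: the KL divergence between the empirical distribution of recent sensor readings and a reference distribution is computed with scipy and compared against a threshold. The reference data, bin edges, and threshold are assumptions for illustration, not the paper's adaptive reference model.

      # Hypothetical sketch: flag a sensor as suspect when the KL divergence between the
      # empirical distribution of its recent readings and a reference distribution exceeds
      # a chosen threshold. Data, bins, and the threshold are placeholders.
      import numpy as np
      from scipy.stats import entropy

      def kl_divergence(observed, reference, bins):
          p, _ = np.histogram(observed, bins=bins, density=True)
          q, _ = np.histogram(reference, bins=bins, density=True)
          eps = 1e-12                      # avoid log(0) for empty bins
          p = p + eps
          q = q + eps
          p /= p.sum()
          q /= q.sum()
          return entropy(p, q)             # D_KL(p || q)

      rng = np.random.default_rng(2)
      reference = rng.normal(0.0, 1.0, 5000)           # baseline tracked by the reference model
      healthy = rng.normal(0.0, 1.0, 500)              # sensor behaving as expected
      drifted = rng.normal(1.5, 1.0, 500)              # biased or compromised sensor
      bins = np.linspace(-5, 5, 41)

      for name, data in [("healthy", healthy), ("drifted", drifted)]:
          d = kl_divergence(data, reference, bins)
          print(name, "KL =", round(d, 3), "->", "suspect" if d > 0.2 else "trusted")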

  6. Cycle Analysis of a New Air Engine Design

    NASA Astrophysics Data System (ADS)

    Attar, Wiam Fadi

    This thesis investigates a new externally heated engine design being developed by Soony Systems Inc. to serve as the prime mover in a residential-scale combined heat and power system. This is accomplished by developing a thermodynamic model for the engine and sweeping through the design parameter space in order to identify designs that maximize power output, efficiency, and brake mean effective pressure (BMEP). It was discovered that the original engine design was flawed so a new design was proposed and analyzed. The thermodynamic model was developed in four stages. The first model was quasi-static while the other three were time-dependent and used increasingly realistic models of the heat exchangers. For the range of design parameters investigated here, the peak power output is 6.8 kW, the peak efficiency is approximately 60%, and the peak BMEP is 389 kPa. These performance levels are compared to those of other closed-cycle engines. The results suggest that the Soony engine has the potential to be more efficient than Stirlings because it more closely approximates the Carnot cycle, but this comes at the cost of significantly lower BMEP (389 kPa vs. 2,760 kPa for the SOLO Stirling engine).

  7. Characteristics of Tropical Cyclones in High-Resolution Models of the Present Climate

    NASA Technical Reports Server (NTRS)

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; Jonas, Jeffery A.; Kim, Daeyhun; Kumar, Arun; LaRow, Timothy E.; Lim, Young-Kwon; Murakami, Hiroyuki; Roberts, Malcolm J.; hide

    2014-01-01

    The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group that ran that model, using its own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, the intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  8. Characteristics of Tropical Cyclones in High-resolution Models in the Present Climate

    NASA Technical Reports Server (NTRS)

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; Jonas, Jeffrey A.; Kim, Daehyun; Kumar, Arun; LaRow, Timothy E.; Lim, Young-Kwon; Murakami, Hiroyuki; Reed, Kevin; hide

    2014-01-01

    The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group that ran that model, using its own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, the intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  9. Modelling Market Dynamics with a "Market Game"

    NASA Astrophysics Data System (ADS)

    Katahira, Kei; Chen, Yu

    In the financial market, traders, and especially speculators, typically behave so as to realize capital gains from the difference between selling and buying prices. Making use of the structure of the Minority Game, we build a novel toy model of the market that takes account of this speculative mindset by incorporating round-trip trades, in order to analyze market dynamics as a system. Even though the micro-level behavioral rules of the players in this new model are quite simple, its macroscopic aggregate output reproduces well-known stylized facts such as volatility clustering and heavy tails. The proposed model may become a new alternative bottom-up approach for studying the mechanisms from which those stylized qualitative properties of asset returns emerge.
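
    For context, a minimal Minority Game is sketched below in Python: agents hold fixed lookup-table strategies over the recent outcome history, play their currently best-scoring strategy, and the minority side is rewarded. This is only the underlying game structure, not the paper's round-trip-trade extension; all parameters and the return proxy are illustrative.

      # Hypothetical minimal Minority Game. The aggregate action A(t) plays the role of
      # excess demand; parameters and the return proxy are illustrative only.
      import numpy as np

      N, S, m, T = 101, 2, 3, 2000                             # agents, strategies, memory, steps
      rng = np.random.default_rng(6)
      strategies = rng.choice([-1, 1], size=(N, S, 2 ** m))    # action for each possible history
      scores = np.zeros((N, S))
      history = rng.integers(0, 2 ** m)                        # encoded last m minority outcomes
      A = np.zeros(T)

      for t in range(T):
          best = scores.argmax(axis=1)                         # each agent's current best strategy
          actions = strategies[np.arange(N), best, history]
          A[t] = actions.sum()
          minority = -np.sign(A[t]) if A[t] != 0 else rng.choice([-1, 1])
          scores += np.where(strategies[:, :, history] == minority, 1, -1)   # reward minority picks
          history = ((history << 1) | (minority == 1)) % (2 ** m)            # append newest outcome

      returns = A / N                                          # crude proxy for asset returns
      print("std of returns:", round(float(returns.std()), 4),
            "| moves beyond 3 std:", int((np.abs(returns) > 3 * returns.std()).sum()))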

  10. Analyzing wildfire exposure on Sardinia, Italy

    NASA Astrophysics Data System (ADS)

    Salis, Michele; Ager, Alan A.; Arca, Bachisio; Finney, Mark A.; Alcasena, Fermin; Bacciu, Valentina; Duce, Pierpaolo; Munoz Lozano, Olga; Spano, Donatella

    2014-05-01

    We used simulation modeling based on the minimum travel time (MTT) algorithm to analyze the wildfire exposure of key ecological, social, and economic features on Sardinia, Italy. Sardinia is the second largest island of the Mediterranean Basin and in the last fifty years has experienced large and dramatic wildfires, which caused losses and threatened urban interfaces, forests, natural areas, and agricultural production. Historical fire and environmental data for the period 1995-2009 were used as input to estimate fine-scale burn probability, conditional flame length, and potential fire size in the study area. For this purpose, we simulated 100,000 wildfire events within the study area, randomly drawing from the observed frequency distribution of burn periods and wind directions for each fire. Estimates of burn probability, excluding non-burnable fuels, ranged from 0 to 1.92 × 10⁻³, with a mean value of 6.48 × 10⁻⁵. Overall, the outputs provided a quantitative assessment of wildfire exposure at the landscape scale and captured landscape properties of wildfire exposure. We then examined how the exposure profiles varied among and within selected features and assets located on the island. Spatial variation in the modeled outputs showed a strong effect of fuel models, coupled with slope and weather. In particular, the combined effect of Mediterranean maquis, woodland areas, and complex topography on flame length was relevant, mainly in northeast Sardinia, whereas areas with herbaceous fuels and flat terrain were in general characterized by lower fire intensity but higher burn probability. The simulation modeling proposed in this work provides a quantitative approach to inform wildfire risk management activities, and represents one of the first applications of burn probability modeling to capture fire risk and exposure profiles in the Mediterranean Basin.

  11. Geospatial modeling approach to monument construction using Michigan from A.D. 1000-1600 as a case study.

    PubMed

    Howey, Meghan C L; Palace, Michael W; McMichael, Crystal H

    2016-07-05

    Building monuments was one way that past societies reconfigured their landscapes in response to shifting social and ecological factors. Understanding the connections between those factors and monument construction is critical, especially when multiple types of monuments were constructed across the same landscape. Geospatial technologies enable past cultural activities and environmental variables to be examined together at large scales. Many geospatial modeling approaches, however, are not designed for presence-only (occurrence) data, which can be limiting given that many archaeological site records are presence only. We use maximum entropy modeling (MaxEnt), which works with presence-only data, to predict the distribution of monuments across large landscapes, and we analyze MaxEnt output to quantify the contributions of spatioenvironmental variables to predicted distributions. We apply our approach to co-occurring Late Precontact (ca. A.D. 1000-1600) monuments in Michigan: (i) mounds and (ii) earthwork enclosures. Many of these features have been destroyed by modern development, and therefore, we conducted archival research to develop our monument occurrence database. We modeled each monument type separately using the same input variables. Analyzing variable contribution to MaxEnt output, we show that mound and enclosure landscape suitability was driven by contrasting variables. Proximity to inland lakes was key to mound placement, and proximity to rivers was key to sacred enclosures. This juxtaposition suggests that mounds met local needs for resource procurement success, whereas enclosures filled broader regional needs for intergroup exchange and shared ritual. Our study shows how MaxEnt can be used to develop sophisticated models of past cultural processes, including monument building, with imperfect, limited, presence-only data.

  12. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
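
    The core accounting step of such a model can be sketched with the Leontief inverse, x = (I − A)⁻¹ y, augmented with material coefficients. The tiny three-sector matrix and lead coefficients below are invented for illustration and do not reproduce the paper's 13-sector U.S. data.

      # Hypothetical sketch: total economic output and associated material (e.g., lead) flows
      # for a final demand vector, via the Leontief inverse x = (I - A)^-1 y.
      import numpy as np

      A = np.array([[0.10, 0.05, 0.02],      # $ of sector i needed per $ of sector j output
                    [0.20, 0.15, 0.10],
                    [0.05, 0.10, 0.05]])
      material_per_dollar = np.array([0.0, 0.8, 0.1])   # kg of lead directly used per $ of output

      y = np.array([100.0, 50.0, 200.0])     # final demand in $ by sector
      x = np.linalg.solve(np.eye(3) - A, y)  # total (direct + indirect) output by sector

      total_material = material_per_dollar @ x
      print("sector outputs ($):", np.round(x, 2))
      print("total embodied lead (kg):", round(total_material, 2))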

  13. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  14. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  15. Climate Model Diagnostic Analyzer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei

    2015-01-01

    The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. With an exploratory nature of climate data analyses and an explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaustad, K.L.; De Steese, J.G.

    A computer program was developed to analyze the viability of integrating superconducting magnetic energy storage (SMES) with proposed wind farm scenarios at a site near Browning, Montana. The program simulated an hour-by-hour account of the charge/discharge history of a SMES unit for a representative wind-speed year. Effects of power output, storage capacity, and power conditioning capability on SMES performance characteristics were analyzed on a seasonal, diurnal, and hourly basis. The SMES unit was assumed to be charged during periods when power output of the wind resource exceeded its average value. Energy was discharged from the SMES unit into the grid during periods of low wind speed to compensate for below-average output of the wind resource. The option of using SMES to provide power continuity for a wind farm supplemented by combustion turbines was also investigated. Levelizing the annual output of large wind energy systems operating in the Blackfeet area of Montana was found to require a storage capacity too large to be economically viable. However, it appears that intermediate-sized SMES units could economically levelize the wind energy output on a seasonal basis.

  17. A new simple ∞OH neuron model as a biologically plausible principal component analyzer.

    PubMed

    Jankovic, M V

    2003-01-01

    A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
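
    For comparison, the sketch below shows a single linear neuron extracting the first principal component with Oja's classic stabilized Hebbian rule; it is a stand-in illustration of Hebbian principal component extraction, not the exact feed-forward/feedback update proposed in the paper.

      # Sketch of principal-component extraction by a single linear neuron (Oja's rule).
      import numpy as np

      rng = np.random.default_rng(3)
      # Zero-mean 2-D inputs with most variance along one direction.
      cov = np.array([[3.0, 1.5], [1.5, 1.0]])
      X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=5000)

      w = rng.normal(size=2)
      eta = 0.01
      for x in X:
          y = w @ x                      # neuron output
          w += eta * y * (x - y * w)     # Oja update: Hebbian term with implicit normalization

      true_pc = np.linalg.eigh(cov)[1][:, -1]          # dominant eigenvector of the covariance
      print("learned weights (up to sign):", np.round(w / np.linalg.norm(w), 3))
      print("true principal component:    ", np.round(true_pc, 3))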

  18. When causality does not imply correlation: more spadework at the foundations of scientific psychology.

    PubMed

    Marken, Richard S; Horth, Brittany

    2011-06-01

    Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.

  19. Electrical safety of conducted electrical weapons relative to requirements of relevant electrical standards.

    PubMed

    Panescu, Dorin; Nerheim, Max; Kroll, Mark

    2013-01-01

    TASER(®) conducted electrical weapons (CEW) deliver electrical pulses that can inhibit a person's neuromuscular control or temporarily incapacitate. TASER X26, X26P, and X2 are among CEW models most frequently deployed by law enforcement agencies. The X2 CEW uses two cartridge bays while the X26 and X26P CEWs have only one. The TASER X26P CEW electronic output circuit design is equivalent to that of any one of the two TASER X2 outputs. The goal of this paper was to analyze the nominal electrical outputs of TASER X26, X26P, and X2 CEWs in reference to provisions of several international standards that specify safety requirements for electrical medical devices and electrical fences. Although these standards do not specifically mention CEWs, they are the closest electrical safety standards and hence give very relevant guidance. The outputs of two TASER X26 and two TASER X2 CEWs were measured and confirmed against manufacturer and other published specifications. The TASER X26, X26P, and X2 CEWs electrical output parameters were reviewed against relevant safety requirements of UL 69, IEC 60335-2-76 Ed 2.1, IEC 60479-1, IEC 60479-2, AS/NZS 60479.1, AS/NZS 60479.2 and IEC 60601-1. Prior reports on similar topics were reviewed as well. Our measurements and analyses confirmed that the nominal electrical outputs of TASER X26, X26P and X2 CEWs lie within safety bounds specified by relevant requirements of the above standards.

  20. RESULTS FROM KINEROS STREAM CHANNEL ELEMENTS MODEL OUTPUT THROUGH AGWA DIFFERENCING 1973 AND 1997 NALC LANDCOVER DATA

    EPA Science Inventory

    Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.

  1. Modeling and Dynamic Analysis of Paralleled of dc/dc Converters with Master-Slave Current Sharing Control

    NASA Technical Reports Server (NTRS)

    Rajagopalan, J.; Xing, K.; Guo, Y.; Lee, F. C.; Manners, Bruce

    1996-01-01

    A simple, application-oriented, transfer function model of paralleled converters employing Master-Slave Current-sharing (MSC) control is developed. Dynamically, the Master converter retains its original design characteristics; all the Slave converters are forced to depart significantly from their original design characteristics into current-controlled current sources. Five distinct loop gains to assess system stability and performance are identified and their physical significance is described. A design methodology for the current share compensator is presented. The effect of this current sharing scheme on 'system output impedance' is analyzed.

  2. Settlement Relocation Modeling: Reacting to Merapi’s Eruption Incident

    NASA Astrophysics Data System (ADS)

    Pramitasari, A.; Buchori, I.

    2018-02-01

    The Merapi eruption caused severe damage in Central Java Province. Klaten was one of the most affected areas, specifically in Balerante Village. This research develops a GIS model for finding alternative locations for settlements impacted in the hazardous zones of the eruption. The principal objective of the study is to identify and analyze the physical conditions, community characteristics, and local government regulations related to the settlement relocation plan for the area impacted by the eruption. The output is a location map classified into four categories, i.e., not available, available with low accessibility, available with medium accessibility, and available with high accessibility.

  3. Computer aided system engineering and analysis (CASE/A) modeling package for ECLS systems - An overview

    NASA Technical Reports Server (NTRS)

    Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.

    1990-01-01

    An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has improved engineering productivity during ECLSS design activities. A component verification program was performed to assure component modeling validity based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.

  4. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message-passing framework between generic input, model, and output drivers, and specifies an API for developing such drivers. It also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), as well as sample I/O drivers. This is a library framework and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in the development of models which operate on sequential information, such as time series, where evaluation is based on prior results combined with new data for the current iteration. It has applications in quality monitoring and was developed as part of the CANARY-EDS software, where real-time water quality data are analyzed for anomalies.

  5. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
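
    A basic directional distance function DEA program of the kind the enhanced model builds on can be written as a small linear program, as sketched below with scipy (constant returns to scale, one input, one desirable and one undesirable output). The farm data and direction vector are invented, and the interval-data and climatic-factor extensions of the paper are not implemented here.

      # Hypothetical sketch: a DDF DEA inefficiency score for each farm, expanding the
      # desirable output and contracting the undesirable output simultaneously.
      import numpy as np
      from scipy.optimize import linprog

      # 5 farms: one input (land, ha), one desirable output (rice, t), one undesirable (emissions, t).
      x = np.array([[10.0, 12.0, 8.0, 15.0, 9.0]])     # inputs, shape (m, n)
      y = np.array([[40.0, 45.0, 30.0, 50.0, 38.0]])   # desirable outputs, shape (s, n)
      b = np.array([[5.0, 6.0, 4.5, 8.0, 4.0]])        # undesirable outputs, shape (k, n)
      n = x.shape[1]

      def ddf_score(o):
          gy, gb = y[:, o], b[:, o]                    # direction: scale by the unit's own outputs
          # decision variables: [beta, lam_1 ... lam_n]; minimize -beta
          c = np.concatenate(([-1.0], np.zeros(n)))
          A_ub = np.vstack([np.hstack((np.zeros((x.shape[0], 1)), x)),    # inputs: X lam <= x_o
                            np.hstack((gy.reshape(-1, 1), -y))])          # outputs: Y lam >= y_o + beta*gy
          b_ub = np.concatenate((x[:, o], -y[:, o]))
          A_eq = np.hstack((gb.reshape(-1, 1), b))                        # bad outputs: B lam = b_o - beta*gb
          b_eq = b[:, o]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0, None)] * (n + 1))
          return -res.fun

      for o in range(n):
          print("farm", o, "inefficiency beta =", round(ddf_score(o), 3))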

  6. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-07

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  7. Analysis of urban metabolic processes based on input-output method: model development and a case study for Beijing

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Liu, Hong; Chen, Bin; Zheng, Hongmei; Li, Yating

    2014-06-01

    Discovering ways in which to increase the sustainability of the metabolic processes involved in urbanization has become an urgent task for urban design and management in China. As cities are analogous to living organisms, the disorders of their metabolic processes can be regarded as the cause of "urban disease". Therefore, identification of these causes through metabolic process analysis and ecological element distribution through the urban ecosystem's compartments will be helpful. By using Beijing as an example, we have compiled monetary input-output tables from 1997, 2000, 2002, 2005, and 2007 and calculated the intensities of the embodied ecological elements to compile the corresponding implied physical input-output tables. We then divided Beijing's economy into 32 compartments and analyzed the direct and indirect ecological intensities embodied in the flows of ecological elements through urban metabolic processes. Based on the combination of input-output tables and ecological network analysis, the description of multiple ecological elements transferred among Beijing's industrial compartments and their distribution has been refined. This hybrid approach can provide a more scientific basis for management of urban resource flows. In addition, the data obtained from distribution characteristics of ecological elements may provide a basic data platform for exploring the metabolic mechanism of Beijing.

  8. The Impact of Input and Output Prices on The Household Economic Behavior of Rice-Livestock Integrated Farming System (Rlifs) and Non Rlifs Farmers

    NASA Astrophysics Data System (ADS)

    Lindawati, L.; Kusnadi, N.; Kuntjoro, S. U.; Swastika, D. K. S.

    2018-02-01

    An integrated farming system is a system that emphasizes linkages and synergism among farming units through waste utilization. The objective of the study was to analyze the impact of input and output prices on both Rice-Livestock Integrated Farming System (RLIFS) and non-RLIFS farmers. The study used an econometric model in the form of a simultaneous-equations system consisting of 36 equations (18 behavioral and 18 identity equations). The impact of changes in selected variables was obtained through simulation of input and output prices on the simultaneous equations. The results showed that increases in the prices of seed, SP-36, urea, medication/vitamins, manure, bran, and straw had a negative impact on the production of rice, cattle, manure, bran, and straw, and on household income. The decreases in rice and cattle production, production input usage, allocation of family labor, and rice and cattle business income were greater for RLIFS than for non-RLIFS farmers. The impact of rising rice and cattle prices in the two groups of farmers was not very different because (1) farm waste was not used effectively and (2) manure and straw accounted for a small proportion of production costs. The increase in input and output prices did not have an impact on production costs or household expenditures for RLIFS farmers.

  9. Method and apparatus for automatically generating airfoil performance tables

    NASA Technical Reports Server (NTRS)

    van Dam, Cornelis P. (Inventor); Mayda, Edward A. (Inventor); Strawn, Roger Clayton (Inventor)

    2006-01-01

    One embodiment of the present invention provides a system that facilitates automatically generating a performance table for an object, wherein the object is subject to fluid flow. The system operates by first receiving a description of the object and testing parameters for the object. The system executes a flow solver using the testing parameters and the description of the object to produce an output. Next, the system determines if the output of the flow solver indicates negative density or pressure. If not, the system analyzes the output to determine if the output is converging. If converging, the system writes the output to the performance table for the object.
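
    The control flow described above can be sketched as follows; the flow solver call, convergence test, and test conditions are placeholders for illustration, not the patented system itself.

      # Hypothetical sketch of the described loop: run the flow solver for each test condition,
      # reject runs with negative density/pressure, keep only converged results, and append
      # the rest to the performance table. The solver is a mock placeholder.
      import math

      def run_flow_solver(alpha_deg, mach):
          """Placeholder for the external flow solver; returns a mock result record."""
          cl = 2 * math.pi * math.radians(alpha_deg) / math.sqrt(max(1e-6, 1 - mach ** 2))
          return {"cl": cl, "cd": 0.01 + 0.05 * math.radians(alpha_deg) ** 2,
                  "negative_density_or_pressure": mach > 0.95,
                  "residual_history": [10 ** (-k) for k in range(8)]}

      def is_converging(residuals, tol=1e-6):
          return residuals[-1] < tol and residuals[-1] < residuals[0]

      performance_table = []
      for mach in (0.2, 0.5, 0.8, 0.98):
          for alpha in (0.0, 2.0, 4.0):
              out = run_flow_solver(alpha, mach)
              if out["negative_density_or_pressure"]:
                  continue                       # unphysical state: skip this condition
              if not is_converging(out["residual_history"]):
                  continue                       # unconverged: do not tabulate
              performance_table.append((mach, alpha, out["cl"], out["cd"]))

      for row in performance_table:
          print("M=%.2f alpha=%.1f  Cl=%.3f  Cd=%.4f" % row)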

  10. Composite blade structural analyzer (COBSTRAN) user's manual

    NASA Technical Reports Server (NTRS)

    Aiello, Robert A.

    1989-01-01

    The installation and use of a computer code, COBSTRAN (COmposite Blade STRuctural ANalyzer), developed for the design and analysis of composite turbofan and turboprop blades and also for composite wind turbine blades, are described. This code combines composite mechanics and laminate theory with an internal data base of fiber and matrix properties. Inputs to the code are constituent fiber and matrix material properties, factors reflecting the fabrication process, composite geometry, and blade geometry. COBSTRAN performs the micromechanics, macromechanics, and laminate analyses of these fiber composites. COBSTRAN generates a NASTRAN model with equivalent anisotropic homogeneous material properties. Stress output from NASTRAN is used to calculate individual ply stresses, strains, interply stresses, through-the-thickness stresses, and failure margins. Curved panel structures may be modeled provided the curvature of a cross-section is defined by a single-valued function. COBSTRAN is written in FORTRAN 77.

  11. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  12. Arctic Ocean Model Intercomparison Using Sound Speed

    NASA Astrophysics Data System (ADS)

    Dukhovskoy, D. S.; Johnson, M. A.

    2002-05-01

    The monthly and annual means from three Arctic ocean - sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 m - 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer having been filtered out in the upper layer. There is no seasonal signal in the deep layer and the monthly means insignificantly oscillate about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19 year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean - air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmosphere forcing. To compare data from the three models we have used a one-way t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of tested data is violated), and one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.
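
    The hypothesis tests named above are available in scipy; the sketch below applies a one-sample t-test, the Wilcoxon signed-rank test, and a one-way ANOVA F-test to synthetic sound-speed series standing in for the three model outputs (the abstract's "one-way t-test" is represented here as a one-sample t-test, an assumption on our part).

      # Hypothetical sketch: compare synthetic sound-speed series from three models against a
      # common reference mean, then test whether the three model means differ.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      reference_mean = 1465.0                                  # m/s, placeholder long-period mean
      models = {"model_A": rng.normal(1465.2, 0.4, 228),       # 19 years x 12 months
                "model_B": rng.normal(1465.0, 0.5, 228),
                "model_C": rng.normal(1464.5, 0.6, 228)}

      for name, series in models.items():
          t_stat, t_p = stats.ttest_1samp(series, reference_mean)
          w_stat, w_p = stats.wilcoxon(series - reference_mean)   # nonparametric alternative
          print(f"{name}: t-test p={t_p:.3g}, Wilcoxon p={w_p:.3g}")

      f_stat, f_p = stats.f_oneway(*models.values())
      print(f"one-way ANOVA across models: F={f_stat:.2f}, p={f_p:.3g}")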

  13. Economy-wide material input/output and dematerialization analysis of Jilin Province (China).

    PubMed

    Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun

    2010-06-01

    In this paper, both direct material input (DMI) and domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated and then based on these two indexes, a dematerialization model was established. The main results are summarized as follows: (1) both direct material input and domestic processed output increase at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth is 44%, indicating that the economic growth is visibly extensive. (3) During the studied period, accumulative quantity of material input dematerialization is 11,543 × 10⁴ t and quantity of waste dematerialization is 5,987 × 10⁴ t. Moreover, dematerialization gaps are positive, suggesting that the potential of dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province represents an unsustainable state. The accelerated economic growth relies mostly on excessive resources consumption after the Revitalization Strategy of Northeast China was launched.

  14. Algorithms for output feedback, multiple-model, and decentralized control problems

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.

  15. Comparative study on life cycle environmental impact assessment of copper and aluminium cables

    NASA Astrophysics Data System (ADS)

    Bao, Wei; Lin, Ling; Song, Dan; Guo, Huiting; Chen, Liang; Sun, Liang; Liu, Mei; Chen, Jianhua

    2017-11-01

    With the rapid development of industrialization and urbanization in China, domestic demands for copper and aluminium resources increase continuously and the output of copper and aluminium minerals rises steadily. The output of copper in China increased from 0.6 million tons (metal quantity) in 2003 to 1.74 million tons (metal quantity) in 2014, and the output of bauxite increased from 21 million tons in 2006 to 59.21 million tons in 2014. In the meantime, the import of copper and aluminium minerals of China is also on a rise. The import of copper concentrate and bauxite increased from 4.94 million tons and 9.68 million tons in 2006 to 10.08 million tons and 70.75 million tons in 2013 respectively. Copper and aluminium resources are widely applied in fields such as construction, electrical and electronics, machinery manufacturing, and transportation, and serve as important material basis for the national economic and social development of China. Cable industry is a typical industry where copper and aluminium resources are widely used. In this paper, a product assessment model is built from the perspective of product life cycle. Based on CNLCD database, differences in environmental impacts of copper and aluminium cables are analyzed from aspects such as resource acquisition, product production, transportation, utilization, and resource recycling. Furthermore, the advantages and disadvantages of products at different stages with different types of environmental impact are analyzed, so as to provide data support for cable industry in terms of product design and production, etc.

  16. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    NASA Astrophysics Data System (ADS)

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which a positive topological entropy.

  17. A topological proof of chaos for two nonlinear heterogeneous triopoly game models.

    PubMed

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which the existing literature has shown the presence of complex phenomena and strange attractors via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while in the second model the demand function is linear and production costs are quadratic. Concerning the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among them a positive topological entropy.

  18. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
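
    A minimal sketch of the multi-fidelity idea behind GFR, not the authors' formulation: a Gaussian process is trained on the discrepancy between a cheap low-fidelity model and a handful of high-fidelity runs, and its posterior supplies an output prediction with an uncertainty estimate. The two model functions and training points are hypothetical.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical stand-ins for the high- and low-fidelity output maps.
    def high_fidelity(x):
        return np.sin(3 * x) + 0.3 * x**2

    def low_fidelity(x):
        return np.sin(3 * x)               # cheap model misses the quadratic trend

    # A few expensive high-fidelity training runs.
    X_train = np.linspace(0.0, 2.0, 6)[:, None]
    discrepancy = high_fidelity(X_train.ravel()) - low_fidelity(X_train.ravel())

    # GP prior on the discrepancy between the two fidelities.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X_train, discrepancy)

    # Posterior prediction of the high-fidelity output with uncertainty.
    X_new = np.linspace(0.0, 2.0, 5)[:, None]
    mean_d, std_d = gp.predict(X_new, return_std=True)
    prediction = low_fidelity(X_new.ravel()) + mean_d
    print(np.c_[X_new.ravel(), prediction, std_d])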

  19. Blanks: a computer program for analyzing furniture rough-part needs in standard-size blanks

    Treesearch

    Philip A. Araman

    1983-01-01

    A computer program is described that allows a company to determine the number of edge-glued, standard-size blanks required to satisfy its rough-part needs for a given production period. Yield and cost information also is determined by the program. A list of the program inputs, outputs, and uses of outputs is described, and an example analysis with sample output is...

  20. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs in space and time. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
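
    A hedged sketch of the overall workflow, assuming hypothetical model functions and observations: an ensemble of bagged regressors (a stand-in for the BMARS surrogate, since a MARS implementation may not be available) is trained on samples of the simulation model, and the NRMSE between surrogate-predicted and observed heads is minimized by a global optimizer.

    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)

    # Hypothetical "physical model": heads (m) at 3 wells for 2 parameters
    # (hydraulic conductivity k and recharge rch); a stand-in for MODFLOW runs.
    def run_model(params):
        k, rch = params
        return np.array([50 - 2.0 * k + 5.0 * rch,
                         48 - 1.5 * k + 4.0 * rch,
                         52 - 2.5 * k + 6.0 * rch])

    # 1) Training set sampled from (what would be) expensive simulations.
    X_train = rng.uniform([0.5, 0.1], [5.0, 2.0], size=(200, 2))
    Y_train = np.array([run_model(x) for x in X_train])

    # 2) One bagged surrogate per observation well (stand-in for BMARS;
    #    use base_estimator= instead of estimator= on scikit-learn < 1.2).
    surrogates = [BaggingRegressor(estimator=DecisionTreeRegressor(),
                                   n_estimators=50, random_state=i
                                   ).fit(X_train, Y_train[:, i])
                  for i in range(Y_train.shape[1])]

    # 3) NRMSE objective against observed heads, evaluated on the cheap surrogate.
    h_obs = np.array([47.0, 45.5, 48.5])

    def nrmse(params):
        h_sim = np.array([s.predict(params.reshape(1, -1))[0] for s in surrogates])
        return np.sqrt(np.mean((h_sim - h_obs) ** 2)) / (h_obs.max() - h_obs.min())

    best = differential_evolution(nrmse, bounds=[(0.5, 5.0), (0.1, 2.0)], seed=0)
    print(best.x, best.fun)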

  1. Impacts of weighting climate models for hydro-meteorological climate change studies

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe; Caya, Daniel

    2017-06-01

    Weighting climate models is controversial in climate change impact studies that use an ensemble of climate simulations from different climate models. In climate science, there is a general consensus that all climate models should be considered as having equal performance, or in other words that all projections are equiprobable. On the other hand, in the impacts and adaptation community, many believe that climate models should be weighted based on their ability to better represent various metrics over a reference period. The debate appears to be partly philosophical in nature, as few studies have investigated the impact of using weights when projecting future climate changes. The present study focuses on the impact of assigning weights to climate models for hydrological climate change studies. Five methods are used to determine weights for an ensemble of 28 global climate models (GCMs) taken from the Coupled Model Intercomparison Project Phase 5 (CMIP5) database. Using a hydrological model, streamflows are computed over a reference (1961-1990) and a future (2061-2090) period, with and without post-processing of the climate model outputs. The impacts of using different weighting schemes for GCM simulations are then analyzed in terms of ensemble mean and uncertainty. The results show that weighting GCMs has a limited impact both on the projected future climate, in terms of precipitation and temperature changes, and on hydrology, in terms of nine different streamflow criteria. These results apply to both raw and post-processed GCM outputs, thus supporting the view that climate models should be considered equiprobable.
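
    A toy illustration of the comparison at the heart of the study: the ensemble mean and spread of projected changes under equal weights versus skill-based weights. The projected changes and the inverse-RMSE weighting scheme are invented for the example and are not the five methods used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical projected temperature changes (degC) from 28 GCMs.
    delta_t = rng.normal(loc=2.8, scale=0.6, size=28)

    # Hypothetical reference-period errors used to build skill weights
    # (e.g., RMSE of each GCM against observations, 1961-1990).
    rmse_ref = rng.uniform(0.5, 2.0, size=28)
    weights = (1.0 / rmse_ref) / np.sum(1.0 / rmse_ref)   # inverse-error weighting

    equal_mean = delta_t.mean()                           # "one model, one vote"
    weighted_mean = np.sum(weights * delta_t)

    # Ensemble spread under both schemes (weighted standard deviation).
    equal_sd = delta_t.std(ddof=1)
    weighted_sd = np.sqrt(np.sum(weights * (delta_t - weighted_mean) ** 2))

    print(f"equal-weight mean   {equal_mean:.2f} +/- {equal_sd:.2f} degC")
    print(f"skill-weighted mean {weighted_mean:.2f} +/- {weighted_sd:.2f} degC")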

  2. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches comprised three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied both to PCM output directly (at T42 spatial resolution) and after dynamical downscaling via a Regional Climate Model (RCM, at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and to gridded hydrologic variables obtained by forcing the hydrologic model with observations. The most significant finding is that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
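
    The bias-correction step of BCSD is commonly implemented as an empirical quantile mapping between the model and observed distributions over the reference period; the sketch below shows that step for one hypothetical grid cell and variable (the data are synthetic, and the spatial-disaggregation step is omitted).

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical monthly precipitation (mm) for one grid cell, 1975-1995.
    obs = rng.gamma(shape=2.0, scale=40.0, size=240)       # gridded observations
    gcm_hist = rng.gamma(shape=2.0, scale=55.0, size=240)  # biased GCM output
    gcm_fut = rng.gamma(shape=2.0, scale=60.0, size=240)   # future GCM output

    def quantile_map(x, model_ref, obs_ref, n_q=100):
        """Empirical quantile mapping: replace each model value by the observed
        value at the same quantile of the reference-period distributions."""
        q = np.linspace(0.01, 0.99, n_q)
        model_q = np.quantile(model_ref, q)
        obs_q = np.quantile(obs_ref, q)
        return np.interp(x, model_q, obs_q)

    gcm_hist_bc = quantile_map(gcm_hist, gcm_hist, obs)
    gcm_fut_bc = quantile_map(gcm_fut, gcm_hist, obs)

    print("means:", obs.mean(), gcm_hist.mean(), gcm_hist_bc.mean(), gcm_fut_bc.mean())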

  3. Detection of Bi-Directionality in Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2012-01-01

    An indicator variable was developed for both visualization and detection of bi-directionality in wind tunnel strain-gage balance calibration data. First, the calculation of the indicator variable is explained in detail. Then, a criterion is discussed that may be used to decide which gage outputs of a balance have bi-directional behavior. The result of this analysis could be used, for example, to justify the selection of certain absolute value or other even-function terms in the regression model of gage outputs whenever the Iterative Method is chosen for the balance calibration data analysis. Calibration data of NASA's MK40 Task balance is analyzed to illustrate both the calculation of the indicator variable and the application of the proposed criterion. Finally, bi-directionality characteristics of typical multi-piece, hybrid, single-piece, and semispan balances are determined and discussed.
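
    A hedged illustration of why an even (absolute value) term can matter for a bi-directional gage: fitting a synthetic calibration response with and without an |load| regressor and comparing residuals. This is not the paper's indicator variable, only a motivation for the regression terms it is meant to justify.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical calibration loads and a gage output with a bi-directional
    # component (different behavior for positive and negative loads).
    load = np.linspace(-100.0, 100.0, 41)
    output = 0.8 * load + 0.05 * np.abs(load) + rng.normal(0.0, 0.2, load.size)

    # Regression model without and with the even (absolute value) term.
    X_odd = np.c_[np.ones_like(load), load]
    X_even = np.c_[np.ones_like(load), load, np.abs(load)]

    for name, X in [("odd terms only", X_odd), ("with |load| term", X_even)]:
        coef, res, *_ = np.linalg.lstsq(X, output, rcond=None)
        rms = np.sqrt(np.mean((output - X @ coef) ** 2))
        print(f"{name:18s} rms residual = {rms:.3f}")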

  4. Theoretical and experimental investigations on high peak power Q-switched Nd:YAG laser at 1112 nm

    NASA Astrophysics Data System (ADS)

    He, Miao; Yang, Feng; Wang, Zhi-Chao; Gao, Hong-Wei; Yuan, Lei; Li, Chen-Long; Zong, Nan; Shen, Yu; Bo, Yong; Peng, Qin-Jun; Cui, Da-Fu; Xu, Zu-Yan

    2018-07-01

    We report on the experimental measurement and theoretical analysis on a Q-switched high peak power laser diode (LD) side-pumped 1112 nm Nd:YAG laser by means of special mirrors coating design in cavity. In theory, a numerical model, based on four-wavelength rate equations, is performed to analyze the competition process of different gain lines and the output characteristics of the Q-switched Nd:YAG laser. In the experiment, a maximum output power of 25.2 W with beam quality factor M2 of 1.46 is obtained at the pulse repetition rate of 2 kHz and 210 ns of pulse width, corresponding to a pulse energy and peak power of 12.6 mJ and 60 kW, respectively. The experimental data agree well with the theoretical simulation results.

  5. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), whose flexibility and versatility were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators that evaluated system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  6. Empirical relationships among oliguria, creatinine, mortality, and renal replacement therapy in the critically ill.

    PubMed

    Mandelbaum, Tal; Lee, Joon; Scott, Daniel J; Mark, Roger G; Malhotra, Atul; Howell, Michael D; Talmor, Daniel

    2013-03-01

    The observation periods and thresholds of serum creatinine and urine output defined in the Acute Kidney Injury Network (AKIN) criteria were not empirically derived. By continuously varying creatinine/urine output thresholds as well as the observation period, we sought to investigate the empirical relationships among creatinine, oliguria, in-hospital mortality, and receipt of renal replacement therapy (RRT). Using a high-resolution database (Multiparameter Intelligent Monitoring in Intensive Care II), we extracted data from 17,227 critically ill patients with an in-hospital mortality rate of 10.9 %. The 14,526 patients had urine output measurements. Various combinations of creatinine/urine output thresholds and observation periods were investigated by building multivariate logistic regression models for in-hospital mortality and RRT predictions. For creatinine, both absolute and percentage increases were analyzed. To visualize the dependence of adjusted mortality and RRT rate on creatinine, the urine output, and the observation period, we generated contour plots. Mortality risk was high when absolute creatinine increase was high regardless of the observation period, when percentage creatinine increase was high and the observation period was long, and when oliguria was sustained for a long period of time. Similar contour patterns emerged for RRT. The variability in predictive accuracy was small across different combinations of thresholds and observation periods. The contour plots presented in this article complement the AKIN definition. A multi-center study should confirm the universal validity of the results presented in this article.

  7. Analysis of a utility-interactive wind-photovoltaic hybrid system with battery storage using neural network

    NASA Astrophysics Data System (ADS)

    Giraud, Francois

    1999-10-01

    This dissertation investigates the application of neural network theory to the analysis of a 4-kW utility-interactive wind-photovoltaic system (WPS) with battery storage. The hybrid system comprises a 2.5-kW photovoltaic generator and a 1.5-kW wind turbine. The wind generator produces power at variable speed and variable frequency (VSVF). The wind energy is converted into dc power by a controlled, three-phase, full-wave bridge rectifier. The PV power is maximized by a Maximum Power Point Tracker (MPPT), a dc-to-dc chopper switching at a frequency of 45 kHz. The combined dc power of both subsystems is stored in the battery bank or conditioned by a single-phase self-commutated inverter to be sold to the utility in a predetermined amount. First, the PV array is modeled using an Artificial Neural Network (ANN). To reduce model uncertainty, the open-circuit voltage VOC and the short-circuit current ISC of the PV are chosen as the model input variables of the ANN. These input variables have the advantage of incorporating the effects of the quantifiable and non-quantifiable environmental variants affecting the PV power. Then, a simplified way to accurately predict the dynamic responses of the grid-linked WPS to gusty winds using a Recurrent Neural Network (RNN) is investigated. The RNN is a single-output feedforward backpropagation network with external feedback, which allows past responses to be fed back to the network input. In the third step, a Radial Basis Function (RBF) network is used to analyze the effects of clouds on the utility-interactive WPS. Using the irradiance as input signal, the network models the effects of random cloud movement on the output current, the output voltage, and the output power of the PV system, as well as on the electrical output variables of the grid-linked inverter. Fourth, using the RNN, the combined effects of a random cloud and wind gusts on the system are analyzed. For short period intervals, the wind speed and the solar radiation are considered as the sole sources of power, whose variations influence the system variables. Since both subsystems have different dynamics, their respective responses are expected to impact the whole system behavior differently. The dispatchability of the battery-supported system, as well as its stability and reliability during gusts and/or cloud passage, is also discussed. In the fifth step, the goal is to determine to what extent the overall power quality of the grid would be affected by a proliferation of utility-interactive hybrid systems and whether recourse to bulk or individual filtering and voltage control is necessary. The final stage of the research includes a steady-state analysis of two years of operation (May 96-Apr 98) of the system, with a discussion of system reliability, of any loss-of-supply probability, and of the effects of the randomness in the wind and solar radiation upon the system design optimization.

  8. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (d/MUwnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors account for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter effects. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, over 1,000 output data points were taken at the time of system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and the patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M × 100%). Results: A GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03 ± 0.98% (mean ± SD) and the differences for all measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5 × 5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
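
    A simplified sketch of a multiplicative correction-factor output model of this general form, with 1-D interpolation of tabulated factors and an inverse-square term. All table values, the reference SSD, and the omission of RSF and the 2-D OCR table are assumptions for illustration, not the commissioned Mevion S250 data.

    import numpy as np

    # Hypothetical commissioning tables for one option (values are illustrative).
    sobp_width = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # cm
    sobp_factor = np.array([1.05, 1.02, 1.00, 0.98, 0.95])

    field_size = np.array([5.0, 10.0, 15.0, 20.0])            # cm, square field side
    fs_factor = np.array([0.97, 1.00, 1.01, 1.02])

    gantry = np.array([0.0, 90.0, 180.0, 270.0, 360.0])       # deg
    gantry_factor = np.array([1.000, 1.015, 0.990, 1.010, 1.000])

    REF_OUTPUT = 1.0    # cGy/MU under reference conditions (ROF)

    def predicted_output(mod_width, fs, gantry_deg, ssd, ssd_ref=227.0):
        """Output (cGy/MU) as a product of interpolated correction factors.

        Only a subset of the factors in the published model is sketched here;
        RSF and the 2-D OCR table are omitted for brevity."""
        sobpf = np.interp(mod_width, sobp_width, sobp_factor)
        fsf = np.interp(fs, field_size, fs_factor)
        gacf = np.interp(gantry_deg % 360.0, gantry, gantry_factor)
        isf = (ssd_ref / ssd) ** 2                             # inverse-square factor
        return REF_OUTPUT * sobpf * fsf * gacf * isf

    print(predicted_output(mod_width=5.0, fs=12.0, gantry_deg=45.0, ssd=230.0))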

  9. Characteristics of tropical cyclones in high-resolution models in the present climate

    DOE PAGES

    Shaevitz, Daniel A.; Camargo, Suzana J.; Sobel, Adam H.; ...

    2014-12-05

    The global characteristics of tropical cyclones (TCs) simulated by several climate models are analyzed and compared with observations. The global climate models were forced by the same sea surface temperature (SST) fields in two types of experiments, using climatological SST and interannually varying SST. TC tracks and intensities are derived from each model's output fields by the group who ran that model, using their own preferred tracking scheme; the study considers the combination of model and tracking scheme as a single modeling system, and compares the properties derived from the different systems. Overall, the observed geographic distribution of global TC frequency was reasonably well reproduced. As expected, with the exception of one model, intensities of the simulated TCs were lower than in observations, to a degree that varies considerably across models.

  10. Supercooled Liquid Water Content Instrument Analysis and Winter 2014 Data with Comparisons to the NASA Icing Remote Sensing System and Pilot Reports

    NASA Technical Reports Server (NTRS)

    King, Michael C.

    2016-01-01

    The National Aeronautics and Space Administration (NASA) has developed a system for remotely detecting the hazardous conditions leading to aircraft icing in flight, the NASA Icing Remote Sensing System (NIRSS). Newly developed, weather balloon-borne instruments have been used to obtain in-situ measurements of supercooled liquid water during March 2014 to validate the algorithms used in the NIRSS. A mathematical model and a processing method were developed to analyze the data obtained from the weather balloon soundings. The data from soundings obtained in March 2014 were analyzed and compared to the output from the NIRSS and pilot reports.

  11. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

    Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward-augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Accordingly, modifications are suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.

  12. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information regarding the data's distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  13. Simulation Development and Analysis of Crew Vehicle Ascent Abort

    NASA Technical Reports Server (NTRS)

    Wong, Chi S.

    2016-01-01

    NASA's Commercial Crew Program is an integral step in its journey to Mars, as it would expedite the development of space technologies and open up partnerships with U.S. commercial companies. NASA's review and independent assessment of the Commercial Crew Program are fundamental to its success, and being able to model a commercial crew vehicle in a simulation rather than conduct a live test would be a safer, faster, and less expensive way to assess and certify the capabilities of the vehicle. To this end, my project was to determine the feasibility of using a simulation tool named SOMBAT version 2.0 to model a multiple-parachute system for Commercial Crew Program simulation. The main tasks assigned to me were to debug and test the main parachute system model (capable of simulating one to four main parachute bodies), and to utilize a graphical program to animate the simulation results. To begin tackling the first task, I learned how to use SOMBAT by familiarizing myself with its mechanics and by understanding the methods used to tweak its various parameters and outputs. I then used this new knowledge to set up, run, and analyze many different situations within SOMBAT in order to explore the limitations of the parachute model. Some examples of parameters that I varied include the initial velocity and orientation of the falling capsule, the number of main parachutes, and the location where the parachutes were attached to the capsule. Each parameter changed would give a different output and, in some cases, would expose a bug or limitation in the model. A major bug that I discovered was the inability of the model to handle any number of parachutes other than three. I spent quite some time trying to debug the code logically, but was unable to figure it out until my mentor taught me that digital simulation limitations can occur when approximations are mistakenly assumed to be exact representations of a physical system. This led me to the realization that, unlike the programming classes I have taken thus far, which focus on pure logic, simulation code focuses on mimicking the physical world with some approximation and can have inaccuracies or numerical instabilities. Learning from my mistake, I adopted new methods to analyze these different simulations. One method I used was to numerically plot various physical parameters using MATLAB to confirm the mechanical behavior of the system, in addition to comparing the data to the output from a separate simulation tool called FAST. By having full control over what was being output from the simulation, I could choose which parameters to change and to plot, as well as how to plot them, allowing for an in-depth analysis of the data. Another method of analysis was to convert the output data into a graphical animation. Unlike the numerical plots, where all of the physical components were displayed separately, this graphical display allows for a combined look at the simulation output that makes it much easier to see the physical behavior of the model. The process for converting SOMBAT output for EDGE graphical display had to be developed. With some guidance from other EDGE users, I developed a process and created a script that allows one to easily display simulations graphically. Another limitation of the SOMBAT model was the inability of the capsule to have the main parachutes instantly deployed with a large angle between the airspeed vector and the chutes' drag vector. To explore this problem, I had to learn about the different coordinate frames used in Guidance, Navigation & Control (J2000, ECEF, ENU, etc.) to describe the motion of a vehicle, and about Euler angles (e.g., roll, pitch, yaw) to describe the orientation of the vehicle. With a thorough explanation from my mentor of each coordinate frame, as well as of how to use a direction cosine matrix to transform one frame to another, I investigated the problem by simulating different capsule orientations. In the end, I was able to show that this limitation could be avoided if the capsule is initially oriented antiparallel to its velocity vector.
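
    A minimal sketch of the coordinate-frame machinery mentioned above: building a direction cosine matrix from roll/pitch/yaw for an assumed 3-2-1 rotation sequence and using it to express a vector in the body frame. This is a generic textbook construction, not SOMBAT or EDGE code.

    import numpy as np

    def dcm_from_rpy(roll, pitch, yaw):
        """Direction cosine matrix for a 3-2-1 (yaw-pitch-roll) Euler sequence,
        mapping a vector from the reference frame into the body frame."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        R_roll = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
        R_pitch = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        R_yaw = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
        return R_roll @ R_pitch @ R_yaw

    # Example: express a descent velocity vector (given in a local ENU-like frame)
    # in the body frame for a 10 deg pitch, 5 deg yaw attitude.
    C_body_from_ref = dcm_from_rpy(0.0, np.radians(10.0), np.radians(5.0))
    v_ref = np.array([0.0, 0.0, -50.0])            # m/s, descending
    print(C_body_from_ref @ v_ref)
    print(np.allclose(C_body_from_ref.T, np.linalg.inv(C_body_from_ref)))  # orthonormal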

  14. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimates. ANN structure is also uncertain because there is no unique ANN model for a specific case. Therefore, multiple plausible ANN models generally result for a given study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of the data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht and Bazargan plains in Iran, where unusually high fluoride concentrations have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-based decision framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates obtained with the HBMA method show better agreement with the observation data in the testing step because they are not based on a single model with a non-dominant weight.
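
    A minimal sketch of the model-averaging step that HBMA builds on (the full hierarchical segregation of uncertainty sources is not shown): hypothetical predictions from three plausible ANN structures are combined with evidence-based weights, and the total predictive variance is split into within-model and between-model parts.

    import numpy as np

    # Hypothetical posterior predictions of fluoride concentration (mg/L) at one
    # well from three plausible ANN structures: mean and variance per model.
    means = np.array([1.20, 1.35, 0.95])
    variances = np.array([0.04, 0.06, 0.05])

    # Hypothetical model weights (e.g. derived from evidence of the data); sum to 1.
    weights = np.array([0.5, 0.3, 0.2])

    # Model-averaged mean.
    bma_mean = np.sum(weights * means)

    # Total variance = within-model variance + between-model variance.
    within = np.sum(weights * variances)
    between = np.sum(weights * (means - bma_mean) ** 2)
    total_var = within + between

    print(f"BMA mean = {bma_mean:.3f} mg/L")
    print(f"within-model var = {within:.4f}, between-model var = {between:.4f}")
    print(f"total predictive variance = {total_var:.4f}")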

  15. Pandemic recovery analysis using the dynamic inoperability input-output model.

    PubMed

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model (1,2) and the dynamic inoperability input-output model (DIIM) (3). These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
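
    A hedged sketch of the discrete-time DIIM recursion, q(t+1) = q(t) + K[A*q(t) + c*(t) − q(t)], applied to an invented three-sector example with a temporary workforce-driven demand perturbation; the interdependency matrix, resilience coefficients, and perturbation are illustrative only.

    import numpy as np

    # Hypothetical 3-sector interdependency matrix A* (normalized) and resilience
    # matrix K (diagonal recovery rates); values are illustrative only.
    A_star = np.array([[0.10, 0.20, 0.05],
                       [0.15, 0.05, 0.10],
                       [0.05, 0.10, 0.15]])
    K = np.diag([0.4, 0.3, 0.5])

    T = 30                                   # weeks of simulation
    q = np.zeros((T + 1, 3))                 # sector inoperability, 0 = as planned
    c_star = np.zeros((T, 3))
    c_star[:6, :] = np.array([0.20, 0.10, 0.05])   # 6-week workforce perturbation

    # DIIM recursion: q(t+1) = q(t) + K [ A* q(t) + c*(t) - q(t) ]
    for t in range(T):
        q[t + 1] = q[t] + K @ (A_star @ q[t] + c_star[t] - q[t])

    print(np.round(q[[0, 3, 6, 10, 20, 30]], 3))   # recovery trajectory snapshots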

  16. Inflammable Gas Mixture Detection with a Single Catalytic Sensor Based on the Electric Field Effect

    PubMed Central

    Tong, Ziyuan; Tong, Min-Ming; Meng, Wen; Li, Meng

    2014-01-01

    This paper introduces a new way to analyze mixtures of inflammable gases with a single catalytic sensor. The analysis technology was based on a new finding that an electric field applied to the catalytic sensor can change the output sensitivity of the sensor. The analysis of mixed inflammable gases results from processing the output signals obtained by adjusting the electric-field parameter of the catalytic sensor. For the signal processing, we designed a group of equations based on the heat balance of the catalytic sensor, expressing the relationship between the output signals and the concentrations of the gases. With these equations and the outputs at different electric fields, the gas concentrations in a mixture could be calculated. In experiments, a mixture of methane, butane and ethane was analyzed by this new method, and the results showed that the concentration of each gas in the mixture could be detected with a single catalytic sensor, with a maximum relative error of less than 5%. PMID:24717635
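
    A hedged sketch of the inversion idea: if each electric-field setting gives the catalytic sensor a known sensitivity to each gas, the mixture concentrations follow from a small system of linear equations in the measured outputs. The sensitivity matrix and readings below are invented; the paper's heat-balance equations are not reproduced.

    import numpy as np

    # Hypothetical sensitivity matrix S: row = electric-field setting, column = gas
    # (methane, ethane, butane). S[i, j] is the sensor output (mV) per 1% of gas j
    # at field setting i -- these numbers are purely illustrative.
    S = np.array([[12.0,  9.0,  6.0],
                  [10.0, 11.0,  8.0],
                  [ 7.0,  9.5, 12.5]])

    true_conc = np.array([1.2, 0.5, 0.8])             # % by volume
    outputs = S @ true_conc                           # noise-free signals
    outputs_noisy = outputs + np.array([0.1, -0.05, 0.08])

    # Recover the concentrations from the three measurements.
    conc = np.linalg.solve(S, outputs_noisy)
    rel_err = np.abs(conc - true_conc) / true_conc * 100
    print(np.round(conc, 3), np.round(rel_err, 1))    # concentrations and % error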

  17. Empirical and modeled synoptic cloud climatology of the Arctic Ocean

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Newell, J. P.; Schweiger, A.; Crane, R. G.

    1986-01-01

    A set of cloud cover data was developed for the Arctic during the climatically important spring/early summer transition months. In parallel with the determination of mean monthly cloud conditions, data for different synoptic pressure patterns were also composited as a means of evaluating the role of synoptic variability in Arctic cloud regimes. In order to carry out this analysis, a synoptic classification scheme was developed for the Arctic using an objective typing procedure. A second major objective was to analyze model output of pressure fields and cloud parameters from a control run of the Goddard Institute for Space Studies climate model for the same area, and to intercompare the synoptic climatology of the model with that based on the observational data.

  18. System dynamic modeling: an alternative method for budgeting.

    PubMed

    Srijariya, Witsanuchai; Riewpaiboon, Arthorn; Chaikledkaew, Usa

    2008-03-01

    To construct, validate, and simulate a system dynamic financial model and compare it against the conventional method. The study was a cross-sectional analysis of secondary data retrieved from the National Health Security Office (NHSO) in the fiscal year 2004. The sample consisted of all emergency patients who received emergency services outside their registered hospital-catchments area. The dependent variable used was the amount of reimbursed money. Two types of model were constructed, namely, the system dynamic model using the STELLA software and the multiple linear regression model. The outputs of both methods were compared. The study covered 284,716 patients from various levels of providers. The system dynamic model had the capability of producing various types of outputs, for example, financial and graphical analyses. For the regression analysis, statistically significant predictors were composed of service types (outpatient or inpatient), operating procedures, length of stay, illness types (accident or not), hospital characteristics, age, and hospital location (adjusted R(2) = 0.74). The total budget arrived at from using the system dynamic model and regression model was US$12,159,614.38 and US$7,301,217.18, respectively, whereas the actual NHSO reimbursement cost was US$12,840,805.69. The study illustrated that the system dynamic model is a useful financial management tool, although it is not easy to construct. The model is not only more accurate in prediction but is also more capable of analyzing large and complex real-world situations than the conventional method.

  19. Consumption-based Total Suspended Particulate Matter Emissions in Jing-Jin-Ji Area of China

    NASA Astrophysics Data System (ADS)

    Yang, S.; Chen, S.; Chen, B.

    2014-12-01

    The highly industrialized regions of China have been facing a serious haze problem consisting mainly of total suspended particulate matter (TSPM), which has attracted great attention from the public since it directly impairs human health and clinically increases the risks of various respiratory and pulmonary diseases. In this paper, we set up a multi-regional input-output (MRIO) model to analyze the transfer routes of TSPM emissions between regions through trade. TSPM emissions from particulate source regions and sectors are identified by analyzing the embodied TSPM flows through monetary flows and the carbon footprint. The track of TSPM from origin to end via consumption activities is also revealed by tracing the product supply chains associated with the TSPM emissions. Beijing-Tianjin-Hebei (Jing-Jin-Ji), as the most industrialized area of China, is selected for a case study. The results show that over 70% of the TSPM emissions associated with goods consumed in Beijing and Tianjin occurred outside of their own administrative boundaries, implying that Beijing and Tianjin are net embodied TSPM importers. Meanwhile, 63% of the total TSPM emissions in Hebei Province result from outside demand, indicating Hebei is a net exporter. In addition, nearly half of the TSPM emissions are by-products related to electricity and heating supply and non-metal mineral products in the Jing-Jin-Ji area. Based on the model results, we provide new insights into establishing systemic strategies and identifying mitigation priorities to stem TSPM emissions in China. Keywords: total suspended particulate matter (TSPM); urban ecosystem modeling; multi-regional input-output (MRIO); China
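
    A toy two-region, two-sector version of the consumption-based accounting used in an MRIO analysis: embodied emissions are obtained from direct emission intensities, the Leontief inverse, and final demand. All table values are invented and the sector detail is far coarser than the Jing-Jin-Ji model.

    import numpy as np

    # Hypothetical 2-region x 2-sector MRIO table (monetary flows); rows/cols are
    # (Beijing-industry, Beijing-services, Hebei-industry, Hebei-services).
    Z = np.array([[10.,  5.,  8.,  2.],
                  [ 4., 12.,  3.,  5.],
                  [20.,  6., 30., 10.],
                  [ 3.,  8.,  9., 15.]])
    final_demand = np.array([[30.,  5.],      # columns: demand of Beijing, of Hebei
                             [25.,  6.],
                             [15., 40.],
                             [10., 25.]])
    x = Z.sum(axis=1) + final_demand.sum(axis=1)      # total output by sector

    A = Z / x                                         # technical coefficients (column-wise)
    L = np.linalg.inv(np.eye(4) - A)                  # Leontief inverse

    tspm = np.array([8., 1., 40., 3.])                # direct TSPM emissions (kt)
    intensity = tspm / x                              # emissions per unit output

    # Consumption-based (embodied) emissions attributed to each region's demand.
    embodied = intensity @ L @ final_demand
    print("production-based (Beijing, Hebei):", tspm[:2].sum(), tspm[2:].sum())
    print("consumption-based (Beijing, Hebei):", np.round(embodied, 2))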

  20. Observed & Modeled Changes in the Onset of Spring: A Preliminary Comparative Analysis by Geographic Regions of the USA

    NASA Astrophysics Data System (ADS)

    Enquist, C.

    2012-12-01

    Phenology, the study of seasonal life cycle events in plants and animals, is a well-recognized indicator of climate change impacts on people and nature. Models, experiments, and observational studies show changes in plant and animal phenology as a function of environmental change. Current research aims to improve our understanding of changes by enhancing existing models, analyzing observations, synthesizing previous research, and comparing outputs. Local to regional climatology is a critical driver of phenological variation of organisms across scales. Because plants respond to the cumulative effects of daily weather over an extended period, timing of life cycle events are effective integrators of climate data. One specific measure, leaf emergence, is particularly important because it often shows a strong response to temperature change, and is crucial for assessment of processes related to start and duration of the growing season. Schwartz et al. (2006) developed a suite of models (the "Spring Indices") linking plant development from historical data from leafing and flowering of cloned lilac and honeysuckle with basic climatic drivers to monitor changes related to the start of the spring growing season. These models can be generated at any location that has daily max-min temperature time series. The new version of these models is called the "Extended Spring Indices," or SI-x (Schwartz et al. in press). The SI-x model output (first leaf date and first bloom date) are produced similarly to the original models (SI-o), but do not incorporate accumulated chilling hours; rather energy accumulation starts for all stations on January 1. This change extends the locations SI model output can be generated into the sub-tropics, allowing full coverage of the conterminous USA. Both SI model versions are highly correlated, with mean bias and mean absolute differences around two days or less, and a similar bias and absolute errors when compared to cloned lilac data. To qualitatively test SI-x output and synthesize climate-linked regional variation in phenological events across the United States, we conducted a review of the recent phenology literature and assembled this information into 8 geographic regions. Additionally, we compared these outputs to analyses of species data found in the USA National Phenology Network database. We found that (1) all outputs showed advancement of spring onset across regions and taxa, despite great variability in species and site-level response, (2) many studies suggest that there may be evolutionary selection for organisms that track climatic changes, (3) although some organisms may benefit from lengthening growing seasons, there may be a cost, such as susceptibility to late frost, or "false springs," and (4) invasive organisms may have more capacity to track these changes than natives. More work is needed to (1) better understand precipitation and hydrology related cues and (2) understand the demographic consequences of trophic mismatch and effects on ecosystem processes and services. Next steps in this research include performing quantitative analyses to further explore if SI-x can be used to indicate and forecast changes in ecological and hydrological processes across geographic regions.

  1. Extending Landauer's bound from bit erasure to arbitrary computation

    NASA Astrophysics Data System (ADS)

    Wolpert, David

    The minimal thermodynamic work required to erase a bit, known as Landauer's bound, has been extensively investigated both theoretically and experimentally. However, when viewed as a computation that maps inputs to outputs, bit erasure has a very special property: the output does not depend on the input. Existing analyses of the thermodynamics of bit erasure implicitly exploit this property, and thus cannot be directly extended to analyze the computation of arbitrary input-output maps. Here we show how to extend these earlier analyses of bit erasure to analyze the thermodynamics of arbitrary computations. Doing this establishes a formal connection between the thermodynamics of computers and much of theoretical computer science. We use this extension to analyze the thermodynamics of the canonical ``general purpose computer'' considered in computer science theory: a universal Turing machine (UTM). We consider a UTM which maps input programs to output strings, where inputs are drawn from an ensemble of random binary sequences, and prove: i) The minimal work needed by a UTM to run some particular input program X and produce output Y is the Kolmogorov complexity of Y minus the log of the ``algorithmic probability'' of Y. This minimal amount of thermodynamic work has a finite upper bound, which is independent of the output Y and depends only on the details of the UTM. ii) The expected work needed by a UTM to compute some given output Y is infinite. As a corollary, the overall expected work to run a UTM is infinite. iii) The expected work needed by an arbitrary Turing machine T (not necessarily universal) to compute some given output Y can be either infinite or finite, depending on Y and the details of T. To derive these results we must combine ideas from nonequilibrium statistical physics with fundamental results from computer science, such as Levin's coding theorem and other theorems about universal computation. I would like to acknowledge the Santa Fe Institute, Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation, Grant No. FQXi-RHl3-1349 from the FQXi foundation, and Grant No. CHE-1648973 from the U.S. National Science Foundation.

  2. Ethiopian Wheat Yield and Yield Gap Estimation: A Spatial Small Area Integrated Data Approach

    NASA Astrophysics Data System (ADS)

    Mann, M.; Warner, J.

    2015-12-01

    Despite the collection of routine annual agricultural surveys and significant advances in GIS and remote sensing products, little econometric research has been undertaken in predicting developing nation's agricultural yields. In this paper, we explore the determinants of wheat output per hectare in Ethiopia during the 2011-2013 Meher crop seasons aggregated to the woreda administrative area. Using a panel data approach, combining national agricultural field surveys with relevant GIS and remote sensing products, the model explains nearly 40% of the total variation in wheat output per hectare across the country. The model also identifies specific contributors to wheat yields that include farm management techniques (eg. area planted, improved seed, fertilizer, irrigation), weather (eg. rainfall), water availability (vegetation and moisture deficit indexes) and policy intervention. Our findings suggest that woredas produce between 9.8 and 86.5% of their potential wheat output per hectare given their altitude, weather conditions, terrain, and plant health. At the median, Amhara, Oromiya, SNNP, and Tigray produce 48.6, 51.5, 49.7, and 61.3% of their local attainable yields, respectively. This research has a broad range of applications, especially from a public policy perspective: identifying causes of yield fluctuations, remotely evaluating larger agricultural intervention packages, and analyzing relative yield potential. Overall, the combination of field surveys with spatial data can be used to identify management priorities for improving production at a variety of administrative levels.

  3. Reversibility and stability of information processing systems

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1984-01-01

    Classical and quantum models of dynamically reversible computers are considered. Instabilities in the evolution of the classical 'billiard ball computer' are analyzed and shown to result in a one-bit increase of entropy per step of computation. 'Quantum spin computers', on the other hand, are not only microscopically but also operationally reversible. Readout of the output of a quantum computation is shown not to interfere with this reversibility. Dissipation, while avoidable in principle, can be used in practice along with redundancy to prevent errors.

  4. The Peace Game: A Data-Driven Evaluation of a Software-Based Model of the Effects of Modern Conflict on Populations

    DTIC Science & Technology

    2015-09-01

    Most of these early computer games were just digital depictions of a board game. The computers were generally used as a giant calculator that helped...limitless. Digital output of “what happened” allows game players and decision makers to collect and analyze every part of the game without transcription...there was a dispute over voter eligibility. Missiriya nomads , who are loyal to Sudan, spend several months of the year in Abyei grazing their cattle

  5. Efficiency and Productivity Analysis of Multidivisional Firms

    NASA Astrophysics Data System (ADS)

    Gong, Binlei

    Multidivisional firms are those that have footprints in multiple segments and hence use multiple technologies to convert inputs to outputs, which makes it difficult to estimate the resource allocations, aggregated production functions, and technical efficiencies of this type of company. This dissertation aims to explore and reveal such unobserved information through several parametric and semiparametric stochastic frontier analyses and other structural models. In the empirical study, this dissertation analyzes the productivity and efficiency of firms in the global oilfield market.

  6. Modeling of a Ne/Xe dielectric barrier discharge excilamp for improvement of VUV radiation production

    NASA Astrophysics Data System (ADS)

    Khodja, K.; Belasri, A.; Loukil, H.

    2017-08-01

    This work is devoted to excimer lamp efficiency optimization using a homogeneous discharge model of a dielectric barrier discharge (DBD) in a Ne-Xe mixture. The model includes the plasma chemistry, the electrical circuit, and the Boltzmann equation. In this paper, we are particularly interested in the electrical and kinetic properties and the light output generated by the DBD. Xenon is chosen for its high luminescence in the range of vacuum UV radiation around 173 nm. Our study is motivated by interest in this type of discharge in many industrial applications, including the achievement of high-light-output lamps. In this work, we used an applied sinusoidal voltage, frequency, gas pressure, and xenon concentration in the ranges of 2-8 kV, 10-200 kHz, 100-800 Torr, and 10-50%, respectively. The analyzed results concern the voltage Vp across the gap, the dielectric voltage Vd, the discharge current I, and the particle densities. We also investigated the effect of the electrical parameters and the xenon concentration on the lamp efficiency. This investigation will allow one to find the appropriate parameters for Ne/Xe DBD excilamps to improve their efficiency.

  7. World Input-Output Network

    PubMed Central

    Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo

    2015-01-01

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389
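
    A minimal sketch of the network view described above, assuming networkx is available: industries become nodes of a weighted directed graph, monetary flows become edge weights, and weighted PageRank gives a centrality-based ranking of key industries. The flows below are toy numbers, not WIOD data.

    import networkx as nx

    # Toy inter-industry monetary flows (edge weight = goods flow value).
    flows = [
        ("DEU-manufacturing", "USA-construction", 120.0),
        ("CHN-mining", "DEU-manufacturing", 90.0),
        ("USA-services", "USA-construction", 40.0),
        ("CHN-mining", "CHN-manufacturing", 150.0),
        ("CHN-manufacturing", "USA-services", 70.0),
        ("DEU-manufacturing", "CHN-manufacturing", 30.0),
    ]

    G = nx.DiGraph()
    G.add_weighted_edges_from(flows)

    # Weighted PageRank as a centrality-based ranking of key industries.
    pr = nx.pagerank(G, alpha=0.85, weight="weight")
    for node, score in sorted(pr.items(), key=lambda kv: -kv[1]):
        print(f"{node:22s} {score:.3f}")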

  8. Analysis of superconducting magnetic energy storage applications at a proposed wind farm site near Browning, Montana

    NASA Astrophysics Data System (ADS)

    Gaustad, K. L.; Desteese, J. G.

    1993-07-01

    A computer program was developed to analyze the viability of integrating superconducting magnetic energy storage (SMES) with proposed wind farm scenarios at a site near Browning, Montana. The program simulated an hour-by-hour account of the charge/discharge history of a SMES unit for a representative wind-speed year. Effects of power output, storage capacity, and power conditioning capability on SMES performance characteristics were analyzed on a seasonal, diurnal, and hourly basis. The SMES unit was assumed to be charged during periods when power output of the wind resource exceeded its average value. Energy was discharged from the SMES unit into the grid during periods of low wind speed to compensate for below-average output of the wind resource. The option of using SMES to provide power continuity for a wind farm supplemented by combustion turbines was also investigated. Levelizing the annual output of large wind energy systems operating in the Blackfeet area of Montana was found to require a storage capacity too large to be economically viable. However, it appears that intermediate-sized SMES economically levelize the wind energy output on a seasonal basis.
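
    A hedged sketch of the hour-by-hour bookkeeping described: the SMES charges when wind output exceeds its average, discharges during deficits, and is limited by storage capacity and power-conditioning rating. The wind series, capacities, and efficiency are invented, not the Browning site data.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical hourly wind-farm output (MW) for one representative year.
    wind_mw = np.clip(rng.normal(35.0, 20.0, 8760), 0.0, 100.0)
    avg_mw = wind_mw.mean()

    E_MAX = 200.0      # SMES storage capacity, MWh
    P_MAX = 25.0       # power-conditioning (charge/discharge) limit, MW
    eta = 0.95         # efficiency applied on charging

    energy = 0.0                            # MWh currently stored
    delivered = np.empty_like(wind_mw)      # MW actually sent to the grid each hour

    for h, p in enumerate(wind_mw):
        if p > avg_mw:                      # surplus hour: charge the SMES
            charge = min(p - avg_mw, P_MAX, (E_MAX - energy) / eta)
            energy += eta * charge
            delivered[h] = p - charge
        else:                               # deficit hour: discharge to the grid
            discharge = min(avg_mw - p, P_MAX, energy)
            energy -= discharge
            delivered[h] = p + discharge

    print(f"raw std = {wind_mw.std():.1f} MW, levelized std = {delivered.std():.1f} MW")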

  9. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    NASA Astrophysics Data System (ADS)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for the forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing an XML system. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and the XSD. The source language is called XSD-Source and mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator. The output of the XSD-Translator is an XSD, which is our target and is called the object language.

  10. [Ecological management model of agriculture-pasture ecotone based on the theory of energy and material flow--a case study in Houshan dryland area of Inner Mongolia].

    PubMed

    Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi

    2004-04-01

    The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has received increasing attention. Based on our many years of research and guided by the theory of energy and material flow, this paper puts forward an ecological management model, with a hill as the basic cell, according to the natural, social, and economic characteristics of the Houshan dryland farming area in the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model, and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on the field test showed that, compared with the traditional model, the ecological management model could increase solar use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the models, the artificial grassland model had the lowest solar use efficiency, energy output, and energy conversion efficiency, while the ecological management model had the highest outputs and benefits and was the best model in terms of economic effect, increasing economic benefits by 16.1% compared with the traditional model.

  11. Optimization and Improvement of Test Processes on a Production Line

    NASA Astrophysics Data System (ADS)

    Sujová, Erika; Čierna, Helena

    2018-06-01

    The paper deals with increasing the efficiency of processes on a production line for engine cylinder heads in a company operating in the automotive industry. The goal is to achieve improvement and optimization of the test processes on the production line. It analyzes options for improving the capacity, availability, and productivity of the output-test processes by using modern technology available on the market. We have focused on the analysis of operation times before and after optimization of the test processes at specific production sections. By analyzing the measured results we have determined the differences in time before and after improvement of the process. We have determined the overall equipment effectiveness (OEE) coefficient, and by comparing outputs we have confirmed a real improvement of the output-test process for cylinder heads.
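
    Assuming the efficiency coefficient referred to is the standard overall equipment effectiveness, the sketch below shows the usual OEE decomposition into availability, performance, and quality for one hypothetical shift of the output-test station; all figures are invented.

    # Illustrative OEE calculation for one shift of the output-test station
    # (all figures are hypothetical, not taken from the study).
    planned_time_min = 450.0          # planned production time per shift
    downtime_min = 45.0               # breakdowns, changeovers
    ideal_cycle_time_s = 30.0         # ideal test time per cylinder head
    total_parts = 700
    defective_parts = 14

    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_time_s * total_parts / 60.0) / run_time_min
    quality = (total_parts - defective_parts) / total_parts

    oee = availability * performance * quality
    print(f"A={availability:.2%}  P={performance:.2%}  Q={quality:.2%}  OEE={oee:.2%}")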

  12. A Spectral Method for Spatial Downscaling

    PubMed Central

    Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.

    2014-01-01

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
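
    A minimal sketch of the scale-separation idea (not the authors' implementation): decompose a gridded model field and a gridded observation field with a 2D FFT, then estimate their correlation within radial wavenumber bands, so that model output is trusted only at the scales where it agrees with observations. The field construction and band edges below are illustrative assumptions.

        import numpy as np

        def bandwise_correlation(model_field, obs_field, n_bands=8):
            """Correlate two gridded fields within radial-wavenumber bands.

            Both inputs are 2D arrays on the same grid; returns one correlation
            per band, ordered from large scales (low wavenumber) to small scales.
            """
            fm = np.fft.fft2(model_field - model_field.mean())
            fo = np.fft.fft2(obs_field - obs_field.mean())

            ny, nx = model_field.shape
            ky = np.fft.fftfreq(ny)[:, None]
            kx = np.fft.fftfreq(nx)[None, :]
            kr = np.sqrt(kx**2 + ky**2)                      # radial wavenumber
            edges = np.linspace(0.0, kr.max() + 1e-12, n_bands + 1)

            corrs = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (kr >= lo) & (kr < hi)
                a, b = fm[mask], fo[mask]
                # correlation of complex spectral coefficients within the band
                num = np.abs(np.vdot(a, b))
                den = np.linalg.norm(a) * np.linalg.norm(b) + 1e-30
                corrs.append(num / den)
            return np.array(corrs)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            truth = rng.standard_normal((64, 64))
            # "model" shares the large scales with the truth but not the small ones
            ky2 = np.fft.fftfreq(64)[:, None] ** 2
            kx2 = np.fft.fftfreq(64)[None, :] ** 2
            lowpass = np.sqrt(kx2 + ky2) < 0.1
            model = np.real(np.fft.ifft2(np.fft.fft2(truth) * lowpass))
            model += 0.5 * rng.standard_normal((64, 64))
            print(bandwise_correlation(model, truth))   # high at low bands, low at high bands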

  13. Visualization Techniques for Computer Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Justin M; Steed, Chad A; Patton, Robert M

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  14. Study on electromagnetic characteristics of the magnetic coupling resonant coil for the wireless power transmission system.

    PubMed

    Wang, Zhongxian; Liu, Yiping; Wei, Yonggeng; Song, Yilin

    2018-01-01

    The resonant coil design is taken as the core technology in the magnetic coupling resonant wireless power transmission system, which achieves energy transmission by the coupling of the resonant coil. This paper studies the effect of the resonant coil on energy transmission and the efficiency of the system. Combining a two-coil with a three-coil system, the optimum design method for the resonant coil is given to propose a novel coil structure. First, the co-simulation methods of Pspice and Maxwell are used. When the coupling coefficient of the resonant coil is different, the relationship between system transmission efficiency, output power, and frequency is analyzed. When the self-inductance of the resonant coil is different, the relationship between the performance and frequency of the system transmission is analyzed. Then, two-coil and three-coil structure models are built, and the parameters of the magnetic field of the coils are calculated and analyzed using the finite element method. In the end, a dual E-type simulation circuit model is used to optimize the design of the novel resonance coil. The co-simulation results show that the coupling coefficients of the two-coil, three-coil, and novel coil systems are 0.017, 0.17 and 0.0126, respectively. The power loss of the novel coil is 16.4 mW. There is an obvious improvement in the three-coil system, which shows that the magnetic leakage of the field and the energy coupling are relatively small. The new structure coil has better performance, and the load loss is lower; it can improve the system output power and transmission efficiency.
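
    A small circuit-level sketch of how coupling coefficient and frequency interact in a resonant link: solve the two KVL mesh equations of a generic series-series two-coil circuit for the coil currents, then report delivered power and efficiency. This is not the paper's dual E-type circuit or co-simulation; component values are illustrative assumptions, and only the coupling coefficients are taken from the abstract.

        import numpy as np

        def two_coil_link(freq_hz, k, L1=24e-6, L2=24e-6, R1=0.3, R2=0.3,
                          C1=None, C2=None, Rs=0.5, RL=10.0, Vs=10.0, f0=200e3):
            """Steady-state output power and efficiency of a series-series
            two-coil resonant link with coupling coefficient k."""
            w = 2 * np.pi * freq_hz
            # tune both sides to f0 unless capacitors are given explicitly
            C1 = C1 or 1.0 / ((2 * np.pi * f0) ** 2 * L1)
            C2 = C2 or 1.0 / ((2 * np.pi * f0) ** 2 * L2)
            M = k * np.sqrt(L1 * L2)
            Z1 = Rs + R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
            Z2 = RL + R2 + 1j * w * L2 + 1.0 / (1j * w * C2)
            # mesh equations:  Z1*I1 - jwM*I2 = Vs ;  -jwM*I1 + Z2*I2 = 0
            Z = np.array([[Z1, -1j * w * M], [-1j * w * M, Z2]])
            I1, I2 = np.linalg.solve(Z, np.array([Vs, 0.0]))
            p_out = 0.5 * abs(I2) ** 2 * RL
            p_in = 0.5 * np.real(Vs * np.conj(I1))
            return p_out, p_out / p_in

        if __name__ == "__main__":
            for k in (0.0126, 0.017, 0.17):      # coupling values quoted in the abstract
                p, eta = two_coil_link(200e3, k)
                print(f"k = {k:>6}:  P_out = {p:6.2f} W,  efficiency = {eta:5.1%}")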

  15. Investigation of Hill's optical turbulence model by means of direct numerical simulation.

    PubMed

    Muschinski, Andreas; de Bruyn Kops, Stephen M

    2015-12-01

    For almost four decades, Hill's "Model 4" [J. Fluid Mech. 88, 541 (1978)] has played a central role in research and technology of optical turbulence. Based on Batchelor's generalized Obukhov-Corrsin theory of scalar turbulence, Hill's model predicts the dimensionless function h(κl0, Pr) that appears in Tatarskii's well-known equation for the 3D refractive-index spectrum in the case of homogeneous and isotropic turbulence, Φn(κ) = 0.033 Cn^2 κ^(-11/3) h(κl0, Pr). Here we investigate Hill's model by comparing numerical solutions of Hill's differential equation with scalar spectra estimated from direct numerical simulation (DNS) output data. Our DNS solves the Navier-Stokes equation for the 3D velocity field and the transport equation for the scalar field on a numerical grid containing 4096^3 grid points. Two independent DNS runs are analyzed: one with the Prandtl number Pr = 0.7 and a second run with Pr = 1.0. We find very good agreement between h(κl0, Pr) estimated from the DNS output data and h(κl0, Pr) predicted by the Hill model. We find that the height of the Hill bump is 1.79 Pr^(1/3), implying that there is no bump if Pr < 0.17. Both the DNS and the Hill model predict that the viscous-diffusive "tail" of h(κl0, Pr) is exponential, not Gaussian.
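
    A small numerical sketch of the spectrum quoted above. The exact h(κl0, Pr) requires solving Hill's differential equation; here a commonly used analytic bump approximation is substituted purely for illustration, with its coefficients treated as an assumption rather than values from the paper.

        import numpy as np

        def hill_bump(kappa, l0):
            """Analytic approximation to the Hill bump function h(kappa*l0).

            Stand-in for the numerical solution of Hill's differential equation;
            coefficients are quoted from memory -- treat them as an assumption.
            """
            kl = 3.3 / l0                       # inner-scale wavenumber
            x = kappa / kl
            return (1.0 + 1.802 * x - 0.254 * x ** (7.0 / 6.0)) * np.exp(-x ** 2)

        def refractive_index_spectrum(kappa, Cn2, l0):
            """Tatarskii form Phi_n(kappa) = 0.033 Cn^2 kappa^(-11/3) h(kappa*l0)."""
            return 0.033 * Cn2 * kappa ** (-11.0 / 3.0) * hill_bump(kappa, l0)

        if __name__ == "__main__":
            kappa = np.logspace(0, 4, 200)      # rad/m
            phi = refractive_index_spectrum(kappa, Cn2=1e-14, l0=5e-3)
            # the "bump" shows up as a local excess over the pure -11/3 power law
            excess = phi / (0.033 * 1e-14 * kappa ** (-11.0 / 3.0))
            print("max excess over the Kolmogorov power law:", excess.max())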

  16. Productivity growth in outpatient child and adolescent mental health services: the impact of case-mix adjustment.

    PubMed

    Halsteinli, Vidar; Kittelsen, Sverre A; Magnussen, Jon

    2010-02-01

    The performance of health service providers may be monitored by measuring productivity. However, the policy value of such measures may depend crucially on the accuracy of input and output measures. In particular, an important question is how to adjust adequately for case-mix in the production of health care. In this study, we assess productivity growth in Norwegian outpatient child and adolescent mental health service units (CAMHS) over a period characterized by governmental utilization of simple productivity indices, a substantial increase in capacity and a concurrent change in case-mix. We analyze the sensitivity of the productivity growth estimates using different specifications of output to adjust for case-mix differences. Case-mix adjustment is achieved by distributing patients into eight groups depending on reason for referral, age and gender, as well as correcting for the number of consultations. We utilize the nonparametric Data Envelopment Analysis (DEA) method to implicitly calculate weights that maximize each unit's efficiency. Malmquist indices of technical productivity growth are estimated and bootstrap procedures are performed to calculate confidence intervals and to test alternative specifications of outputs. The dataset consists of an unbalanced panel of 48-60 CAMHS in the period 1998-2006. The mean productivity growth estimate from a simple unadjusted patient model (one single output) is 35%; adjusting for case-mix (eight outputs) reduces the growth estimate to 15%. Adding consultations increases the estimate to 28%. The latter reflects an increase in number of consultations per patient. We find that the governmental productivity indices strongly tend to overestimate productivity growth. Case-mix adjustment is of major importance and governmental utilization of performance indicators necessitates careful considerations of output specifications. Copyright 2009 Elsevier Ltd. All rights reserved.
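
    A minimal sketch of the kind of output-oriented DEA efficiency score that underlies a Malmquist productivity index, using scipy's linear-programming solver. The toy data, the constant-returns assumption, and the absence of case-mix grouping and bootstrapping are all simplifications relative to the study.

        import numpy as np
        from scipy.optimize import linprog

        def dea_output_efficiency(X, Y, unit):
            """Output-oriented, constant-returns DEA score for one unit.

            X: inputs  (n_units, n_inputs);  Y: outputs (n_units, n_outputs).
            Returns phi >= 1; phi - 1 is the proportional output expansion that
            the best-practice frontier suggests is feasible with the unit's inputs.
            """
            n_units = X.shape[0]
            c = np.r_[-1.0, np.zeros(n_units)]               # maximize phi
            A_in = np.c_[np.zeros(X.shape[1]), X.T]          # sum_j lam_j x_ij <= x_i0
            b_in = X[unit]
            A_out = np.c_[Y[unit], -Y.T]                     # phi*y_r0 <= sum_j lam_j y_rj
            b_out = np.zeros(Y.shape[1])
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[b_in, b_out], method="highs")
            return res.x[0]

        if __name__ == "__main__":
            # toy example: 1 input (clinician FTEs), 2 outputs (two patient groups)
            X = np.array([[10.0], [12.0], [8.0], [15.0]])
            Y = np.array([[100.0, 40.0], [90.0, 70.0], [60.0, 50.0], [140.0, 30.0]])
            for u in range(4):
                print(f"unit {u}: phi = {dea_output_efficiency(X, Y, u):.3f}")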

  17. The output voltage model and experiment of magnetostrictive displacement sensor based on Wiedemann effect

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng

    2018-05-01

    Based on the Wiedemann effect and the inverse magnetostrictive effect, the output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated in different magnetic fields, and the calculated result is found to be in agreement with the experimental one. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostriction difference, (λl-λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively, and the output voltage of the Fe-Ga wire sensor is obviously higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to provide guidance for the optimized design of the sensor.

  18. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  19. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.

  20. Global identifiability of linear compartmental models--a computer algebra algorithm.

    PubMed

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is however difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability) is presented, which combines the topological transfer function method with the Buchberger algorithm, to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general structure compartmental models from general multi-input multi-output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.

  1. Input-output relationship in social communications characterized by spike train analysis

    NASA Astrophysics Data System (ADS)

    Aoki, Takaaki; Takaguchi, Taro; Kobayashi, Ryota; Lambiotte, Renaud

    2016-10-01

    We study the dynamical properties of human communication through different channels, i.e., short messages, phone calls, and emails, adopting techniques from neuronal spike train analysis in order to characterize the temporal fluctuations of successive interevent times. We first measure the so-called local variation (LV) of incoming and outgoing event sequences of users and find that these in- and out-LV values are positively correlated for short messages and uncorrelated for phone calls and emails. Second, we analyze the response-time distribution after receiving a message to focus on the input-output relationship in each of these channels. We find that the time scales and amplitudes of response differ between the three channels. To understand the effects of the response-time distribution on the correlations between the LV values, we develop a point process model whose activity rate is modulated by incoming and outgoing events. Numerical simulations of the model indicate that a quick response to incoming events and a refractory effect after outgoing events are key factors to reproduce the positive LV correlations.
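
    The local variation (LV) statistic referred to above has a standard closed form in the spike-train literature; a small sketch follows, in which the event-time input layout is an assumption about how the communication data would be stored.

        import numpy as np

        def local_variation(event_times):
            """Local variation (LV) of a sequence of event times.

            LV is close to 1 for a Poisson-like (irregular) sequence, close to 0
            for a regular one, and above 1 for bursty sequences.
            """
            t = np.sort(np.asarray(event_times, dtype=float))
            T = np.diff(t)                       # interevent intervals T_1..T_n
            if T.size < 2:
                raise ValueError("need at least three events")
            num = (T[:-1] - T[1:]) ** 2
            den = (T[:-1] + T[1:]) ** 2
            return 3.0 * np.mean(num / den)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            poisson_like = np.cumsum(rng.exponential(1.0, 1000))
            regular = np.arange(1000) * 1.0
            print(local_variation(poisson_like))   # close to 1
            print(local_variation(regular))        # close to 0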

  2. Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection

    PubMed Central

    Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-01-01

    Stochastic resonance (SR) has been proven to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on SR in an underdamped system that incorporates a pinning potential model. The model was first derived from magnetic domain wall (DW) behavior in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation, and the system performance. We also propose a strategy for selecting the proper damping factor and other system parameters to match a weak signal and input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than the conventional SR (CSR), with the merits of higher output SNR, better anti-noise capability, and better frequency response. Besides, the system can be designed accurately and efficiently owing to the sensitivity of the parameters and the diversity of the potential. These features also weaken the limitation of small parameters on SR systems. PMID:26343662

  3. Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection.

    PubMed

    Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-08-28

    Stochastic resonance (SR) has been proven to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on SR in an underdamped system that incorporates a pinning potential model. The model was first derived from magnetic domain wall (DW) behavior in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation, and the system performance. We also propose a strategy for selecting the proper damping factor and other system parameters to match a weak signal and input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than the conventional SR (CSR), with the merits of higher output SNR, better anti-noise capability, and better frequency response. Besides, the system can be designed accurately and efficiently owing to the sensitivity of the parameters and the diversity of the potential. These features also weaken the limitation of small parameters on SR systems.
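
    A minimal Euler-Maruyama sketch of stochastic resonance in an underdamped bistable oscillator. A generic quartic double well stands in for the paper's pinning potential, and all parameter values are illustrative assumptions rather than values from the study.

        import numpy as np

        def simulate_underdamped_sr(a=1.0, b=1.0, gamma=0.5, A=0.3, f=0.01,
                                    D=0.25, dt=1e-2, n_steps=200_000, seed=0):
            """Euler-Maruyama integration of
                x'' = -gamma*x' + a*x - b*x**3 + A*cos(2*pi*f*t) + sqrt(2*D)*xi(t),
            i.e. an underdamped particle in the double well U(x) = -a x^2/2 + b x^4/4
            driven by a weak periodic signal plus white noise.
            """
            rng = np.random.default_rng(seed)
            x, v = 1.0, 0.0
            xs = np.empty(n_steps)
            for i in range(n_steps):
                t = i * dt
                force = a * x - b * x**3 + A * np.cos(2 * np.pi * f * t)
                v += (-gamma * v + force) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
                x += v * dt
                xs[i] = x
            return xs

        if __name__ == "__main__":
            xs = simulate_underdamped_sr()
            # crude check: the response spectrum should peak near the drive frequency
            spec = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
            freqs = np.fft.rfftfreq(xs.size, d=1e-2)
            print("response peak near", freqs[spec[1:].argmax() + 1], "Hz (drive = 0.01 Hz)")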

  4. A Novel Nonlinear Piezoelectric Energy Harvesting System Based on Linear-Element Coupling: Design, Modeling and Dynamic Analysis.

    PubMed

    Zhou, Shengxi; Yan, Bo; Inman, Daniel J

    2018-05-09

    This paper presents a novel nonlinear piezoelectric energy harvesting system which consists of linear piezoelectric energy harvesters connected by linear springs. In principle, the presented nonlinear system can improve broadband energy harvesting efficiency where magnets are forbidden. The linear spring inevitably produces the nonlinear spring force on the connected harvesters, because of the geometrical relationship and the time-varying relative displacement between two adjacent harvesters. Therefore, the presented nonlinear system has strong nonlinear characteristics. A theoretical model of the presented nonlinear system is deduced, based on Euler-Bernoulli beam theory, Kirchhoff’s law, piezoelectric theory and the relevant geometrical relationship. The energy harvesting enhancement of the presented nonlinear system (when n = 2, 3) is numerically verified by comparing with its linear counterparts. In the case study, the output power area of the presented nonlinear system with two and three energy harvesters is 268.8% and 339.8% of their linear counterparts, respectively. In addition, the nonlinear dynamic response characteristics are analyzed via bifurcation diagrams, Poincare maps of the phase trajectory, and the spectrum of the output voltage.

  5. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.

  6. Why an SO2 emission tax is an unpopular policy instrument: Simulation results from a general equilibrium model of the Norwegian economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, D.A.; Alfsen, K.H.

    1986-01-01

    Norway, together with some twenty other countries, signed the Helsinki treaty in July 1985 for the purpose of reducing SO2 emissions. Hence, it is interesting to analyze the emission reductions that could be achieved using a tax on SO2 emissions, as well as the indirect impacts on the economy. Simulations of the economic impact of the tax (which effectively increases the cost of using energy) were made using the Multi-Sectoral Growth (MSG) model. Results of the simulations indicated a larger than expected reduction in economic output.

  7. Cycle of a closed gas-turbine plant with a gas-dynamic energy-separation device

    NASA Astrophysics Data System (ADS)

    Leontiev, A. I.; Burtsev, S. A.

    2017-09-01

    The efficiency of closed gas-turbine space-based plants is analyzed. The weight and size characteristics of closed gas-turbine plants are shown to be determined in many respects by the refrigerator-radiator parameters. A scheme of a closed gas-turbine plant with a gas-dynamic temperature-stratification device is proposed, and a calculation model is developed. This model shows that, compared with closed gas-turbine plants operating by the traditional scheme, the cycle efficiency decreases by 2% while the temperature at the output from the refrigerator-radiator increases by 28 K and its area decreases by 13.7%.

  8. Impedance Analysis of Ion Transport Through Supported Lipid Membranes Doped with Ionophores: A New Kinetic Approach

    PubMed Central

    Alvarez, P. E.; Vallejo, A. E.

    2008-01-01

    Kinetics of facilitated ion transport through planar bilayer membranes are normally analyzed by electrical conductance methods. The additional use of electrical relaxation techniques, such as voltage jump, is necessary to evaluate individual rate constants. Although electrochemical impedance spectroscopy is recognized as the most powerful of the available electric relaxation techniques, it has rarely been used in connection with these kinetic studies. According to the new approach presented in this work, three steps were followed. First, a kinetic model was proposed that has the distinct quality of being general, i.e., it properly describes both carrier and channel mechanisms of ion transport. Second, the state equations for steady-state and for impedance experiments were derived, exhibiting the input–output representation pertaining to the model’s structure. With the application of a method based on the similarity transformation approach, it was possible to check that the proposed mechanism is distinguishable, i.e., no other model with a different structure exhibits the same input–output behavior for any input as the original. Additionally, the method allowed us to check whether the proposed model is globally identifiable (i.e., whether there is a single set of fit parameters for the model) when analyzed in terms of its impedance response. Thus, our model does not represent a theoretical interpretation of the experimental impedance but rather constitutes the prerequisite to select this type of experiment in order to obtain optimal kinetic identification of the system. Finally, impedance measurements were performed and the results were fitted to the proposed theoretical model in order to obtain the kinetic parameters of the system. The successful application of this approach is exemplified with results obtained for valinomycin–K+ in lipid bilayers supported onto gold substrates, i.e., an arrangement capable of emulating biological membranes. PMID:19669528

  9. Collection Building for Interdisciplinary Research: An Analysis of Input/Output Factors.

    ERIC Educational Resources Information Center

    Wilson, Myoung Chung; Edelman, Hendrik

    Collection development and management in academic libraries continue to present a considerable challenge, especially in interdisciplinary fields. In order to ascertain patterns of interdisciplinary research, including the patterns of demand for bibliographic resources, this study analyzes the input/output factors that are related to the research…

  10. From Zero to Sixty: Calibrating Real-Time Responses

    ERIC Educational Resources Information Center

    Koulis, Theodoro; Ramsay, James O.; Levitin, Daniel J.

    2008-01-01

    Recent advances in data recording technology have given researchers new ways of collecting on-line and continuous data for analyzing input-output systems. For example, continuous response digital interfaces are increasingly used in psychophysics. The statistical problem related to these input-output systems reduces to linking time-varying…

  11. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    USGS Publications Warehouse

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.

  12. A Current-Mode Common-Mode Feedback Circuit (CMFB) with Rail-to-Rail Operation

    NASA Astrophysics Data System (ADS)

    Suadet, Apirak; Kasemsuwan, Varakorn

    2011-03-01

    This paper presents a current-mode common-mode feedback (CMFB) circuit with rail-to-rail operation. The CMFB is a stand-alone circuit, which can be connected to any low-voltage transconductor without changing or upsetting the existing circuit. The proposed CMFB employs current mirrors, operating as a common-mode detector and current amplifier, to enhance the loop gain of the CMFB. The circuit employs positive feedback to enhance the output impedance and gain. The circuit has been designed using a 0.18 μm CMOS technology under a 1 V supply and analyzed using HSPICE with BSIM3V3 device models. A pseudo-differential amplifier using two common sources and the proposed CMFB shows rail-to-rail output swing (±0.7 V) with low common-mode gain (-36 dB) and a power dissipation of 390 μW.

  13. Measuring the impact of final demand on global production system based on Markov process

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Guan, Jun; Wu, Shan

    2018-07-01

    The input-output table is a comprehensive and detailed description of a national economic system, consisting of supply and demand information among various industrial sectors. Complex network theory, a framework for measuring the structure of complex systems, can depict the structural properties of social and economic systems and reveal the complicated relationships between their inner hierarchies and their external macroeconomic functions. This paper measures the globalization degree of industrial sectors on the global value chain. Firstly, it constructs inter-country input-output network models to reproduce the topological structure of the global economic system. Secondly, it regards the propagation of intermediate goods on the global value chain as a Markov process and introduces counting first-passage betweenness to quantify the added processing amount when global final demand stimulates this production system. Thirdly, it analyzes the features of globalization at both the global and the country-sector level.
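
    A minimal sketch of how an input-output table can be turned into the kind of propagation model described above: column-normalizing intermediate flows gives technical coefficients, and the Leontief inverse gives the total output generated by a final-demand stimulus. The paper's counting first-passage betweenness is not reproduced here, and the three-sector table is a made-up illustration.

        import numpy as np

        # Z[i, j]: intermediate sales from sector i to sector j; x: total output
        Z = np.array([[20.0, 30.0, 10.0],
                      [15.0, 10.0, 25.0],
                      [ 5.0, 20.0, 15.0]])
        x = np.array([100.0, 120.0, 90.0])
        final_demand = np.array([40.0, 70.0, 50.0])

        A = Z / x                          # technical coefficients a_ij = z_ij / x_j
        L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

        # total output required, directly and indirectly, to satisfy final demand
        print("output reproduced from final demand:", L @ final_demand)

        # Markov-style view: transition probabilities of intermediate deliveries
        P = Z / Z.sum(axis=1, keepdims=True)
        print("row-stochastic transition matrix:\n", P)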

  14. Equivalent circuit and optimum design of a multilayer laminated piezoelectric transformer.

    PubMed

    Dong, Shuxiang; Carazo, Alfredo Vazquez; Park, Seung Ho

    2011-12-01

    A multilayer laminated piezoelectric Pb(Zr1-xTix)O3 (PZT) ceramic transformer, operating in a half-wavelength longitudinal resonant mode (λ/2 mode), has been analyzed. This piezoelectric transformer is composed of one thickness-polarized section (T-section) for exciting the longitudinal mechanical vibrations, two longitudinally polarized sections (L-section) for generating high-voltage output, and two insulating layers laminated between the T-section and L-section layers to provide insulation between the input and output sections. Based on the piezoelectric constitutive and motion equations, an electro-elasto-electric (EEE) equivalent circuit has been developed, and correspondingly, an effective EEE coupling coefficient was proposed for the optimum design of this multilayer transformer. Commercial finite element analysis software is used to determine the validity of the developed equivalent circuit. Finally, a prototype sample was manufactured and experimental data were collected to verify the model's validity.

  15. Developing Emergency Room Key Performance Indicators: What to Measure and Why Should We Measure It?

    PubMed

    Khalifa, Mohamed; Zabani, Ibrahim

    2016-01-01

    Emergency Room (ER) performance has been a timely topic for both healthcare practitioners and researchers. King Faisal Specialist Hospital and Research Center, Saudi Arabia, worked on developing a comprehensive set of KPIs to monitor, evaluate and improve the performance of the ER. A combined approach using quantitative and qualitative methods was used to collect and analyze the data. Thirty-four KPIs were developed and sorted into the three components of the ER patient flow model: input, throughput and output. Input indicators included the number and acuity of ER patients, patients leaving without being seen, and revisit rates. Throughput indicators included the number of active ER beds, the ratio of ER patients to ER staff, and the length of stay, including waiting time and treatment time. The turnaround times of supportive services, such as lab, radiology and medications, were also included. Output indicators included boarding time and available hospital beds, ICU beds and patients waiting for admission.

  16. Predicting the synaptic information efficacy in cortical layer 5 pyramidal neurons using a minimal integrate-and-fire model.

    PubMed

    London, Michael; Larkum, Matthew E; Häusser, Michael

    2008-11-01

    Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on the average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE, thus, could be a useful tool for assessing the quality of models of single neurons in preserving input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.
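
    A crude sketch of the information-theoretic ingredient used above: a plug-in estimate of the mutual information between two spike trains after binning them into binary spike-count symbols. It is only a rough proxy for SIE, which conditions on the whole input spike train, and the binning choices below are assumptions.

        import numpy as np

        def binned_mutual_information(spikes_a, spikes_b, t_max, bin_width):
            """Plug-in mutual information (bits) between binned spike counts.

            spikes_a, spikes_b: arrays of spike times; counts are clipped to 0/1
            so each bin contributes a binary symbol.
            """
            edges = np.arange(0.0, t_max + bin_width, bin_width)
            a = np.clip(np.histogram(spikes_a, edges)[0], 0, 1)
            b = np.clip(np.histogram(spikes_b, edges)[0], 0, 1)
            joint = np.zeros((2, 2))
            for ai, bi in zip(a, b):
                joint[ai, bi] += 1
            joint /= joint.sum()
            pa, pb = joint.sum(axis=1), joint.sum(axis=0)
            mi = 0.0
            for i in range(2):
                for j in range(2):
                    if joint[i, j] > 0:
                        mi += joint[i, j] * np.log2(joint[i, j] / (pa[i] * pb[j]))
            return mi

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            inp = np.sort(rng.uniform(0, 100.0, 400))
            out = inp + rng.normal(0.5, 0.2, inp.size)        # delayed, jittered copy
            print("MI (bits/bin):", binned_mutual_information(inp, out, 100.0, 0.5))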

  17. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  18. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
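
    A minimal sketch of the optimization underlying an interval predictor: for a predictor that is linear in its parameters, find the tightest pair of lower and upper lines whose interval covers every observation, which is a linear program. The outlier-elimination step and the reliability bound from the paper are not reproduced, and scipy is used purely for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def fit_interval_predictor(x, y):
            """Fit lower/upper lines  l(x)=al+bl*x,  u(x)=au+bu*x  that enclose all
            observations while minimizing the average interval width (an LP).
            Returns (al, bl, au, bu).
            """
            x, y = np.asarray(x, float), np.asarray(y, float)
            n = x.size
            # decision variables z = [al, bl, au, bu]; objective = mean interval width
            c = np.array([-1.0, -x.mean(), 1.0, x.mean()])
            A_lower = np.c_[np.ones(n), x, np.zeros(n), np.zeros(n)]    # l(x_i) <= y_i
            A_upper = np.c_[np.zeros(n), np.zeros(n), -np.ones(n), -x]  # -u(x_i) <= -y_i
            res = linprog(c, A_ub=np.vstack([A_lower, A_upper]),
                          b_ub=np.r_[y, -y], bounds=[(None, None)] * 4, method="highs")
            return res.x

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            x = np.linspace(0, 10, 80)
            y = 2.0 * x + 1.0 + rng.uniform(-1.5, 1.5, x.size)
            al, bl, au, bu = fit_interval_predictor(x, y)
            inside = np.all((al + bl * x <= y + 1e-6) & (y <= au + bu * x + 1e-6))
            print("all observations inside interval:", inside,
                  "| mean width:", np.mean((au - al) + (bu - bl) * x))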

  19. Thermo Scientific Sulfur Dioxide Analyzer Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springston, S. R.

    The Sulfur Dioxide Analyzer measures sulfur dioxide based on absorbance of UV light at one wavelength by SO2 molecules which then decay to a lower energy state by emitting UV light at a longer wavelength. Specifically, SO2 + hν1 → SO2* → SO2 + hν2. The emitted light is proportional to the concentration of SO2 in the optical cell. External communication with the analyzer is available through an Ethernet port configured through the instrument network of the AOS systems. The Model 43i-TLE is part of the i-series of Thermo Scientific instruments. The i-series instruments are designed to interface with external computers through the proprietary Thermo Scientific iPort Software. However, this software is somewhat cumbersome and inflexible. BNL has written an interface program in National Instruments LabView that both controls the Model 43i-TLE Analyzer AND queries the unit for all measurement and housekeeping data. The LabView vi (the software program written by BNL) ingests all raw data from the instrument and outputs raw data files in a uniform data format similar to other instruments in the AOS and described more fully in Section 6.0 below.

  20. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The output of these models is sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
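
    A minimal sketch of what an "imperfect response" model can look like: a generator that tracks its dispatch setpoint through a first-order lag and a ramp-rate limit instead of following it exactly. Both the structure and the parameter values are illustrative assumptions, not the model from the report.

        import numpy as np

        def imperfect_generator(setpoints, dt_min=5.0, ramp_limit=2.0, tau_min=8.0, p0=50.0):
            """Track a dispatch setpoint (MW) with a first-order lag and a ramp limit.

            setpoints : MW setpoint at each dispatch interval
            dt_min    : interval length in minutes
            ramp_limit: max |dP| in MW per minute
            tau_min   : lag time constant in minutes
            """
            p = p0
            output = []
            for sp in setpoints:
                desired_move = (sp - p) * (1.0 - np.exp(-dt_min / tau_min))  # lagged response
                max_move = ramp_limit * dt_min                               # ramp constraint
                p += np.clip(desired_move, -max_move, max_move)
                output.append(p)
            return np.array(output)

        if __name__ == "__main__":
            sp = np.array([50, 80, 80, 80, 40, 40, 90, 90, 90, 90], dtype=float)
            actual = imperfect_generator(sp)
            print("setpoint:", sp)
            print("actual  :", np.round(actual, 1))
            print("mean absolute deviation (MW):", np.abs(actual - sp).mean().round(2))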

  1. Predicting High-Power Performance in Professional Cyclists.

    PubMed

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
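
    A small sketch of the power-duration relationship described above: power above maximal aerobic power decays exponentially from sprint peak power as duration grows, so predicting a t-second all-out effort needs only three quantities. The exponential form follows the description in the abstract; the parameter values below are made up for illustration, not data from the study.

        import numpy as np

        def apr_predicted_power(t_sec, map_w, sprint_peak_w, k):
            """Anaerobic-power-reserve prediction of mean power (W) for an all-out
            effort of duration t_sec: power above MAP decays exponentially with
            rate constant k (1/s) from the sprint peak.
            """
            t = np.asarray(t_sec, dtype=float)
            return map_w + (sprint_peak_w - map_w) * np.exp(-k * t)

        if __name__ == "__main__":
            MAP, PEAK, K = 430.0, 1300.0, 0.025      # illustrative values only
            for t in (5, 15, 30, 60, 120, 300):
                print(f"{t:>4} s  ->  {apr_predicted_power(t, MAP, PEAK, K):6.0f} W")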

  2. Finite element method simulating temperature distribution in skin induced by 980-nm pulsed laser based on pain stimulation.

    PubMed

    Wang, Han; Dong, Xiao-Xi; Yang, Ji-Chun; Huang, He; Li, Ying-Xin; Zhang, Hai-Xia

    2017-07-01

    For predicting the temperature distribution within skin tissue in 980-nm laser-evoked potentials (LEPs) experiments, a five-layer finite element model (FEM-5) was constructed based on the Pennes bio-heat conduction equation and the Lambert-Beer law. The prediction results of the FEM-5 model were verified by ex vivo pig skin and in vivo rat experiments. Thirty ex vivo pig skin samples were used to verify the temperature distribution predicted by the model. The output energy of the laser was 1.8, 3, and 4.4 J. The laser spot radius was 1 mm. The experiment time was 30 s. The laser stimulated the surface of the ex vivo pig skin beginning at 10 s and lasted for 40 ms. A thermocouple thermometer was used to measure the temperature of the surface and internal layers of the ex vivo pig skin, and the sampling frequency was set to 60 Hz. For the in vivo experiments, nine adult male Wistar rats weighing 180 ± 10 g were used to verify the prediction results of the model by tail-flick latency. The output energy of the laser was 1.4 and 2.08 J. The pulse width was 40 ms. The laser spot radius was 1 mm. The Pearson product-moment correlation and Kruskal-Wallis test were used to analyze the correlation and the difference of the data. The results of all experiments showed that the measured and predicted data had no significant difference (P > 0.05) and good correlation (r > 0.9). The safe laser output energy range (1.8-3 J) was also predicted. Using the FEM-5 model prediction, the effective pain depth could be accurately controlled, and the nociceptors could be selectively activated. The FEM-5 model can be extended to guide experimental research and clinical applications for humans.
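
    A minimal 1D finite-difference sketch of the two ingredients named above, the Pennes bio-heat equation with a Beer-Lambert laser source, rather than the authors' five-layer FEM; the tissue properties, optical coefficients and laser settings are rough literature-style assumptions.

        import numpy as np

        def pennes_1d(depth_m=5e-3, nz=200, t_end=0.15, dt=5e-4,
                      k=0.5, rho=1000.0, c=3600.0,           # tissue conductivity / density / heat capacity
                      w_b=0.5e-3, rho_b=1060.0, c_b=3770.0,  # blood perfusion and blood properties
                      mu_a=50.0, fluence_rate=1.4e7,         # absorption (1/m) and surface irradiance (W/m^2)
                      pulse=(0.1, 0.14), T_core=37.0):
            """Explicit finite-difference solution of the 1D Pennes bio-heat equation
            with a Beer-Lambert volumetric laser source active during `pulse` (s)."""
            dz = depth_m / (nz - 1)
            z = np.linspace(0.0, depth_m, nz)
            T = np.full(nz, T_core)
            alpha = k / (rho * c)
            assert dt < dz**2 / (2 * alpha), "explicit scheme unstable with this dt"
            for step in range(int(t_end / dt)):
                t = step * dt
                lap = np.zeros(nz)
                lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
                source = np.zeros(nz)
                if pulse[0] <= t < pulse[1]:
                    source = mu_a * fluence_rate * np.exp(-mu_a * z)   # Beer-Lambert deposition
                perfusion = rho_b * c_b * w_b * (T_core - T)
                T = T + dt * (alpha * lap + (perfusion + source) / (rho * c))
                T[-1] = T_core                                          # deep boundary at core temperature
                T[0] = T[1]                                             # insulated surface (simplification)
            return z, T

        if __name__ == "__main__":
            z, T = pennes_1d()          # t_end ends just after the 40 ms pulse
            print("peak temperature (C):", T.max().round(2),
                  "at depth (mm):", (z[T.argmax()] * 1e3).round(2))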

  3. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pireddu, Marina, E-mail: marina.pireddu@unimib.it

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called “Stretching Along the Paths” technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows one to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which a positive topological entropy.

  4. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.

  5. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction... We propose two optimization models. The first, the Trade Sanction Inoperability Input-Output Model (TS-IIM), selects the sector or set of sectors that... Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection. Unpublished doctoral dissertation

  6. Steady-state kinetic analysis of triacylglycerol delivery into mesenteric lymph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansbach, C.M. II; Arnold, A.

    1986-08-01

    The output of triacylglycerol in chylomicrons can be increased 60% by prefeeding rats with a 20% fat diet or 110% by including phosphatidylcholine in a lipid infusion. The present study was designed to determine whether the increment was due to an expansion of the chylomicron triacylglycerol precursor pool or an increase in its fractional turnover rate. A steady-state kinetic model was established in rats receiving 135 mol glyceryl trioleate/h. The decay in specific activity of triacylglycerol after removal of radiolabeled glyceryl trioleate from the duodenal infusate was followed for 4 h and analyzed by the SAAM 23 program. It was found that the fractional turnover rate of the chylomicron precursor pool remained the same in each experimental condition. However, the pool was found to expand in direct proportion to the chylomicron triacylglycerol output. Functionally, the infused (3H)glyceride-glycerol and tri(14C)oleate behaved the same in lymph chylomicrons and was 90% of infusate specific activity. In summary, these data suggest that increases in chylomicron triacylglycerol output are dependent on the size of the mucosal precursor pool and the monoacylglycerol acyltransferase synthetic pathway for its triacylglycerol.

  7. An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor

    PubMed Central

    Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui

    2017-01-01

    In this work, an elimination method of the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using the cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on geometric rotation effect. First, the differential equations that indicate the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of the curve spirals. The optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB, which should be greater than 7.42 to achieve the output error requirement that is not greater than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be controlled less than 0.43% based on this elimination method within the range from −20 °C to 40 °C. PMID:28282953

  8. Numerical analysis of 2.7 μm lasing in Er3+-doped tellurite fiber lasers

    PubMed Central

    Wang, Weichao; Li, Lixiu; Chen, Dongdan; Zhang, Qinyuan

    2016-01-01

    The laser performance of Er3+-doped tellurite fiber lasers operating at 2.7 μm due to the 4I11/2 → 4I13/2 transition has been theoretically studied by using rate equations and propagation equations. The effects of pumping configuration and fiber length on the output power, slope efficiency, threshold, and intracavity pump and laser power distributions have been systematically investigated to optimize the performance of fiber lasers. When the pump power is 20 W, the maximum slope efficiency (27.62%), maximum output power (5.219 W), and minimum threshold (278.90 mW) are predicted with different fiber lengths (0.05–5 m) under three pumping configurations. It is also found that reasonable output power is expected for fiber loss below 2 dB/m. Numerical modeling of the two- and three-dimensional laser field distributions is further performed to reveal the characteristics of this multimode step-index tellurite fiber. Preliminary simulation results show that this Er3+-doped tellurite fiber is an excellent alternative to conventional fluoride fiber for developing efficient 2.7 μm fiber lasers. PMID:27545663

  9. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as a part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. All of these efforts are combined, culminating in an extrapolation of values that are in turn used to support previous efforts in evaluating the data.

  10. A comparison of model short-range forecasts and the ARM Microbase data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hnilo, J J

    2006-09-22

    For the fourth quarter ARM metric, we will make use of new liquid water data that has become available, called the 'Microbase' value-added product (referred to as OBS within the text), at three sites: the North Slope of Alaska (NSA), the Tropical West Pacific (TWP) and the Southern Great Plains (SGP), and compare these observations to model forecast data. Two time periods will be analyzed: March 2000 for the SGP and October 2004 for both TWP and NSA. The Microbase data have been averaged to 35 pressure levels (e.g., from 1000 hPa to 100 hPa at 25 hPa increments) and time-averaged to 3-hourly data for direct comparison to our model output.

  11. Hybrid Wing Body Planform Design with Vehicle Sketch Pad

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Olson, Erik D.

    2011-01-01

    The objective of this paper was to provide an update on NASA's current tools for design and analysis of hybrid wing body (HWB) aircraft, with an emphasis on Vehicle Sketch Pad (VSP). NASA started HWB analysis using the Flight Optimization System (FLOPS). That capability is enhanced using Phoenix Integration's ModelCenter (Registered Trademark). ModelCenter enables multifidelity analysis tools to be linked as an integrated structure. Two major components are linked to FLOPS as an example: a planform discretization tool and VSP. The planform discretization tool ensures the planform is smooth and continuous. VSP is used to display the output geometry. This example shows that a smooth and continuous HWB planform can be displayed as a three-dimensional model and rapidly sized and analyzed.

  12. Hydrodynamic modeling of hydrologic surface connectivity within a coastal river-floodplain system

    NASA Astrophysics Data System (ADS)

    Castillo, C. R.; Guneralp, I.

    2017-12-01

    Hydrologic surface connectivity (HSC) within river-floodplain environments is a useful indicator of the overall health of riparian habitats because it allows connections amongst components/landforms of the riverine landscape system to be quantified. Overbank flows have traditionally been the focus for analyses concerned with river-floodplain connectivity, but recent works have identified the large significance of sub-bankfull streamflows. Through the use of morphometric analysis and a digital elevation model that is relative to the river water surface, we previously determined that >50% of the floodplain for Mission River on the Coastal Bend of Texas becomes connected to the river at streamflows well below bankfull conditions. Guided by streamflow records, field-based inundation data, and morphometric analysis, we develop a two-dimensional hydrodynamic model for lower portions of the Mission River floodplain system. This model not only allows us to analyze connections induced by surface water inundation, but also other aspects of the hydrologic connectivity concept such as exchanges of sediment and energy between the river and its floodplain. We also aggregate hydrodynamic model outputs to an object/landform level in order to analyze HSC and associated attributes using measures from graph/network theory. Combining physically-based hydrodynamic models with object-based and graph-theoretical analyses allows river-floodplain connectivity to be quantified in a consistent manner with measures/indicators commonly used in landscape analysis. Analyses similar to ours build towards the establishment of a formal framework for analyzing river-floodplain interaction that will ultimately serve to inform the management of riverine/floodplain environments.

  13. Economies of Scope in Distance Education: The Case of Chinese Research Universities

    ERIC Educational Resources Information Center

    Li, Fengliang; Chen, Xinlei

    2012-01-01

    With the rapid development of information technologies, distance education has become "another form of product differentiation in the output mix produced by the multi-product university or college" (Cohn & Cooper, 2004, p. 607). This article aims at analyzing the economies of scope of distance education (as an educational output) in…

  14. Network Capacity Assessment of CHP-based Distributed Generation on Urban Energy Distribution Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xianjun

    The combined heat and power (CHP)-based distributed generation (DG) or distributed energy resources (DERs) are mature options available in the present energy market and are considered an effective solution to promote energy efficiency. In the urban environment, the electricity, water, and natural gas distribution networks are becoming increasingly interconnected with the growing penetration of CHP-based DG. This emerging interdependence leads to new topics meriting serious consideration: how much CHP-based DG can be accommodated, where to locate these DERs, and, given preexisting constraints, how to quantify the mutual impacts on operating performance between these urban energy distribution networks and the CHP-based DG. Early research work investigated the feasibility and design methods for a residential microgrid system based on the existing electricity, water, and gas infrastructures of a residential community, focusing mainly on economic planning. However, that design method cannot determine the optimal DG sizing and siting for a larger test bed given only information about the energy infrastructures. In this context, a more systematic and generalized approach should be developed to solve these problems. In the later study, a model architecture that integrates urban electricity, water, and gas distribution networks with the CHP-based DG system was developed. The proposed approach addresses the challenge of identifying the optimal sizing and siting of CHP-based DG on these urban energy networks, and the mutual impacts on operating performance were also quantified. The overall objective is to maximize the electrical output and recovered thermal output of the CHP-based DG units. The electricity, gas, and water system models were developed individually and coupled through the CHP-based DG system model. The resulting integrated system model is used to constrain the DG's electrical output and recovered thermal output, which are affected by multiple factors and are therefore analyzed in different case studies. The results indicate that the designed typical gas system is capable of supplying sufficient natural gas for normal DG operation, while the present water system cannot support the complete recovery of the exhaust heat from the DG units.
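
    Under heavy simplification, the sizing problem described above can be illustrated as a small linear program that maximizes combined electrical and recovered thermal output subject to gas-supply and heat-recovery limits. All coefficients, bounds, and capacities below are hypothetical placeholders, not values from the study.

    # Toy illustration only: choose electrical outputs P_i for candidate CHP-DG
    # units to maximize electrical plus recovered thermal output, subject to an
    # aggregate gas-supply limit and a heat-recovery (cooling water) limit.
    import numpy as np
    from scipy.optimize import linprog

    n_units = 3
    heat_ratio = 0.8        # recovered thermal output per unit electrical output (assumed)
    gas_per_kwe = 2.6       # gas demand per unit electrical output (assumed)
    gas_limit = 900.0       # available gas supply (assumed)
    heat_limit = 250.0      # heat the water system can absorb (assumed)
    p_max = np.array([150.0, 200.0, 120.0])   # unit electrical capacities (assumed)

    # linprog minimizes, so negate the objective (electrical + recovered thermal)
    c = -(1.0 + heat_ratio) * np.ones(n_units)
    A_ub = np.vstack([gas_per_kwe * np.ones(n_units),   # total gas demand <= gas_limit
                      heat_ratio * np.ones(n_units)])   # total recovered heat <= heat_limit
    b_ub = np.array([gas_limit, heat_limit])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p) for p in p_max])
    print("electrical outputs:", res.x, "objective:", -res.fun)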

  15. PULSE HEIGHT ANALYZER

    DOEpatents

    Johnstone, C.W.

    1958-01-21

    An anticoincidence device is described for a pair of adjacent channels of a multi-channel pulse height analyzer for preventing the lower channel from generating a count pulse in response to an input pulse when the input pulse has sufficient magnitude to reach the upper level channel. The anticoincidence circuit comprises a window amplifier, upper and lower level discriminators, and a biased-off amplifier. The output of the window amplifier is coupled to the inputs of the discriminators, the output of the upper level discriminator is connected to the resistance end of a series R-C network, the output of the lower level discriminator is coupled to the capacitance end of the R-C network, and the grid of the biased-off amplifier is coupled to the junction of the R-C network. In operation each discriminator produces a negative pulse output when the input pulse traverses its voltage setting. As a result of the connections to the R-C network, a trigger pulse will be sent to the biased-off amplifier when the incoming pulse level is sufficient to trigger only the lower level discriminator.

  16. Transport coefficient computation based on input/output reduced order models

    NASA Astrophysics Data System (ADS)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems-theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, the nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about a nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization, or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator equation: a recursive method that solves nonlinear ODEs by solving an LTV system at each iteration to obtain a new, closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in a form most amenable to Taylor series expansion and are the models most commonly used to examine viscosity. With the linear models developed, a method is presented to calculate viscosity based on LTI Gosling models, but it is shown to have some limitations. To address these issues, LTV SLLOD models are analyzed with both balanced truncation and POD, and both show that significant order reduction is possible. Examining the singular values of both techniques shows that balanced truncation has the potential to offer greater reduction, which should be expected because it is based on the input/output mapping instead of just the state information, as in POD. Obtaining reduced-order systems that capture the property of interest is challenging. With balanced truncation, reduced-order models for 1-D LJ and FENE systems are obtained and are shown to capture the output of interest fairly well; however, numerical challenges currently limit this analysis to small-order systems. Suggestions are presented to extend this method to larger systems. In addition, reduced second-order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold along which the solution travels. The remaining challenges are presented and discussed.
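
    As a simple illustration of the SVD-based reduction discussed above, the following sketch builds a POD basis from snapshots of a state trajectory and projects a linear(ized) system onto it. It is a generic POD/Galerkin recipe under placeholder dynamics, not the specific Gosling or SLLOD implementation used in the thesis.

    # Generic POD sketch: build a reduced basis from state snapshots via SVD and
    # project a linear(ized) system x' = A x + B u onto it. A, B, and the snapshot
    # matrix here are placeholders, not the thesis' molecular dynamics models.
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """snapshots: (n_states, n_snapshots) array of state trajectory samples."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1      # smallest rank capturing 'energy'
        return U[:, :r]

    rng = np.random.default_rng(0)
    n, m = 200, 60
    A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # placeholder dynamics
    B = rng.standard_normal((n, 1))
    X = rng.standard_normal((n, m))                        # placeholder snapshot matrix

    Phi = pod_basis(X)
    A_r = Phi.T @ A @ Phi      # reduced dynamics
    B_r = Phi.T @ B            # reduced input map
    print("reduced order:", Phi.shape[1])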

  17. PICARD - A PIpeline for Combining and Analyzing Reduced Data

    NASA Astrophysics Data System (ADS)

    Gibb, Andrew G.; Jenness, Tim; Economou, Frossie

    PICARD is a facility for combining and analyzing reduced data, normally the output from the ORAC-DR data reduction pipeline. This document provides an introduction to using PICARD for processing instrument-independent data.

  18. Output power distributions of mobile radio base stations based on network measurements

    NASA Astrophysics Data System (ADS)

    Colombi, D.; Thors, B.; Persson, T.; Wirén, N.; Larsson, L.-E.; Törnevik, C.

    2013-04-01

    In this work, output power distributions of mobile radio base stations have been analyzed for 2G and 3G telecommunication systems. The approach is based on measurements in selected networks using performance surveillance tools that are part of the network Operational Support System (OSS). For the 3G network considered, direct measurements of output power levels were possible, while for the 2G networks, output power levels were estimated from measurements of traffic volumes. Both voice and data services were included in the investigation. Measurements were conducted for large geographical areas, to ensure good overall statistics, as well as for smaller areas, to investigate the impact of different environments. For high-traffic hours, the 90th percentile of the averaged output power was found to be below 65% and 45% of the available output power for the 2G and 3G systems, respectively.

  19. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  20. Numerical evaluation of implantable hearing devices using a finite element model of human ear considering viscoelastic properties.

    PubMed

    Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi

    2016-08-01

    The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear, including the external ear, middle ear, and inner ear, was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. Viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlear functions, showed better agreement with experimental data at frequencies above 3000 Hz than those obtained with Rayleigh-type damping. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was further constructed. Based on the electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelasticity and Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated using a finite element model considering viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. A finite element model considering viscoelastic properties is therefore more accurate for numerically evaluating implantable hearing devices. © IMechE 2016.

  1. Universal MOSFET parameter analyzer

    NASA Astrophysics Data System (ADS)

    Klekachev, A. V.; Kuznetsov, S. N.; Pikulev, V. B.; Gurtov, V. A.

    2006-05-01

    A MOSFET analyzer is developed to extract the most important parameters of transistors. In addition to routine DC transfer and output characteristics, the analyzer provides an evaluation of interface state density by applying the charge pumping technique. Two features distinguish the analyzer from similar products of other vendors. It is a compact (100 × 80 × 50 mm³) and lightweight (< 200 g) instrument with ultra-low power consumption (< 2.5 W). The analyzer operates under the control of an IBM PC by means of a USB interface that simultaneously provides the power supply. Owing to the USB-compatible microcontroller used as its basic element, the analyzer offers a cost-effective solution for diverse applications. The enclosed software runs under the Windows 98/2000/XP operating systems and has a convenient graphical interface that simplifies measurements for untrained users. Operational characteristics of the analyzer are as follows: gate and drain output voltages within ±10 V; measured current range of 1 pA to 10 mA; lowest measurable interface state density of ~10^9 cm^-2 eV^-1. The instrument was designed using components from Cypress and Analog Devices (USA).

  2. Orbital Maneuvering Engine Feed System Coupled Stability Investigation, Computer User's Manual

    NASA Technical Reports Server (NTRS)

    Schuman, M. D.; Fertig, K. W.; Hunting, J. K.; Kahn, D. R.

    1975-01-01

    An operating manual for the feed system coupled stability model was given, in partial fulfillment of a program designed to develop, verify, and document a digital computer model that can be used to analyze and predict engine/feed system coupled instabilities in pressure-fed storable propellant propulsion systems over a frequency range of 10 to 1,000 Hz. The first section describes the analytical approach to modelling the feed system hydrodynamics, combustion dynamics, chamber dynamics, and overall engineering model structure, and presents the governing equations in each of the technical areas. This is followed by the program user's guide, which is a complete description of the structure and operation of the computerized model. Last, appendices provide an alphabetized FORTRAN symbol table, detailed program logic diagrams, computer code listings, and sample case input and output data listings.

  3. Procedures for generation and reduction of linear models of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Cwynar, D. S.

    1978-01-01

    A real-time hybrid simulation of the Pratt & Whitney F100 turbofan engine was used for linear-model generation. The linear models were used to analyze the effect of disturbances about an operating point on the dynamic performance of the engine. A procedure that disturbs, samples, and records the state and control variables was developed. For large systems, such as the F100 engine, the state vector is large and may contain high-frequency information not required for control. Thus, reducing the full-state model to a reduced-order model may be a practical approach to simplifying the control design. A reduction technique was developed to generate reduced-order models. Selected linear and nonlinear output responses to exhaust-nozzle area and main-burner fuel flow disturbances are presented for comparison.

  4. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  5. Frontal Representation as a Metric of Model Performance

    NASA Astrophysics Data System (ADS)

    Douglass, E.; Mask, A. C.

    2017-12-01

    The representation of fronts detected by altimetry is used to evaluate the performance of the HYCOM global operational product. Fronts are detected and assessed in daily alongtrack altimetry. Then, modeled sea surface height is interpolated to the locations of the alongtrack observations, and the same frontal detection algorithm is applied to the interpolated model output. The percentage of fronts found in the altimetry and replicated in the model gives a score (0-100) that assesses the model's ability to replicate fronts in the proper location with the proper orientation. Further information can be obtained by determining the number of "extra" fronts found in the model but not in the altimetry, and by assessing the horizontal and vertical dimensions of the fronts in the model as compared to observations. Finally, the sensitivity of this metric to choices regarding the smoothing of noisy alongtrack altimetry observations, and to the minimum size of fronts being analyzed, is assessed.
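
    A minimal sketch of the kind of score described above: given fronts detected in alongtrack altimetry and in model sea surface height interpolated to the same track, count the fraction of observed fronts matched by a modeled front within a tolerance. The detection step, matching tolerance, and names are assumptions, not the authors' algorithm.

    # Toy front-matching score: an observed front counts as replicated if a modeled
    # front lies within `tol_km` along the track. Extra modeled fronts are counted too.
    import numpy as np

    def front_score(obs_front_km, model_front_km, tol_km=25.0):
        """Both inputs: 1-D arrays of along-track positions (km) of detected fronts."""
        if len(obs_front_km) == 0:
            return np.nan, len(model_front_km)
        matched = sum(np.any(np.abs(model_front_km - pos) <= tol_km) for pos in obs_front_km)
        hit_rate = 100.0 * matched / len(obs_front_km)     # 0-100 score
        extra = max(len(model_front_km) - matched, 0)      # "extra" modeled fronts
        return hit_rate, extra

    score, extras = front_score(np.array([120.0, 340.0, 560.0]),
                                np.array([115.0, 200.0, 555.0, 700.0]))
    print(score, extras)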

  6. Titan I propulsion system modeling and possible performance improvements

    NASA Astrophysics Data System (ADS)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs, including, for example, bipropellant combinations, pressures, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and specific impulse (Isp). Experimental data are provided to compare the original engine configuration models with the models incorporating the suggested improvements.

  7. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method for fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only indicate the importance of the fuzzy inputs but also reflect, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a valuable reference for engineering design and the optimization of structural systems.

  8. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in their inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport, and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were lower and more uniform than those obtained with likelihood functions composed of individual calibration criteria. This new and successful application of the GLUE method for determining the uncertainty and sensitivity of the RZWQM2 could provide a reference for the optimization of model parameters with different emphases according to research interests.
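
    The GLUE workflow described above can be sketched generically: draw parameter sets with Latin hypercube sampling, run the model, compute an informal likelihood from the fit to observations, and keep "behavioral" sets to form uncertainty bounds. The model function, parameter ranges, likelihood, and threshold below are placeholders, not the RZWQM2 setup.

    # Generic GLUE sketch with Latin hypercube sampling. `run_model` stands in for
    # an actual simulator (e.g., a call to RZWQM2); here it is a trivial placeholder.
    import numpy as np
    from scipy.stats import qmc

    def run_model(theta):                       # placeholder simulator: 1-D output series
        t = np.linspace(0.0, 1.0, 50)
        return theta[0] * np.exp(-theta[1] * t)

    def glue(obs, bounds, n_samples=2000, keep_frac=0.1, seed=0):
        unit = qmc.LatinHypercube(d=len(bounds), seed=seed).random(n_samples)
        thetas = qmc.scale(unit, [b[0] for b in bounds], [b[1] for b in bounds])
        sims = np.array([run_model(th) for th in thetas])
        # Nash-Sutcliffe-style informal likelihood; other choices can be substituted
        like = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
        keep = like >= np.quantile(like, 1.0 - keep_frac)   # "behavioral" runs
        w = np.maximum(like[keep], 0.0)
        w = w / w.sum() if w.sum() > 0 else np.full(w.size, 1.0 / w.size)
        behavioral = sims[keep]
        lo = np.quantile(behavioral, 0.05, axis=0)   # unweighted bounds for brevity;
        hi = np.quantile(behavioral, 0.95, axis=0)   # GLUE typically weights by likelihood
        return thetas[keep], w, lo, hi

    obs = run_model([1.0, 3.0]) + 0.02 * np.random.default_rng(1).standard_normal(50)
    params, weights, lo, hi = glue(obs, bounds=[(0.5, 1.5), (1.0, 5.0)])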

  9. Vulnerability assessment and risk level of ecosystem services for climate change impacts and adaptation in the High-Atlas mountain of Morocco

    NASA Astrophysics Data System (ADS)

    Messouli, Mohammed; Bounoua, Lahouari; Babqiqi, Abdelaziz; Ben Salem, Abdelkrim; Yacoubi-Khebiza, Mohammed

    2010-05-01

    Moroccan mountain biomes are considered endangered due to climate change, which directly or indirectly affects different key features (biodiversity, snow cover, runoff processes, and water availability). The present article describes the strategy for achieving collaboration between natural and social scientists, stakeholders, decision-makers, and other societal groups, in order to carry out an integrated assessment of climate change in the High-Atlas Mountains of Morocco, with an emphasis on vulnerability and adaptation. We will use a robust statistical technique to dynamically downscale outputs from the IPCC climate models to the regional study area. Statistical downscaling provides a powerful method for deriving local-to-regional scale information on climate variables from large-scale climate model outputs. The SDSM will be used to produce high-resolution climate change scenarios from low-resolution climate model outputs. These data will be combined with socio-economic attributes such as the amount of water used for irrigation of agricultural lands, agricultural practices and phenology, the cost of water delivery, and the non-market values of produced goods and services. This study also analyzed spatial and temporal land use/land cover changes (LUCC) in a typical watershed covering an area of 203 km2 by comparing classified satellite images from 1976, 1989, and 2000, coupled with GIS analyses, and investigated changes in the shape of land-use patches over the period. The GIS platform, which compiles gridded spatial and temporal information on environmental, socio-economic, and biophysical data, is used to map vulnerability and risk levels over a wide region of the southern High Atlas. For each scenario, we will derive and analyze near-future (10-15 years) key climate indicators strongly related to the sustainable management of ecosystem goods and services. Forest cover declined at an average rate of 0.35 ha per year due to timber extraction, cultivation, grazing, and urbanization processes. Historically, cultivation has resulted in such a high loss of plant communities in the lowlands that regional diversity has been threatened. Grazing has increased due to low labor costs and economic policies that provide incentives for cattle production in Morocco. Finally, to address the interaction among ecosystem services, we will use the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) tool recently developed by the Natural Capital Project. The "Tier 1" models are theoretically grounded but simple, and are designed for areas where few data are available. The most useful applications of the simple Tier 1 models are to identify areas of high and low ecosystem service production and biodiversity across the mountains and to illuminate the tradeoffs and synergies among services under current or future conditions. While some Tier 1 models give outputs in absolute terms, others return relative indices of importance.

  10. Tidal Turbine Array Optimization Based on the Discrete Particle Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Guo-wei; Wu, He; Wang, Xiao-yong; Zhou, Qing-wei; Liu, Xiao-man

    2018-06-01

    Considering the resources wasted by an unreasonable layout of tidal current turbines, which influences the ratio of cost to power output, a particle swarm optimization algorithm is introduced and improved in this paper. To solve the optimal-array problem for tidal turbines, a discrete particle swarm optimization (DPSO) algorithm is implemented by redefining the update strategies for particle velocity and position. This paper analyzes the optimization problem of micrositing of tidal current turbines by adjusting each turbine's position so that the total electric power is maximized at the maximum speeds of the flood and ebb tides. First, the optimal number of installed turbines is determined by maximizing the output energy of the given tidal farm using the Farm/Flux and empirical methods. Second, considering the wake effect, a reasonable spacing between turbines, and the factors influencing tidal velocities in the farm, the Jensen wake model and an elliptic distribution model are selected to calculate the turbines' total generating capacity at the maximum speeds of the flood and ebb tides. Finally, the total generating capacity, regarded as the objective function, is calculated in the simulation, so that the DPSO can guide the individuals toward the feasible region and the optimal positions. The results show that the optimization algorithm, which produced 6.19% more resource output than the experience-based method, can be regarded as a good tool for the engineering design of tidal energy demonstrations.
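
    A minimal sketch of a discrete PSO of the kind described above: turbine positions are restricted to cells of a grid, the discrete "velocity" is a probabilistic move toward personal and global best layouts, and the fitness is a stand-in power function. The grid, wake penalty, and parameters are illustrative assumptions, not the paper's Farm/Flux or Jensen-model setup.

    # Toy discrete PSO for turbine micrositing on a grid. The fitness below is a
    # placeholder (power proportional to local speed cubed minus a crude spacing
    # penalty); it is not the Jensen wake / elliptic distribution model of the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    GRID = 20                       # candidate cells along one axis (assumed)
    N_TURBINES, N_PARTICLES, ITERS = 5, 30, 200
    speed = 1.0 + 0.5 * rng.random((GRID, GRID))     # assumed flood-tide speed field

    def fitness(layout):
        """layout: (N_TURBINES, 2) integer grid cells."""
        power = sum(speed[i, j] ** 3 for i, j in layout)
        for a in range(len(layout)):                 # discourage turbines closer than 2 cells
            for b in range(a + 1, len(layout)):
                if np.linalg.norm(layout[a] - layout[b]) < 2:
                    power -= 1.0
        return power

    swarm = [rng.integers(0, GRID, size=(N_TURBINES, 2)) for _ in range(N_PARTICLES)]
    pbest = [s.copy() for s in swarm]
    gbest = max(pbest, key=fitness).copy()

    for _ in range(ITERS):
        for k, layout in enumerate(swarm):
            # discrete "velocity": move each turbine one cell toward its personal or
            # global best with some probability, else jitter randomly
            r = rng.random((N_TURBINES, 1))
            step = np.where(r < 0.5, np.sign(pbest[k] - layout),
                            np.where(r < 0.8, np.sign(gbest - layout),
                                     rng.integers(-1, 2, size=(N_TURBINES, 2))))
            layout[:] = np.clip(layout + step, 0, GRID - 1)
            if fitness(layout) > fitness(pbest[k]):
                pbest[k] = layout.copy()
        gbest = max(pbest, key=fitness).copy()

    print("best layout:", gbest, "power:", fitness(gbest))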

  11. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-03

    The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable for sprinting or variable conditions (wind, undulating course, etc.). It is suggested that, despite some limitations, the models give valuable information about improving cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
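
    A minimal sketch of the trade-off described above, assuming a drag area that grows with torso angle and a sustainable power that also grows with torso angle (both assumed linear forms, not the study's measured relationships): for each speed, pick the torso angle that leaves the largest power surplus.

    # Toy torso-angle trade-off: aerodynamic demand falls with a flatter torso,
    # while available power rises with a more upright torso. All constants and
    # the linear relationships are illustrative assumptions only.
    import numpy as np

    RHO, MASS, CRR, G = 1.2, 80.0, 0.004, 9.81
    angles = np.arange(0.0, 25.0, 1.0)                 # torso angle, degrees

    def cda(angle_deg):            # assumed: drag area grows with torso angle
        return 0.22 + 0.002 * angle_deg

    def max_power(angle_deg):      # assumed: sustainable power grows with torso angle
        return 340.0 + 2.5 * angle_deg

    def required_power(v, angle_deg):
        return 0.5 * RHO * cda(angle_deg) * v**3 + CRR * MASS * G * v

    for kmh in (30, 40, 46, 50):
        v = kmh / 3.6
        surplus = max_power(angles) - required_power(v, angles)
        best = angles[np.argmax(surplus)]
        print(f"{kmh} km/h: best torso angle ~{best:.0f} deg, surplus {surplus.max():.0f} W")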

  12. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.

  13. Rescue of Impaired mGluR5-Driven Endocannabinoid Signaling Restores Prefrontal Cortical Output to Inhibit Pain in Arthritic Rats.

    PubMed

    Kiritoshi, Takaki; Ji, Guangchen; Neugebauer, Volker

    2016-01-20

    The medial prefrontal cortex (mPFC) serves executive functions that are impaired in neuropsychiatric disorders and pain. Underlying mechanisms remain to be determined. Here we advance the novel concept that metabotropic glutamate receptor 5 (mGluR5) fails to engage endocannabinoid (2-AG) signaling to overcome abnormal synaptic inhibition in pain, but restoring endocannabinoid signaling allows mGluR5 to increase mPFC output hence inhibit pain behaviors and mitigate cognitive deficits. Whole-cell patch-clamp recordings were made from layer V pyramidal cells in the infralimbic mPFC in rat brain slices. Electrical and optogenetic stimulations were used to analyze amygdala-driven mPFC activity. A selective mGluR5 activator (VU0360172) increased pyramidal output through an endocannabinoid-dependent mechanism because intracellular inhibition of the major 2-AG synthesizing enzyme diacylglycerol lipase or blockade of CB1 receptors abolished the facilitatory effect of VU0360172. In an arthritis pain model mGluR5 activation failed to overcome abnormal synaptic inhibition and increase pyramidal output. mGluR5 function was rescued by restoring 2-AG-CB1 signaling with a CB1 agonist (ACEA) or inhibitors of postsynaptic 2-AG hydrolyzing enzyme ABHD6 (intracellular WWL70) and monoacylglycerol lipase MGL (JZL184) or by blocking GABAergic inhibition with intracellular picrotoxin. CB1-mediated depolarization-induced suppression of synaptic inhibition (DSI) was also impaired in the pain model but could be restored by coapplication of VU0360172 and ACEA. Stereotaxic coadministration of VU0360172 and ACEA into the infralimbic, but not anterior cingulate, cortex mitigated decision-making deficits and pain behaviors of arthritic animals. The results suggest that rescue of impaired endocannabinoid-dependent mGluR5 function in the mPFC can restore mPFC output and cognitive functions and inhibit pain. Significance statement: Dysfunctions in prefrontal cortical interactions with subcortical brain regions, such as the amygdala, are emerging as important players in neuropsychiatric disorders and pain. This study identifies a novel mechanism and rescue strategy for impaired medial prefrontal cortical function in an animal model of arthritis pain. Specifically, an integrative approach of optogenetics, pharmacology, electrophysiology, and behavior is used to advance the novel concept that a breakdown of metabotropic glutamate receptor subtype mGluR5 and endocannabinoid signaling in infralimbic pyramidal cells fails to control abnormal amygdala-driven synaptic inhibition in the arthritis pain model. Restoring endocannabinoid signaling allows mGluR5 activation to increase infralimbic output hence inhibit pain behaviors and mitigate pain-related cognitive deficits. Copyright © 2016 the authors 0270-6474/16/360837-14$15.00/0.

  14. Identifying human influences on atmospheric temperature

    PubMed Central

    Santer, Benjamin D.; Painter, Jeffrey F.; Mears, Carl A.; Doutriaux, Charles; Caldwell, Peter; Arblaster, Julie M.; Cameron-Smith, Philip J.; Gillett, Nathan P.; Gleckler, Peter J.; Lanzante, John; Perlwitz, Judith; Solomon, Susan; Stott, Peter A.; Taylor, Karl E.; Terray, Laurent; Thorne, Peter W.; Wehner, Michael F.; Wentz, Frank J.; Wigley, Tom M. L.; Wilcox, Laura J.; Zou, Cheng-Zhi

    2013-01-01

    We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing. PMID:23197824

  15. Multimodel simulations of forest harvesting effects on long‐term productivity and CN cycling in aspen forests.

    PubMed

    Wang, Fugui; Mladenoff, David J; Forrester, Jodi A; Blanco, Juan A; Scheller, Robert M; Peckham, Scott D; Keough, Cindy; Lucash, Melissa S; Gower, Stith T

    The effects of forest management on soil carbon (C) and nitrogen (N) dynamics vary by harvest type and species. We simulated long-term effects of bole-only harvesting of aspen (Populus tremuloides) on stand productivity and interaction of CN cycles with a multiple model approach. Five models, Biome-BGC, CENTURY, FORECAST, LANDIS-II with Century-based soil dynamics, and PnET-CN, were run for 350 yr with seven harvesting events on nutrient-poor, sandy soils representing northwestern Wisconsin, United States. Twenty CN state and flux variables were summarized from the models' outputs and statistically analyzed using ordination and variance analysis methods. The multiple models' averages suggest that bole-only harvest would not significantly affect long-term site productivity of aspen, though declines in soil organic matter and soil N were significant. Along with direct N removal by harvesting, extensive leaching after harvesting before canopy closure was another major cause of N depletion. These five models were notably different in output values of the 20 variables examined, although there were some similarities for certain variables. PnET-CN produced unique results for every variable, and CENTURY showed fewer outliers and similar temporal patterns to the mean of all models. In general, we demonstrated that when there are no site-specific data for fine-scale calibration and evaluation of a single model, the multiple model approach may be a more robust approach for long-term simulations. In addition, multimodeling may also improve the calibration and evaluation of an individual model.

  16. Brominated Luciferins Are Versatile Bioluminescent Probes

    DOE PAGES

    Steinhardt, Rachel C.; Rathbun, Colin M.; Krull, Brandon T.; ...

    2016-12-08

    Here, we report a set of brominated luciferins for bioluminescence imaging. These regioisomeric scaffolds were accessed by using a common synthetic route. All analogues produced light with firefly luciferase, although varying levels of emission were observed. Differences in photon output were analyzed by computation and photophysical measurements. The brightest brominated luciferin was further evaluated in cell and animal models. At low doses, the analogue outperformed the native substrate in cells. The remaining luciferins, although weak emitters with firefly luciferase, were inherently capable of light production and thus potential substrates for orthogonal mutant enzymes.

  17. The Creditworthiness of Eastern Europe in the 1980s.

    DTIC Science & Technology

    1985-01-01

    Keith Crane, The Rand Corporation, 1700 Main ... (Contract MDA903-83-C-0148). These supply-side models capture the constraints on output imposed by balance of payments pressures. Poland: Six scenarios are formulated to analyze ... rational at the time. Other East European countries and many of the developing countries pursued similar policies with varying degrees of success, e.g. ...

  18. An Attempt to Measure the Traffic Impact of Airline Alliances

    NASA Technical Reports Server (NTRS)

    Iatrou, Kostas; Skourias, Nikolaos

    2005-01-01

    This paper analyzes the effects of airline alliances on the allied partners' output by comparing the traffic change observed between the pre- and post-alliance periods. First, a simple methodology based on passenger traffic modelling is developed, and then an empirical analysis is conducted using time series from four global strategic alliances (Wings, Star Alliance, oneworld and SkyTeam) and 124 alliance routes. The analysis concludes that, all other things being equal, strategic alliances lead to a 9.4% improvement in passenger volume on average.

  19. Research on a new wave energy absorption device

    NASA Astrophysics Data System (ADS)

    Lu, Zhongyue; Shang, Jianzhong; Luo, Zirong; Sun, Chongfei; Zhu, Yiming

    2018-01-01

    To reduce the impact of global warming and the energy crisis caused by the pollution from energy combustion, research on renewable and clean energy is becoming more and more important. This paper presents the design of a new wave energy absorption device and introduces its mechanical structure. The flow-tube model is analyzed, and the formulation of the proposed method is presented. To verify the principle of the wave-absorbing device, an experiment was carried out in a laboratory environment; the results can be applied to optimize the structural design for output power.

  20. Simulation study of communication link for Pioneer Saturn/Uranus atmospheric entry probe. [signal acquisition by candidate modem for radio link

    NASA Technical Reports Server (NTRS)

    Hinrichs, C. A.

    1974-01-01

    A digital simulation is presented for a candidate modem in a modeled atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the radio link conditions for an outer planets atmospheric entry probe. The results indicate that the signal acquisition characteristics and the channel error rate are acceptable for the system requirements of the radio link. The simulation also outputs data for calculating other error statistics and a quantized symbol stream from which error correction decoding can be analyzed.

  1. Mechanical characteristics of distension-evoked peristaltic contractions in the esophagus of systemic sclerosis patients.

    PubMed

    Gregersen, Hans; Villadsen, Gerda E; Liao, Donghua

    2011-12-01

    Systemic sclerosis (SS) patients with severe esophageal affection have impaired peristalsis. However, motor function evaluated in vivo by manometry and fluoroscopy does not provide detailed information about the individual contraction cycles. To apply, for the first time in gastrointestinal (GI) patients, a method and principles modified from cardiac research to study esophageal muscle behavior in SS patients. Muscle contraction cycles were analyzed using pressure-cross-sectional area (P-CSA) loops during distension pressure up to 5 kPa. The probe with bag and electrodes for CSA measurements was positioned 7 and 15 cm above the lower esophageal sphincter (LES) in eleven healthy volunteers and eleven SS patients. The P-CSA, the wall tension, Δtension (afterload tension - preload tension), contraction velocity, work output (area of the tension-CSA loops), and power output (preload tension × CSA rate) were analyzed. The P-CSA loops consisted of phases with relaxation and contraction behavior. The tension-stretch ratio loops in patients were shifted to the left at both distension sites, indicative of a stiffer wall in patients. Lower contraction amplitudes and smaller P-CSA loops were observed for the SS patients. The work output, power output, Δtension, and contraction velocity were lower in patients (P < 0.001). Association was found between disease duration and the work output, Δtension, and velocity at pressure steps higher than 3 kPa (P < 0.05). Distension-evoked esophageal contraction can be studied in vivo and analyzed with advanced methods. Increased esophageal stiffness and impaired muscle function that depended on disease duration were observed for SS patients. The analysis may be useful for characterization of other diseases affecting GI function.

  2. SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.

    PubMed

    Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S

    2012-06-01

    To analyze the Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system, we separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by the CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam, delivered by the double-scattering mode of a PROTEUS 235 proton therapy machine at NCC, has a width of 8 cm and a range of 13 g/cm^2. The gain, dose rate, and current of this beam are 50, 2 Gy/min, and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate as a function of integration time (0.1-1.0 s). Images at the shortest integration time (0.1 s) were acquired automatically and randomly, respectively. The Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit, and the Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The experimental results were compared with those of the simulation. We analyzed the Bragg-peak beams in the SOBP beam using the light output of a scintillation plate acquired by a CCD camera and compared them with Geant4 simulation results. We plan to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.

  3. Analysis of laser pumping and thermal effects based on element analysis

    NASA Astrophysics Data System (ADS)

    Cui, Li; Liu, Zhijia; Zhang, Yizhuo; Han, Juan

    2018-03-01

    Thermal effects are a bottleneck that limits the output of high-power, high-beam-quality lasers, and they become worse as the pump power increases. Thermal effects can be reduced through the choice of pumping scheme, laser-medium shape, cooling method, and other aspects. In this article, finite element analysis software is used to analyze and compare the thermal effects in Nd:Glass and Nd:YAG laser media. The causes of thermal effects and the factors that influence their distribution in the laser medium were analyzed, including the pump light source, the laser-medium shape, and the working mode. Nd:Glass is more suitable for low-repetition-rate, high-energy pulsed laser output, owing to its large size and broad linewidth, whereas Nd:YAG is more suitable for continuous-wave or high-repetition-rate laser output, owing to its higher thermal conductivity.

  4. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operations and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and by relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics that are used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions are discussed. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case, where the targeted output power is 140 W, while the input pumping power is 400 W. The 95% output coupler reflectivity showed good agreement with the slope efficiency, which is approximately 35%; this assures the robustness of the proposed model to accurately predict the design parameters of practical, high-power DPSSLs.

  5. Robust global identifiability theory using potentials--Application to compartmental models.

    PubMed

    Wongvanich, N; Hann, C E; Sirisena, H R

    2015-04-01

    This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  7. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance, in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small compared with the influence of errors associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes, as the drag prediction characteristics of balances of similar size and capacity can be objectively compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
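
    The propagation step described above can be illustrated generically: given the partial derivatives of axial and normal force with respect to the balance outputs, an assumed output variation, and the tunnel dynamic pressure, propagate the output scatter to a drag-coefficient precision bound. The sketch below is a standard root-sum-square propagation under assumed values, not the paper's exact method.

    # Toy root-sum-square propagation of balance-output scatter to a drag-coefficient
    # precision bound. Sensitivities, loads, and tunnel conditions are assumed values.
    import numpy as np

    def drag_coeff_precision(dAF_dR, dNF_dR, dR_uVV, alpha_deg, q_psf, S_ft2):
        """dAF_dR, dNF_dR: lbf per microV/V sensitivities for each balance output."""
        a = np.deg2rad(alpha_deg)
        # drag = AF*cos(alpha) + NF*sin(alpha); propagate per-output variations
        dD_dR = dAF_dR * np.cos(a) + dNF_dR * np.sin(a)
        dD = np.sqrt(np.sum((dD_dR * dR_uVV) ** 2))     # root-sum-square drag scatter, lbf
        return dD / (q_psf * S_ft2)                      # precision bound on CD

    # assumed 6-output balance sensitivities (lbf per microV/V) and 1.0 microV/V scatter
    dAF = np.array([8.0, 0.3, 0.2, 0.1, 0.1, 0.05])
    dNF = np.array([0.5, 40.0, 1.0, 0.5, 0.3, 0.2])
    print(drag_coeff_precision(dAF, dNF, dR_uVV=1.0, alpha_deg=2.0,
                               q_psf=400.0, S_ft2=4.0))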

  8. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.

  10. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
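
    The propagation approach described above can be sketched generically: resample empirical residuals for each model in the chain and push them through the sequence of models to build a distribution of predicted output. The model chain below is a trivial placeholder, not the actual PV performance models.

    # Generic sketch of propagating empirical residual distributions through a chain
    # of models by Monte Carlo sampling. The "models" here are trivial placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_residuals(residuals, n):
        """Resample observed residuals with replacement (empirical distribution)."""
        return rng.choice(residuals, size=n, replace=True)

    def propagate(ghi, resid_poa, resid_eff, resid_power, n=10_000):
        # placeholder model chain: POA irradiance -> effective irradiance -> DC power
        poa = 1.1 * ghi + sample_residuals(resid_poa, n)
        eff = 0.95 * poa + sample_residuals(resid_eff, n)
        power = 0.18 * eff + sample_residuals(resid_power, n)
        return power

    # assumed residuals (e.g., from comparing each model to measurements)
    r_poa, r_eff, r_pow = (rng.normal(0, s, 500) for s in (15.0, 10.0, 3.0))
    energy = propagate(ghi=800.0, resid_poa=r_poa, resid_eff=r_eff, resid_power=r_pow)
    print("median:", np.median(energy), "95% interval:",
          np.percentile(energy, [2.5, 97.5]))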

  11. Efficiency assessment of wastewater treatment plants: A data envelopment analysis approach integrating technical, economic, and environmental issues.

    PubMed

    Castellet, Lledó; Molinos-Senante, María

    2016-02-01

    The assessment of the efficiency of wastewater treatment plants (WWTPs) is essential to compare their performance and consequently to identify the best operational practices that can contribute to the reduction of operational costs. Previous studies have evaluated the efficiency of WWTPs using conventional data envelopment analysis (DEA) models. Most of these studies have considered the operational costs of the WWTPs as inputs, while the pollutants removed from wastewater are treated as outputs. However, they have ignored the fact that each pollutant removed by a WWTP involves a different environmental impact. To overcome this limitation, this paper evaluates for the first time the efficiency of a sample of WWTPs by applying the weighted slacks-based measure model. It is a non-radial DEA model which allows assigning weights to the inputs and outputs according to their importance. Thus, the assessment carried out integrates environmental issues with the traditional "techno-economic" efficiency assessment of WWTPs. Moreover, the potential economic savings for each cost item have been quantified at a plant level. It is illustrated that the WWTPs analyzed have significant room to save staff and energy costs. Several managerial implications to help WWTPs' operators make informed decisions were drawn from the methodology and empirical application carried out. Copyright © 2015 Elsevier Ltd. All rights reserved.
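
    As a hedged illustration of the general idea, the sketch below solves a weighted additive (slacks-based) DEA problem with scipy, assigning weights to input and output slacks; it is a simplified stand-in for the weighted slacks-based measure model used in the paper, and the cost and pollutant data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Simplified, weighted *additive* slacks DEA sketch (not the exact weighted SBM
# formulation of the paper): inputs are hypothetical cost items, outputs are
# hypothetical pollutant loads removed, and the weights are illustrative.
X = np.array([[3.0, 2.5], [4.0, 1.5], [2.0, 2.0], [5.0, 3.0]])   # n DMUs x m inputs
Y = np.array([[10.0, 5.0], [9.0, 6.0], [7.0, 4.0], [14.0, 9.0]]) # n DMUs x q outputs
w_in, w_out = np.array([0.6, 0.4]), np.array([0.7, 0.3])         # assumed weights

n, m = X.shape
q = Y.shape[1]

for o in range(n):
    # decision vector: [lambda (n), input slacks (m), output slacks (q)]
    c = np.concatenate([np.zeros(n), -w_in, -w_out])   # maximize weighted slacks
    A_eq = np.block([[X.T, np.eye(m), np.zeros((m, q))],
                     [Y.T, np.zeros((q, m)), -np.eye(q)]])
    b_eq = np.concatenate([X[o], Y[o]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + q))
    print(f"DMU {o}: weighted slack sum = {-res.fun:.3f} "
          f"({'efficient' if -res.fun < 1e-6 else 'inefficient'})")
```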

  12. Intensity and duration of activity bouts decreases in healthy children between 7 and 13 years of age: a new, higher resolution method to analyze StepWatch Activity Monitor data.

    PubMed

    Tulchin-Francis, Kirsten; Stevens, Wilshaw; Jeans, Kelly A

    2014-11-01

    Assessment of physical ambulatory activity using accelerometer-based devices has been reported in healthy individuals across a wide range of ages, as well as in multiple patient populations. Many researchers who utilize the StepWatch Activity Monitor (SAM) rely on the default settings for data collection and analysis. A comparison was made between the standard output from the SAM software and a novel method to evaluate all walking bouts using an Intensity-Duration-Volume (I-D-V) model in healthy children aged 7-13 years. A total of 105 children without impairment wore the SAM for a total of 1691 days. Statistically significant differences were seen between 7-, 8-, and 9-year-olds and 10-, 11-, and 12-year-olds using the I-D-V model that were not seen using the standard SAM software default output. The increased sensitivity of this technique could be critical for observing the effect of various interventions on patients who experience physical limitations. This new analytical model also allows researchers to monitor activity and exercise-type behavior in a way that coincides with exercise prescription by assessing the intensity, duration and volume of activity bouts.
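
    A minimal sketch of an intensity-duration-volume style summary of walking bouts from minute-by-minute step counts is shown below; the bout definition and activity threshold are illustrative assumptions rather than the published I-D-V algorithm.

```python
import numpy as np

# Hypothetical intensity-duration-volume summary of walking bouts from
# minute-by-minute step counts (StepWatch-like data).  The bout definition and
# threshold are illustrative assumptions, not the published I-D-V algorithm.

def bouts_idv(steps_per_min, active_threshold=10):
    """Return intensity, duration and volume per bout, where a bout is a run
    of consecutive minutes with steps >= active_threshold."""
    steps = np.asarray(steps_per_min)
    active = steps >= active_threshold
    bouts, start = [], None
    for i, flag in enumerate(np.append(active, False)):  # sentinel closes last bout
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            seg = steps[start:i]
            bouts.append({"duration_min": len(seg),
                          "intensity_steps_per_min": float(seg.mean()),
                          "volume_steps": int(seg.sum())})
            start = None
    return bouts

demo = [0, 0, 15, 40, 55, 30, 0, 0, 80, 95, 20, 0]
for b in bouts_idv(demo):
    print(b)
```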

  13. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

    The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  14. Demographic and Academic Factors Affecting Research Productivity at the University of KwaZulu-Natal

    ERIC Educational Resources Information Center

    North, D.; Zewotir, T.; Murray, M.

    2011-01-01

    Research output affects both the strength and funding of universities. Accordingly, university academic staff members are under pressure to be active and productive in research. Though all academics have research interests, not all produce research output that is accredited by the Department of Education (DOE). We analyzed the demographic and…

  15. Impact of device level faults in a digital avionic processor

    NASA Technical Reports Server (NTRS)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  16. Dynamic model inversion techniques for breath-by-breath measurement of carbon dioxide from low bandwidth sensors.

    PubMed

    Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D

    2009-01-01

    Respiratory CO(2) measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques to enable the use of inexpensive but slow CO(2) sensors for breath-by-breath tracking of CO(2) concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO(2) concentration from the slowly varying output. Experiments are designed to identify the model dynamics and extract relevant model parameters for a solid-state room-monitoring CO(2) sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO(2) analyzer and shown to effectively track variation in breath-by-breath CO(2) concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
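
    The sketch below shows one way such a model inversion can work, assuming a unity-gain second-order sensor model and simple smoothed numerical derivatives; the time constants and the simulated breath signal are invented for illustration and are not the identified parameters of the paper's sensor.

```python
import numpy as np

# Minimal model-inversion sketch for a slow sensor, assuming a unity-gain
# second-order model  tau1*tau2*y'' + (tau1+tau2)*y' + y = u.  The time
# constants and the moving-average smoothing are illustrative assumptions.

def invert_second_order(t, y, tau1=4.0, tau2=1.0, smooth=5):
    y = np.convolve(y, np.ones(smooth) / smooth, mode="same")  # suppress noise
    dy = np.gradient(y, t)
    d2y = np.gradient(dy, t)
    return tau1 * tau2 * d2y + (tau1 + tau2) * dy + y          # estimated input

# demo: a square-wave "breath" CO2 input blurred by two cascaded lags
t = np.linspace(0, 60, 1201)
u_true = 2.0 + 3.0 * (np.sin(2 * np.pi * t / 5.0) > 0)   # % CO2, hypothetical
y = np.zeros_like(t)
x, dt = 0.0, t[1] - t[0]
for k in range(1, len(t)):         # simulate the two cascaded first-order lags
    x += dt * (u_true[k] - x) / 4.0
    y[k] = y[k - 1] + dt * (x - y[k - 1]) / 1.0
u_hat = invert_second_order(t, y)
print(f"peak-to-peak: true {np.ptp(u_true):.2f}, sensor {np.ptp(y):.2f}, "
      f"reconstructed {np.ptp(u_hat):.2f}")
```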

  17. Integrated, multi-scale, spatial-temporal cell biology--A next step in the post genomic era.

    PubMed

    Horwitz, Rick

    2016-03-01

    New microscopic approaches, high-throughput imaging, and gene editing promise major new insights into cellular behaviors. When coupled with genomic and other 'omic information and "mined" for correlations and associations, a new breed of powerful and useful cellular models should emerge. These top-down, coarse-grained, and statistical models, in turn, can be used to form hypotheses, merging with the fine-grained, bottom-up mechanistic studies and models that are the backbone of cell biology. The goal of the Allen Institute for Cell Science is to develop the top-down approach by developing a high-throughput microscopy pipeline that is integrated with modeling, using gene-edited hiPS cell lines in various physiological and pathological contexts. The output of these experiments and models will be an "animated" cell, capable of integrating and analyzing image data generated from experiments and models. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A fast and low-cost genotyping method for hepatitis B virus based on pattern recognition in point-of-care settings

    PubMed Central

    Qiu, Xianbo; Song, Liuwei; Yang, Shuo; Guo, Meng; Yuan, Quan; Ge, Shengxiang; Min, Xiaoping; Xia, Ningshao

    2016-01-01

    A fast and low-cost method for HBV genotyping, especially for genotypes A, B, C and D, was developed and tested. A classifier was used to detect and analyze a one-step immunoassay lateral flow strip functionalized with genotype-specific monoclonal antibodies (mAbs) on multiple capture lines, in the form of pattern recognition for point-of-care (POC) diagnostics. The fluorescent signals from the capture lines and the background of the strip were collected via multiple optical channels in parallel. A digital HBV genotyping model, whose inputs are the fluorescent signals and whose outputs are a group of genotype-specific digital binary codes (0/1), was developed based on the HBV genotyping strategy. Meanwhile, a companion decoding table was established to cover all possible pairings between the states of a group of genotype-specific digital binary codes and the HBV genotyping results. A logical analyzing module was constructed to process the detected signals in parallel without program control, and its outputs were used to drive a set of LED indicators, which determine the HBV genotype. Compared to nucleic acid analysis of HBV, the developed method provides much faster HBV genotyping at significantly lower cost. PMID:27306485
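
    A minimal sketch of the decoding step is given below: each capture-line signal is thresholded into a binary code and the code is looked up in a decoding table. The threshold and the code-to-genotype mapping are invented placeholders, not the assay's calibration.

```python
# Hypothetical sketch of the pattern-recognition step: threshold the fluorescent
# signal on each genotype-specific capture line into a binary code and look the
# code up in a decoding table.  Threshold and mapping are invented placeholders.

THRESHOLD = 0.25  # normalized fluorescence above background, assumed

DECODING_TABLE = {   # (line_A, line_B, line_C, line_D) -> reported genotype
    (1, 0, 0, 0): "A",
    (0, 1, 0, 0): "B",
    (0, 0, 1, 0): "C",
    (0, 0, 0, 1): "D",
    (0, 1, 1, 0): "B+C mixed",
}

def genotype_from_signals(signals, background):
    code = tuple(int(s - background > THRESHOLD) for s in signals)
    return DECODING_TABLE.get(code, "indeterminate")

print(genotype_from_signals([0.05, 0.62, 0.04, 0.03], background=0.02))  # -> B
```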

  19. Design of an improved RCD buffer circuit for full bridge circuit

    NASA Astrophysics Data System (ADS)

    Yang, Wenyan; Wei, Xueye; Du, Yongbo; Hu, Liang; Zhang, Liwei; Zhang, Ou

    2017-05-01

    In the full-bridge inverter circuit, when the switch tube is suddenly opened or closed, the inductor current changes rapidly. Because of the parasitic inductance of the main circuit, a surge voltage is generated between the drain and source of the switch tube, which affects the switch and the output voltage. To absorb this surge voltage, an improved RCD buffer (snubber) circuit is proposed in this paper. The peak energy is absorbed by the buffer capacitor of the circuit; part of the energy is fed back to the power supply, another part is released through the resistor in the form of heat, and the circuit can absorb the voltage spikes. This paper analyzes the operation of the improved RCD snubber circuit and, according to the specific parameters of the main circuit, gives a reasonable formula for calculating the resistance and capacitance. A simulation model built in Multisim compares the switch-tube voltage waveform and the output waveform of the circuit without a snubber and with the improved RCD snubber circuit. The comparison shows that the improved buffer circuit can absorb the surge voltage. Finally, experiments are demonstrated to validate the correctness of the RC formula and the improved RCD snubber circuit.
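
    For orientation, the sketch below applies a textbook first-pass energy-balance sizing of an RCD snubber; it is not the specific RC formula derived in the paper, and all component values are assumed.

```python
# First-pass RCD snubber sizing from a textbook energy balance, as an
# illustrative assumption rather than the paper's derived formula.
Lp = 120e-9      # parasitic loop inductance (H), assumed
Ip = 20.0        # switched current at turn-off (A), assumed
dV = 50.0        # allowed voltage overshoot above the bus (V), assumed
fsw = 50e3       # switching frequency (Hz), assumed
t_on_min = 2e-6  # minimum switch on-time (s), assumed

# energy stored in the parasitic inductance is transferred to the snubber cap:
# 0.5*Lp*Ip^2 = 0.5*Cs*dV^2  ->  Cs = Lp*Ip^2 / dV^2
Cs = Lp * Ip**2 / dV**2
# the cap must discharge through Rs within the minimum on-time (~3 time constants)
Rs_max = t_on_min / (3.0 * Cs)
# the resistor dissipates roughly the clamped energy every switching cycle
P_R = 0.5 * Lp * Ip**2 * fsw

print(f"Cs ≈ {Cs*1e9:.1f} nF, Rs ≤ {Rs_max:.1f} Ω, P_R ≈ {P_R:.2f} W")
```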

  20. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, which typically produces two types of outputs: one economically desirable and one environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. There are numerous approaches in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider the output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach is suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were later compared to the results obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It is also found that the average efficiency value of all farmers for the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results are consistent with the hypothesis, since farmers in the optimistic scenario are in the best production situation, compared to the pessimistic scenario in which they operate in the worst production situation. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  1. Trajectory and Concentration PM10 on Forest and Vegetation Peat-Fire HYSPLIT Model Outputs and Observations (Period: September - October 2015)

    NASA Astrophysics Data System (ADS)

    Khairullah; Effendy, S.; Makmur, E. E. S.

    2017-03-01

    Forest and vegetation peat-fire is one of the main sources of air pollution in Kalimantan, predominantly during the dry period. In 2015, forest and vegetation fires in Central Kalimantan and South Kalimantan emitted large quantities of smoke, leading to poor air quality. Haze is a phenomenon characterized by high concentrations of particulate matter. The objective of this research is to analyze the trajectory and dispersion of particulate matter concentrations, PM10, in Banjarbaru and Palangka Raya. The dynamics of PM10 (particulate matter 10 µm or less in size) from vegetation peat-fire are analyzed using GDAS (Global Data Assimilation System) output with a horizontal resolution of 1°, which corresponds to about 100 km × 100 km, as model input. Climate conditions in the period September to October 2015 generally corresponded to the dry season of an El Niño year. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used to investigate the concentration and long-range movement of this pollutant from the source to the receptor area. We used time-series data on PM10 readings obtained from two stations, Banjarbaru (South Kalimantan) and Palangka Raya (Central Kalimantan), belonging to the Meteorology Climatology and Geophysics Agency (BMKG). We also used weather parameters such as wind speed and direction. Trajectory runs were initialized from hotspot information from the MoF (Sipongi output programs) and HYSPLIT. We compared the concentrations obtained from PM10 observations and their trends. The dispersion pattern, as simulated by HYSPLIT, showed that the distribution of PM10 was greatly influenced by wind direction and topography. There is a large difference between the PM10 concentrations at Palangka Raya and Banjarbaru.

  2. Effects of climate change on hydrology and hydraulics of Qu River Basin, East China.

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhu, Q.; Zhao, Z.; Pan, S.; Xu, Y. P.

    2015-12-01

    The impacts of climate change on regional hydrological extreme events have attracted much attention in recent years. This paper aims to provide a general overview of changes in future runoff and water levels in the Qu River Basin, upper reaches of the Qiantang River, East China, by combining future climate scenarios, a hydrological model and a 1D hydraulic model. The outputs of four GCMs, BCC, BNU, CanESM and CSIRO, under two scenarios, RCP4.5 and RCP8.5, for 2021-2050 are chosen to represent future climate change projections. The LARS-WG statistical downscaling method is used to downscale the coarse GCM outputs and generate 50 years of synthetic precipitation and maximum and minimum temperatures to drive the GR4J hydrological model and the 1D hydraulic model for the baseline period 1971-2000 and the future period 2021-2050. Finally, the POT (peaks over threshold) method is applied to analyze the change of extreme events in the study area. The results show that design runoffs and water levels all indicate an increasing trend in the future period for the Changshangang River, Jiangshangang River and Qu River in most cases, especially for small return periods (≤ 20), and for the Qu River the increase becomes larger, which suggests that the risk of flooding will probably become greater and appropriate adaptation measures need to be taken.
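
    A minimal peaks-over-threshold (POT) sketch is shown below, using a simple gap-based declustering rule on a synthetic runoff series; the threshold choice and minimum separation are illustrative assumptions.

```python
import numpy as np

# Minimal peaks-over-threshold sketch with gap-based declustering: keep the
# maximum of each exceedance cluster, where clusters are separated by more
# than min_separation time steps.  Threshold and separation are assumptions.

def pot_peaks(series, threshold, min_separation=5):
    series = np.asarray(series, dtype=float)
    exceed = np.where(series > threshold)[0]
    peaks, cluster = [], []
    for idx in exceed:
        if cluster and idx - cluster[-1] > min_separation:
            peaks.append(max(cluster, key=lambda i: series[i]))
            cluster = []
        cluster.append(idx)
    if cluster:
        peaks.append(max(cluster, key=lambda i: series[i]))
    return peaks, series[peaks]

rng = np.random.default_rng(1)
daily_runoff = rng.gamma(shape=2.0, scale=30.0, size=365)   # synthetic series
idx, magnitudes = pot_peaks(daily_runoff, threshold=np.percentile(daily_runoff, 95))
print(len(idx), "independent exceedances; largest =", magnitudes.max().round(1))
```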

  3. Building Simulation Modelers are we big-data ready?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight from. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated, and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than to re-run a set of simulations.

  4. The Vulnerability, Impacts, Adaptation and Climate Services Advisory Board (VIACS AB V1.0) Contribution to CMIP6

    NASA Technical Reports Server (NTRS)

    Ruane, Alex C.; Teichmann, Claas; Arnell, Nigel W.; Carter, Timothy R.; Ebi, Kristie L.; Frieler, Katja; Goodess, Clare M.; Hewitson, Bruce; Horton, Radley; Kovats, R. Sari

    2016-01-01

    This paper describes the motivation for the creation of the Vulnerability, Impacts, Adaptation and Climate Services (VIACS) Advisory Board for the Sixth Phase of the Coupled Model Intercomparison Project (CMIP6), its initial activities, and its plans to serve as a bridge between climate change applications experts and climate modelers. The climate change application community comprises researchers and other specialists who use climate information (alongside socioeconomic and other environmental information) to analyze vulnerability, impacts, and adaptation of natural systems and society in relation to past, ongoing, and projected future climate change. Much of this activity is directed toward the co-development of information needed by decisionmakers for managing projected risks. CMIP6 provides a unique opportunity to facilitate a two-way dialog between climate modelers and VIACS experts who are looking to apply CMIP6 results for a wide array of research and climate services objectives. The VIACS Advisory Board convenes leaders of major impact sectors, international programs, and climate services to solicit community feedback that increases the applications relevance of the CMIP6-Endorsed Model Intercomparison Projects (MIPs). As an illustration of its potential, the VIACS community provided CMIP6 leadership with a list of prioritized climate model variables and MIP experiments of the greatest interest to the climate model applications community, indicating the applicability and societal relevance of climate model simulation outputs. The VIACS Advisory Board also recommended an impacts version of Obs4MIPs (observational datasets) and indicated user needs for the gridding and processing of model output.

  5. The Vulnerability, Impacts, Adaptation and Climate Services Advisory Board (VIACS AB v1.0) contribution to CMIP6

    NASA Astrophysics Data System (ADS)

    Ruane, Alex C.; Teichmann, Claas; Arnell, Nigel W.; Carter, Timothy R.; Ebi, Kristie L.; Frieler, Katja; Goodess, Clare M.; Hewitson, Bruce; Horton, Radley; Sari Kovats, R.; Lotze, Heike K.; Mearns, Linda O.; Navarra, Antonio; Ojima, Dennis S.; Riahi, Keywan; Rosenzweig, Cynthia; Themessl, Matthias; Vincent, Katharine

    2016-09-01

    This paper describes the motivation for the creation of the Vulnerability, Impacts, Adaptation and Climate Services (VIACS) Advisory Board for the Sixth Phase of the Coupled Model Intercomparison Project (CMIP6), its initial activities, and its plans to serve as a bridge between climate change applications experts and climate modelers. The climate change application community comprises researchers and other specialists who use climate information (alongside socioeconomic and other environmental information) to analyze vulnerability, impacts, and adaptation of natural systems and society in relation to past, ongoing, and projected future climate change. Much of this activity is directed toward the co-development of information needed by decision-makers for managing projected risks. CMIP6 provides a unique opportunity to facilitate a two-way dialog between climate modelers and VIACS experts who are looking to apply CMIP6 results for a wide array of research and climate services objectives. The VIACS Advisory Board convenes leaders of major impact sectors, international programs, and climate services to solicit community feedback that increases the applications relevance of the CMIP6-Endorsed Model Intercomparison Projects (MIPs). As an illustration of its potential, the VIACS community provided CMIP6 leadership with a list of prioritized climate model variables and MIP experiments of the greatest interest to the climate model applications community, indicating the applicability and societal relevance of climate model simulation outputs. The VIACS Advisory Board also recommended an impacts version of Obs4MIPs and indicated user needs for the gridding and processing of model output.

  6. Topic models: A novel method for modeling couple and family text data

    PubMed Central

    Atkins, David C.; Rubin, Tim N.; Steyvers, Mark; Doeden, Michelle A.; Baucom, Brian R.; Christensen, Andrew

    2012-01-01

    Couple and family researchers often collect open-ended linguistic data – either through free response questionnaire items or transcripts of interviews or therapy sessions. Because participants' responses are not forced into a set number of categories, text-based data can be very rich and revealing of psychological processes. At the same time, such data are highly unstructured and challenging to analyze. Within family psychology, analyzing text data typically means applying a coding system, which can quantify text data but also has several limitations, including the time needed for coding, difficulties with inter-rater reliability, and defining a priori what should be coded. The current article presents an alternative method for analyzing text data called topic models (Steyvers & Griffiths, 2006), which has not yet been applied within couple and family psychology. Topic models have similarities with factor analysis and cluster analysis in that topic models identify underlying clusters of words with semantic similarities (i.e., the “topics”). In the present article, a non-technical introduction to topic models is provided, highlighting how these models can be used for text exploration and indexing (e.g., quickly locating text passages that share semantic meaning) and how output from topic models can be used to predict behavioral codes or other types of outcomes. Throughout the article a collection of transcripts from a large couple therapy trial (Christensen et al., 2004) is used as example data to highlight potential applications. Practical resources for learning more about topic models and how to apply them are discussed. PMID:22888778
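
    For readers unfamiliar with topic models, the short sketch below fits a latent Dirichlet allocation model to a few toy sentences with scikit-learn; the corpus, topic count, and toolkit are placeholders rather than the article's actual analysis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A minimal topic-model sketch on toy "transcript" snippets.  The corpus and
# number of topics are placeholders; the article analyzes full therapy-session
# transcripts with a different toolkit.
docs = [
    "we argued about money and the bills again",
    "the kids school schedule and homework routine",
    "budget savings and paying the credit card",
    "parenting bedtime rules for the children",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # per-document topic proportions

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print("document-topic proportions:\n", doc_topics.round(2))
```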

  7. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

    comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs and studying the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree.
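
    The comparison idea can be sketched as follows: merge traceroute outputs into a single graph and compute basic graph measurements. The router labels below are fabricated for illustration.

```python
import networkx as nx

# Sketch: take the union of hop-to-hop edges from several traceroute outputs
# and compute basic graph-theoretic measurements.  Paths are fabricated labels.
traceroutes = [
    ["src", "r1", "r2", "r5", "dst1"],
    ["src", "r1", "r3", "r5", "dst2"],
    ["src", "r4", "r3", "dst2"],
]

G = nx.Graph()
for path in traceroutes:
    nx.add_path(G, path)          # union of hop-to-hop edges

n, m = G.number_of_nodes(), G.number_of_edges()
avg_degree = 2.0 * m / n
print(f"vertices={n}, edges={m}, average degree={avg_degree:.2f}")
```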

  8. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    NASA Astrophysics Data System (ADS)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on the customer reviews dataset.
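
    A rough PyTorch sketch of the described architecture is given below; the layer sizes, pooling choices, and exact wiring of the attention and auxiliary-label features are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Rough sketch: two independent Bi-GRU encoders (words and part-of-speech tags),
# attention driven by a softmax over the POS representation, and auxiliary-label
# probabilities concatenated with the hidden features before the multi-label
# output.  All sizes and the exact wiring are assumptions.

class HierBiGRU(nn.Module):
    def __init__(self, vocab=5000, pos_vocab=50, emb=64, hid=64,
                 n_aux=4, n_labels=10):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb)
        self.pos_emb = nn.Embedding(pos_vocab, emb)
        self.word_gru = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.pos_gru = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid, 1)
        self.out = nn.Linear(4 * hid + n_aux, n_labels)

    def forward(self, words, pos_tags, aux_probs):
        w_out, _ = self.word_gru(self.word_emb(words))      # (B, T, 2*hid)
        p_out, _ = self.pos_gru(self.pos_emb(pos_tags))     # (B, T, 2*hid)
        # attention weights from a softmax over the POS representation
        weights = torch.softmax(self.attn(p_out).squeeze(-1), dim=1)  # (B, T)
        pos_ctx = (weights.unsqueeze(-1) * p_out).sum(dim=1)          # (B, 2*hid)
        sent = w_out.mean(dim=1)                                      # (B, 2*hid)
        feats = torch.cat([sent, pos_ctx, aux_probs], dim=-1)
        return torch.sigmoid(self.out(feats))               # multi-label scores

model = HierBiGRU()
scores = model(torch.randint(0, 5000, (2, 12)),
               torch.randint(0, 50, (2, 12)),
               torch.rand(2, 4))
print(scores.shape)   # torch.Size([2, 10])
```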

  9. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.

  10. A comparison of methods for assessing power output in non-uniform onshore wind farms

    DOE PAGES

    Staid, Andrea; VerHulst, Claire; Guikema, Seth D.

    2017-10-02

    Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. With given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions. Most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform equally as well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in 2 onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. In conclusion, we show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.
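
    For reference, the sketch below evaluates the classic Jensen (Park) wake-decay formula for a single upstream turbine; the thrust coefficient, wake-decay constant, and geometry are assumed values.

```python
import numpy as np

# Classic Jensen (Park) wake-decay sketch for a single upstream turbine: the
# velocity deficit decays as the wake expands linearly with distance.  The
# thrust coefficient, wake-decay constant, and geometry are assumed values.

def jensen_deficit(x, D=100.0, Ct=0.8, k=0.075):
    """Fractional velocity deficit a distance x downstream of the rotor
    (x > 0), for rotor diameter D, thrust coefficient Ct, decay constant k."""
    return (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x / D) ** 2

u0 = 8.0                           # free-stream wind speed (m/s), assumed
for x in (200.0, 500.0, 1000.0):   # downstream distances (m)
    u = u0 * (1.0 - jensen_deficit(x))
    print(f"x = {x:5.0f} m  ->  waked wind speed ≈ {u:.2f} m/s")
```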

  11. A comparison of methods for assessing power output in non-uniform onshore wind farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staid, Andrea; VerHulst, Claire; Guikema, Seth D.

    Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. With given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions. Most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform equally as well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in 2 onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. In conclusion, we show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.

  12. Active disturbance rejection control based robust output feedback autopilot design for airbreathing hypersonic vehicles.

    PubMed

    Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong

    2018-03-01

    Since the motion-control plant (y^(n) = f(·) + d) was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when the method was first proposed, the integral-chain system subject to matched disturbances is often regarded as the canonical form, and is even misconstrued as the only form, to which ADRC is applicable. In this paper, a systematic approach is first presented to apply ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised based on it. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that classical or improved linear/nonlinear ADRC controllers can be designed directly for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than being omitted as in previous research, which greatly enhances the controller's robustness against external disturbances. For the autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances, using only measurable output information. Bounded-input bounded-output (BIBO) stability is analyzed for the closed-loop system. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with some prominent and representative methods are carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
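
    The sketch below shows a minimal linear ADRC loop (extended state observer plus PD-style control law) on a toy second-order plant; the plant, gains, and bandwidths are illustrative assumptions and not the AHV autopilot of the paper.

```python
import numpy as np

# Minimal linear ADRC sketch for a second-order plant y'' = f(y, y', t) + b0*u:
# a third-order extended state observer (ESO) estimates y, y' and the "total
# disturbance" f, which the control law then cancels.  Gains are illustrative.
dt, T = 0.001, 5.0
wo, wc, b0 = 40.0, 8.0, 1.0                    # observer/controller bandwidths
beta1, beta2, beta3 = 3 * wo, 3 * wo**2, wo**3 # ESO gains (bandwidth tuning)
kp, kd = wc**2, 2 * wc                         # PD gains on the ESO states

y = yd = 0.0                 # plant state (position, velocity)
z1 = z2 = z3 = 0.0           # ESO states
r = 1.0                      # step reference
for k in range(int(T / dt)):
    u = (kp * (r - z1) - kd * z2 - z3) / b0    # disturbance-cancelling control
    e = y - z1                                 # ESO update on estimation error
    z1 += dt * (z2 + beta1 * e)
    z2 += dt * (z3 + beta2 * e + b0 * u)
    z3 += dt * (beta3 * e)
    # "true" plant with an unknown disturbance the controller never sees
    f = -2.0 * yd - 5.0 * y + 2.0 * np.sin(2 * np.pi * k * dt)
    ydd = f + b0 * u
    y, yd = y + dt * yd, yd + dt * ydd
print(f"output after {T:.0f} s: y = {y:.3f} (reference {r})")
```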

  13. Capacity withholding in wholesale electricity markets: The experience in England and Wales

    NASA Astrophysics Data System (ADS)

    Quinn, James Arnold

    This thesis examines the incentives wholesale electricity generators face to withhold generating capacity from centralized electricity spot markets. The first chapter includes a brief history of electricity industry regulation in England and Wales and in the United States, including a description of key institutional features of England and Wales' restructured electricity market. The first chapter also includes a review of the literature on both bid price manipulation and capacity bid manipulation in centralized electricity markets. The second chapter details a theoretical model of wholesale generator behavior in a single price electricity market. A duopoly model is specified under the assumption that demand is non-stochastic. This model assumes that duopoly generators offer to sell electricity at their marginal cost, but can withhold a continuous segment of their capacity from the market. The Nash equilibrium withholding strategy of this model involves each duopoly generator withholding so that it produces the Cournot equilibrium output. A monopoly model along the lines of the duopoly model is specified and simulated under the assumption that demand is stochastic. The optimal strategy depends on the degree of demand uncertainty. When there is a moderate degree of demand uncertainty, the optimal withholding strategy involves production inefficiencies. When there is a high degree of demand uncertainty, the optimal monopoly quantity is greater than the optimal output level when demand is non-stochastic. The third chapter contains an empirical examination of the behavior of generators in the wholesale electricity market in England and Wales in the early 1990's. The wholesale market in England and Wales is analyzed because the industry structure in the early 1990's created a natural experiment, which is described in this chapter, whereby one of the two dominant generators had no incentive to behave non-competitively. This chapter develops a classification methodology consistent with the equilibrium identified in the second chapter. The availability of generating units owned by the two dominant generators is analyzed based on this classification system. This analysis includes the use of sample statistics as well as estimates from a dynamic random effects probit model. The analysis suggests a minimal degree of capacity withholding.

  14. SimBAL: A Spectral Synthesis Approach to Analyzing Broad Absorption Line Quasar Spectra

    NASA Astrophysics Data System (ADS)

    Terndrup, Donald M.; Leighly, Karen; Gallagher, Sarah; Richards, Gordon T.

    2017-01-01

    Broad Absorption Line quasars (BALQSOs) show blueshifted absorption lines in their rest-UV spectra, indicating powerful winds emerging from the central engine. These winds are an essential part of quasars: they can carry away angular momentum and thus facilitate accretion through a disk, they can distribute chemically-enriched gas through the intergalactic medium, and they may inject kinetic energy into the host galaxy, influencing its evolution. The traditional method of analyzing BALQSO spectra involves measuring myriad absorption lines, computing the inferred ionic column densities in each feature, and comparing with the output of photoionization models. This method is inefficient and does not handle line blending well. We introduce SimBAL, a spectral synthesis fitting method for BALQSOs, which compares synthetic spectra created from photoionization model results with continuum-normalized observed spectra using Bayesian model calibration. We find that we can obtain an excellent fit to the UV to near-IR spectrum of the low-redshift BALQSO SDSS J0850+4451, including lines from diverse ionization states such as PV, CIII*, SIII, Lyalpha, NV, SiIV, CIV, MgII, and HeI*.

  15. [Discrimination of varieties of brake fluid using visual-near infrared spectra].

    PubMed

    Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong

    2008-06-01

    A new method was developed to rapidly discriminate brands of brake fluid by means of visible-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD Company, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot was drawn based on the first and second principal components, and the plot indicated that the clustering characteristics of the different brake fluids are distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build the discrimination model by the stepwise discriminant analysis method. Two hundred twenty-five randomly selected samples were used to create the model, and the remaining 75 samples to verify it. The result showed that the discrimination rate was 94.67%, indicating that the method proposed in this paper has good performance in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
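
    A simplified version of the PCA-plus-discriminant-analysis pipeline is sketched below on synthetic "spectra"; the data are random placeholders and plain LDA stands in for the stepwise discriminant analysis used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Sketch of the PCA + discriminant-analysis pipeline on synthetic "spectra".
# The data are random placeholders; the study uses Vis-NIR spectra of five
# brake-fluid brands and stepwise discriminant analysis rather than plain LDA.
rng = np.random.default_rng(0)
n_brands, per_brand, n_wavelengths = 5, 60, 400
X = np.vstack([rng.normal(loc=b * 0.05, scale=0.02, size=(per_brand, n_wavelengths))
               for b in range(n_brands)])
y = np.repeat(np.arange(n_brands), per_brand)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=225, stratify=y, random_state=0)

pca = PCA(n_components=6).fit(X_train)           # first six principal components
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

clf = LinearDiscriminantAnalysis().fit(Z_train, y_train)
print(f"discrimination rate on held-out samples: {clf.score(Z_test, y_test):.2%}")
```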

  16. A computer program for analyzing the energy consumption of automatically controlled lighting systems

    NASA Astrophysics Data System (ADS)

    1982-01-01

    A computer code to predict the performance of controlled lighting systems with respect to their energy saving capabilities is presented. The computer program provides a mathematical model from which comparisons of control schemes can be made on an economic basis only. The program does not calculate daylighting, but uses daylighting values as input. The program can analyze any of three power input versus light output relationships, continuous dimming with a linear response, continuous dimming with a nonlinear response, or discrete stepped response. Any of these options can be used with or without daylighting, making six distinct modes of control system operation. These relationships are described in detail. The major components of the program are discussed and examples are included to explain how to run the program.

  17. Blazed Gratings Recorded in Absorbent Photopolymers.

    PubMed

    Fernández, Roberto; Gallego, Sergi; Márquez, Andrés; Navarro-Fuster, Víctor; Beléndez, Augusto

    2016-03-15

    Phase diffractive optical elements, which have many interesting applications, are usually fabricated using a photoresist. In this paper, they were made using a hybrid optic-digital system and a photopolymer as the recording medium. We analyzed the characteristics of the input and recording light and then simulated the generation of blazed gratings with different spatial periods in different types of photopolymers using a diffusion model. Finally, we analyzed the output and the diffraction efficiencies of the 0 and 1st orders so as to compare the simulated values with those measured experimentally. We evaluated the effects of index matching in a standard PVA/AA photopolymer and in a variation of Biophotopol, a more biocompatible photopolymer. Diffraction efficiencies near 70%, for a wavelength of 633 nm, were achieved for periods longer than 300 µm in these kinds of materials.

  18. Spanish scientific output on Helicobacter pylori. A study through Medline.

    PubMed

    Trapero-Marugán, M; Gisbert, J P; Pajares, J M

    2006-04-01

    Objective: To analyze the scientific output of Spanish hospitals in relation to Helicobacter pylori infection. Methods: Papers collected from the Medline database between January 1988 and December 2003 were selected. Our search strategy was: "Helicobacter pylori" [MeSH] AND ((Spain [AD] OR Espana [AD] OR Spanien [AD] OR Espagne [AD] OR Espanha [AD]) OR (Spanish [LA]) OR Spain). The following were analyzed: geographic area, Spanish or foreign publication, topic, and year of publication. Output and impact bibliometric markers were evaluated. Results: In all, 691 papers were identified, of which 241 were excluded. The number of papers went from 2 in 1988 to 47 in 2002 and 13 in 2003. There were more reports in Spanish versus foreign journals (58 vs. 42%). In the first 5 years the areas with greater output were associated with diagnosis and microbiology (33 and 20%), whereas therapy was the predominant subject during the last 5 years (27%). Original papers were the most common publication type (69%). Hospitals with the highest output included La Princesa (24%) and Ramón y Cajal (17.6%) in Madrid, and Parc Taulí in Barcelona (6.4%). The mean impact factor progressively increased from 1.826 in 1988 to 2.142 in 2002 and 2.493 in 2003. Conclusions: The production and impact of documents published by Spanish scientists regarding H. pylori infection increased considerably during the past two decades.

  19. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing the potential of wind energy sites, but the accuracy of their projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at the heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23-March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of the wind field images, time-series analysis, and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speeds and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and direction were also plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles in forecasting winds at the heights of modern turbine rotors, so that in the future the model can be improved for the industry.
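
    The verification statistics mentioned above can be computed as in the sketch below, once model values have been interpolated to the observation times and heights; the two series are synthetic stand-ins for HRDL and WRF-Chem winds.

```python
import numpy as np

# Sketch of the point-to-point verification statistics described above, applied
# after model values are interpolated to the observation times/heights.  The
# two series below are synthetic stand-ins for HRDL and WRF-Chem winds.

def verify(obs, mod):
    obs, mod = np.asarray(obs), np.asarray(mod)
    bias = np.mean(mod - obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    r = np.corrcoef(obs, mod)[0, 1]
    r2 = 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"bias": bias, "rmse": rmse, "corr": r, "r2": r2}

t = np.linspace(0, 24, 97)                      # hours
obs = 8 + 3 * np.sin(2 * np.pi * t / 24)        # "observed" rotor-height wind
mod = np.interp(t, t[::4], obs[::4]) + np.random.default_rng(0).normal(0, 0.5, t.size)
print({k: round(v, 3) for k, v in verify(obs, mod).items()})
```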

  20. A Comparison of Model Short-Range Forecasts and the ARM Microbase Data Fourth Quarter ARM Science Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hnilo, J.

    2006-09-19

    For the fourth quarter ARM metric we will make use of new liquid water data that has become available, called the “Microbase” value-added product (referred to as OBS within the text), at three sites: the North Slope of Alaska (NSA), Tropical West Pacific (TWP) and the Southern Great Plains (SGP), and compare these observations to model forecast data. Two time periods will be analyzed: March 2000 for the SGP and October 2004 for both TWP and NSA. The Microbase data have been averaged to 35 pressure levels (e.g., from 1000 hPa to 100 hPa at 25 hPa increments) and time-averaged to 3-hourly data for direct comparison to our model output.

  1. Nonlinear dynamics of the magnetosphere and space weather

    NASA Technical Reports Server (NTRS)

    Sharma, A. Surjalal

    1996-01-01

    The solar wind-magnetosphere system exhibits coherence on the global scale, and such behavior can arise from nonlinearity in the dynamics. The observational time series data were used together with phase space reconstruction techniques to analyze the magnetospheric dynamics. Analysis of the solar wind, auroral electrojet and Dst indices showed low dimensionality of the dynamics, and accurate predictions can be made with an input/output model. The predictability of the magnetosphere in spite of the apparent complexity arises from its dynamical synchronism with the solar wind. The electrodynamic coupling between different regions of the magnetosphere yields its coherent, low-dimensional behavior. The data from multiple satellites and ground stations can be used to develop a spatio-temporal model that identifies the coupling between different regions. These nonlinear dynamical models provide space weather forecasting capabilities.

  2. Characterization and Modeling of Nano-organic Thin Film Phototransistors Based on 6,13(Triisopropylsilylethynyl)-Pentacene: Photovoltaic Effect

    NASA Astrophysics Data System (ADS)

    Jouili, A.; Mansouri, S.; Al-Ghamdi, Ahmed A.; El Mir, L.; Farooq, W. A.; Yakuphanoglu, F.

    2017-04-01

    Organic thin film transistors based on 6,13(triisopropylsilylethynyl)-pentacene (TIPS-pentacene) with various channel widths and thicknesses of the active layer (300 nm and 135 nm) were photo-characterized. The photoresponse behavior and the gate field dependence of the charge transport were analyzed in detail. The surface properties of TIPS-pentacene deposited on silicon dioxide substrate were investigated using an atomic force microscope. We confirm that the threshold voltage values of the TIPS-pentacene transistor depend on the intensity of white light illumination. With the multiple trapping and release model, we have developed an analytical model that was applied to reproduce the experimental output characteristics of organic thin film transistors based on TIPS-pentacene under dark and under light illumination.

  3. ICAN/PART: Particulate composite analyzer, user's manual and verification studies

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.

    1996-01-01

    A methodology for predicting the equivalent properties and constituent microstresses of particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code developed to predict the equivalent properties and microstresses of fiber reinforced polymer matrix composites to form a new computer code, ICAN/PART. Details of the flowchart, input and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within bounds established by the finite element and boundary element results. Furthermore, constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.

  4. Mode transition coordinated control for a compound power-split hybrid car

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Zhao, Zhiguo; Zhang, Tong; Li, Mengna

    2017-03-01

    With a compound power-split transmission directly connected to the engine in hybrid cars, dramatic fluctuations in engine output torque result in noticeable jerks when the car is in mode transition from electric drive mode to hybrid drive mode. This study designed a mode transition coordinated control strategy, and verified that strategy's effectiveness with both simulations and experiments. Firstly, the mode transition process was analyzed, and ride comfort issues during the mode transition process were demonstrated. Secondly, engine ripple torque was modeled using the measured cylinder pumping pressure when the engine was not in operation. The complete dynamic plant model of the power-split hybrid car was deduced, and its effectiveness was validated by a comparison of experimental and simulation results. Thirdly, a coordinated control strategy was designed to determine the desired engine torque, motor torque, and the moment of fuel injection. Active damping control with two degrees of freedom, based on reference output shaft speed estimation, was designed to mitigate driveline speed oscillations. Carrier torque estimation based on transmission kinematics and dynamics was used to suppress torque disturbance during engine cranking. The simulation and experimental results indicate that the proposed strategy effectively suppressed vehicle jerks and improved ride comfort during mode transition.

  5. Multiregional input-output model for China's farm land and water use.

    PubMed

    Guo, Shan; Shen, Geoffrey Qiping

    2015-01-06

    Land and water are the two main drivers of agricultural production. Pressure on farm land and water resources is increasing in China due to rising food demand. Domestic trade affects China's regional farm land and water use by distributing resources associated with the production of goods and services. This study constructs a multiregional input-output model to simultaneously analyze China's farm land and water uses embodied in consumption and interregional trade. Results show a great similarity for both China's farm land and water endowments. Shandong, Henan, Guangdong, and Yunnan are the most important drivers of farm land and water consumption in China, even though they have relatively few land and water resource endowments. Significant net transfers of embodied farm land and water flows are identified from the central and western areas to the eastern area via interregional trade. Heilongjiang is the largest farm land and water supplier, in contrast to Shanghai as the largest receiver. The results help policy makers to comprehensively understand embodied farm land and water flows in a complex economy network. Improving resource utilization efficiency and reshaping the embodied resource trade nexus should be addressed by considering the transfer of regional responsibilities.
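
    A minimal Leontief input-output sketch of embodied-resource accounting is shown below; the three-sector coefficients, land intensities, and final demand are toy values, not China's multiregional tables.

```python
import numpy as np

# Minimal Leontief input-output sketch of "embodied" resource accounting:
# total land use driven by final demand is  e = d (I - A)^(-1) y, where d is
# the direct land intensity per unit output.  The numbers are toy values.
A = np.array([[0.10, 0.05, 0.02],    # inter-sector technical coefficients
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
d = np.array([0.50, 0.05, 0.01])     # direct land use per unit output, assumed
y = np.array([100.0, 200.0, 150.0])  # final demand by sector, assumed

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
x = L @ y                            # total output needed to satisfy demand
embodied = d @ L * y                 # land embodied in each sector's final demand
print("total output:", x.round(1))
print("embodied land by final-demand sector:", embodied.round(1),
      "| total:", embodied.sum().round(1))
```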

  6. Drifts and Environmental Disturbances in Atomic Clock Subsystems: Quantifying Local Oscillator, Control Loop, and Ion Resonance Interactions.

    PubMed

    Enzer, Daphna G; Diener, William A; Murphy, David W; Rao, Shanti R; Tjoelker, Robert L

    2017-03-01

    Linear ion trap frequency standards are among the most stable continuously operating frequency references and clocks. Depending on the application, they have been operated with a variety of local oscillators (LOs), including quartz ultrastable oscillators, hydrogen-masers, and cryogenic sapphire oscillators. The short-, intermediate-, and long-term stability of the frequency output is a complicated function of the fundamental performances, the time dependence of environmental disturbances, the atomic interrogation algorithm, the implemented control loop, and the environmental sensitivity of the LO and the atomic system components. For applications that require moving these references out of controlled lab spaces and into less stable environments, such as fieldwork or spaceflight, a deeper understanding is needed of how disturbances at different timescales impact the various subsystems of the clock and ultimately the output stability. In this paper, we analyze which perturbations have an impact and to what degree. We also report on a computational model of a control loop, which keeps the microwave source locked to the ion resonance. This model is shown to agree with laboratory measurements of how well the feedback removes various disturbances and also with a useful analytic approach we developed for predicting these impacts.

  7. Beyond R0: Demographic Models for Variability of Lifetime Reproductive Output

    PubMed Central

    Caswell, Hal

    2011-01-01

    The net reproductive rate R0 measures the expected lifetime reproductive output of an individual, and plays an important role in demography, ecology, evolution, and epidemiology. Well-established methods exist to calculate it from age- or stage-classified demographic data. As an expectation, R0 provides no information on variability; empirical measurements of lifetime reproduction universally show high levels of variability, and often positive skewness, among individuals. This is often interpreted as evidence of heterogeneity, and thus of an opportunity for natural selection. However, variability provides evidence of heterogeneity only if it exceeds the level of variability to be expected in a cohort of identical individuals all experiencing the same vital rates. Such comparisons require a way to calculate the statistics of lifetime reproduction from demographic data. Here, a new approach is presented, using the theory of Markov chains with rewards, that yields all the moments of the distribution of lifetime reproduction. The approach applies to age- or stage-classified models, to constant, periodic, or stochastic environments, and to any kind of reproductive schedule. As examples, I analyze data from six empirical studies of a variety of animal and plant taxa (nematodes, polychaetes, humans, and several species of perennial plants). PMID:21738586
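
    The benchmark of "identical individuals experiencing the same vital rates" can also be approximated by simulation, as in the sketch below; the paper derives these moments analytically via Markov chains with rewards, and the vital rates here are invented.

```python
import numpy as np

# Monte Carlo sketch of the "identical individuals" benchmark: simulate
# lifetime reproductive output for a cohort all experiencing the same
# age-specific survival and fertility, then look at mean, variance and
# skewness.  The vital rates are invented placeholders.
rng = np.random.default_rng(0)
survival = np.array([0.7, 0.8, 0.8, 0.6, 0.3])   # P(survive age class i -> i+1)
fertility = np.array([0.0, 0.5, 1.5, 2.0, 1.0])  # expected offspring per age class

def lifetime_output():
    total, alive = 0, True
    for s, f in zip(survival, fertility):
        if not alive:
            break
        total += rng.poisson(f)          # reproduction while in this age class
        alive = rng.random() < s         # survive to the next age class?
    return total

R = np.array([lifetime_output() for _ in range(20_000)])
mean, var = R.mean(), R.var()
skew = np.mean(((R - mean) / R.std()) ** 3)
print(f"mean R0 ≈ {mean:.2f}, variance ≈ {var:.2f}, skewness ≈ {skew:.2f}")
```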

  8. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies]

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
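
    The temperature/efficiency coupling described above is a simple fixed-point problem, sketched below with invented cell and optical parameters; a crude radiative balance stands in for the full thermal model, and the ray-trace and single-diode circuit steps are omitted.

```python
# Coupled cell-temperature / efficiency iteration (illustrative values only).
conc_intensity = 2.5 * 1353.0   # W/m^2 reaching the cell (2.5 suns, AM0)
eta_ref, t_ref = 0.14, 28.0     # assumed reference efficiency at reference temperature (degC)
beta = 0.0005                   # assumed efficiency loss per degC
absorptance, emittance = 0.80, 0.85
sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)

t_cell = 60.0                   # initial guess, degC
for _ in range(50):
    eta = eta_ref - beta * (t_cell - t_ref)
    heat = absorptance * conc_intensity * (1.0 - eta)        # absorbed power not converted to electricity
    t_new = (heat / (2.0 * emittance * sigma)) ** 0.25 - 273.15   # radiate from both faces to space
    if abs(t_new - t_cell) < 1e-3:
        break
    t_cell = t_new

power = eta * conc_intensity * 4e-4   # W for a hypothetical 2 cm x 2 cm cell
print(f"T_cell ~ {t_cell:.1f} degC, efficiency ~ {eta:.3f}, cell output ~ {power:.2f} W")
```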

  9. Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles

    PubMed Central

    Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.

    2009-01-01

    The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials, treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382
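
    A stripped-down, single-input flavor of this estimation pipeline is sketched below: a discrete Laguerre basis is generated, the input spike train is convolved with each basis function, first-order kernel weights are fit by least squares, and a spike threshold is chosen from the prediction. Second-order kernels, the MIMO decomposition, full ROC construction, and Mann–Whitney input selection are omitted, and all signals and settings are synthetic inventions, not the authors' data or code.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(alpha, n_funcs, length):
    """Orthonormal discrete Laguerre functions with pole alpha (0 < alpha < 1)."""
    impulse = np.zeros(length)
    impulse[0] = 1.0
    basis = [lfilter([np.sqrt(1 - alpha**2)], [1.0, -alpha], impulse)]
    for _ in range(1, n_funcs):
        basis.append(lfilter([-alpha, 1.0], [1.0, -alpha], basis[-1]))  # all-pass cascade
    return np.array(basis)                       # shape (n_funcs, length)

rng = np.random.default_rng(2)
T = 5000
x_in = (rng.random(T) < 0.05).astype(float)      # synthetic input spike train
hidden = 0.9 * laguerre_basis(0.6, 1, 60)[0]     # hidden first-order kernel used to make data
drive = np.convolve(x_in, hidden)[:T] + 0.1 * rng.standard_normal(T)
y_out = (drive > 0.5).astype(float)              # synthetic output spike train

B = laguerre_basis(0.6, 5, 60)
V = np.stack([np.convolve(x_in, b)[:T] for b in B], axis=1)   # Laguerre regressors (T, 5)
coefs, *_ = np.linalg.lstsq(V, y_out, rcond=None)             # first-order kernel weights
u_hat = V @ coefs

def tpr_minus_fpr(th):                           # crude stand-in for ROC-based threshold choice
    pred = u_hat > th
    return pred[y_out == 1].mean() - pred[y_out == 0].mean()

threshold = max(np.linspace(u_hat.min(), u_hat.max(), 101), key=tpr_minus_fpr)
print("weights:", coefs.round(3), " chosen threshold:", round(float(threshold), 3))
```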

  10. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct-read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
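
    The role of an invertible primary-sensitivity matrix can be illustrated with a toy fixed-point iteration: the calibration fit maps loads to gage outputs through a linear matrix plus small nonlinear terms, and loads are recovered from measured outputs by repeatedly inverting the linear part. The two-component example below uses invented coefficients and is not tied to either of the paper's specific iteration equations.

```python
import numpy as np

B1 = np.array([[2.0, 0.1],
               [0.2, 1.5]])          # primary sensitivities (must be invertible)
B2 = 0.01                            # small quadratic cross-term coefficient

def outputs(F):
    """Toy calibration model: gage outputs as a function of the applied loads."""
    return B1 @ F + B2 * np.array([F[0] * F[1], F[0] ** 2])

F_true = np.array([3.0, -2.0])
R_meas = outputs(F_true)             # "measured" gage outputs

F = np.linalg.solve(B1, R_meas)      # first iterate: linear-only load estimate
for k in range(20):
    F_new = np.linalg.solve(B1, R_meas - B2 * np.array([F[0] * F[1], F[0] ** 2]))
    if np.max(np.abs(F_new - F)) < 1e-10:
        break
    F = F_new

print(f"converged in {k + 1} iterations to {F}")   # approaches F_true when B1 is well conditioned
```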

  11. Topic models: a novel method for modeling couple and family text data.

    PubMed

    Atkins, David C; Rubin, Timothy N; Steyvers, Mark; Doeden, Michelle A; Baucom, Brian R; Christensen, Andrew

    2012-10-01

    Couple and family researchers often collect open-ended linguistic data, either through free-response questionnaire items or through transcripts of interviews or therapy sessions. Because participants' responses are not forced into a set number of categories, text-based data can be very rich and revealing of psychological processes. At the same time, such data are highly unstructured and challenging to analyze. Within family psychology, analyzing text data typically means applying a coding system, which can quantify text data but also has several limitations, including the time needed for coding, difficulties with interrater reliability, and the need to define a priori what should be coded. The current article presents an alternative method for analyzing text data called topic models (Steyvers & Griffiths, 2006), which has not yet been applied within couple and family psychology. Topic models have similarities to factor analysis and cluster analysis in that they identify underlying clusters of words with semantic similarities (i.e., the "topics"). In the present article, a nontechnical introduction to topic models is provided, highlighting how these models can be used for text exploration and indexing (e.g., quickly locating text passages that share semantic meaning) and how output from topic models can be used to predict behavioral codes or other types of outcomes. Throughout the article, a collection of transcripts from a large couple-therapy trial (Christensen et al., 2004) is used as example data to highlight potential applications. Practical resources for learning more about topic models and how to apply them are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
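
    For readers who want to try the method, the snippet below runs a generic topic-model workflow (scikit-learn's latent Dirichlet allocation) on a four-document toy corpus; the per-document topic proportions it prints are the kind of output the article proposes using to predict behavioral codes. The corpus and settings are invented and unrelated to the article's transcript data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "we argue about money and bills every month",
    "the budget and spending cause most of our fights",
    "our kids and their school schedule keep us busy",
    "parenting the children together has been hard lately",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                     # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
print(lda.transform(X).round(2))   # per-document topic proportions, usable as predictors
```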

  12. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; White, John A.

    2015-01-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971
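
    A rough feel for the exponential integrate-and-fire comparison can be had from the sketch below, which drives a sharp and a shallow EIF variant at 95% of each model's own rheobase and asks how much added voltage noise changes the firing rate. Parameters are generic illustrative values, not fits to the recordings, and the comparison is qualitative.

```python
import numpy as np

def eif_rate(delta_t, sigma, i_frac=0.95, t_sim=10.0, dt=1e-4, seed=0):
    """Firing rate of an exponential integrate-and-fire neuron driven at a fixed
    fraction of its rheobase current, with additive white voltage noise."""
    gl, el, vt, vreset, vspike, c = 10e-9, -70e-3, -50e-3, -65e-3, 0.0, 200e-12
    i_rh = gl * (vt - el - delta_t)              # rheobase of this EIF variant
    i_in = i_frac * i_rh
    rng = np.random.default_rng(seed)
    v, spikes = el, 0
    for _ in range(int(t_sim / dt)):
        dv = (-gl * (v - el) + gl * delta_t * np.exp((v - vt) / delta_t) + i_in) / c
        v += dv * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v > vspike:
            v, spikes = vreset, spikes + 1
    return spikes / t_sim

for delta_t in (1e-3, 6e-3):              # sharp vs shallow exponential term
    r0 = eif_rate(delta_t, sigma=0.0)
    r1 = eif_rate(delta_t, sigma=0.02)    # adds roughly mV-scale voltage fluctuations
    print(f"Delta_T = {delta_t * 1e3:.0f} mV: {r0:.1f} Hz quiet, {r1:.1f} Hz with fluctuations")
```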

  14. Research Output of Academic Librarians from Irish Higher Education Institutions 2000-2015: Findings from a Review, Analysis, and Survey

    ERIC Educational Resources Information Center

    O'Brien, Terry; Cronin, Kieran

    2016-01-01

    The purpose of this paper is to quantify, review, and analyze the published research output of academic librarians from 21 higher education institutions in Ireland. A mixed approach, using an online survey questionnaire supplemented by content analysis and extensive literature scoping, was used for data collection. Factors inhibiting and predicting…

  15. Conjoint analysis: a pragmatic approach for the accounting of multiple benefits in southern forest management

    Treesearch

    F. Christian Zinkhan; Thomas P. Holmes; D. Evan Mercer

    1994-01-01

    With conjoint analysis as its foundation, a practical approach for measuring the utility and dollar value of non-market outputs from southern forests is described and analyzed. The approach can be used in the process of evaluating alternative silvicultural and broader natural resource management plans when non-market as well as market outputs are recognized. When...
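
    As a flavor of the mechanics (though not of the authors' survey design or their dollar-valuation step), conjoint part-worth utilities can be recovered by regressing stated ratings on dummy-coded attribute levels. Everything in the sketch below, including the attributes, levels, and ratings, is invented.

```python
import numpy as np
import pandas as pd

# Hypothetical rated management alternatives described by two attributes.
profiles = pd.DataFrame({
    "timber_harvest": ["low", "low", "high", "high", "medium", "medium"],
    "recreation":     ["yes", "no",  "yes",  "no",   "yes",    "no"],
    "rating":         [8,      5,     6,      7,      9,        6],
})

X = pd.get_dummies(profiles[["timber_harvest", "recreation"]], drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)
y = profiles["rating"].astype(float).to_numpy()

partworths, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)   # ordinary least squares
print(dict(zip(X.columns, np.round(partworths, 2))))            # part-worths relative to dropped levels
```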

  16. Weakly nonlinear behavior of a plate thickness-mode piezoelectric transformer.

    PubMed

    Yang, Jiashi; Chen, Ziguang; Hu, Yuantai; Jiang, Shunong; Guo, Shaohua

    2007-04-01

    We analyzed the weakly nonlinear behavior of a plate thickness-shear mode piezoelectric transformer near resonance. An approximate analytical solution was obtained. Numerical results based on the analytical solution are presented. It is shown that on one side of the resonant frequency the input-output relation becomes nonlinear, and on the other side the output voltage experiences jumps.
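
    The one-sided nonlinearity and output-voltage jumps described above are characteristic of a Duffing-type resonance. The generic harmonic-balance sketch below (arbitrary parameters, not the paper's plate equations) shows how the amplitude-frequency relation becomes multivalued on one side of resonance, which is where jumps occur.

```python
import numpy as np

# Steady-state amplitudes of x'' + c x' + w0^2 x + g x^3 = F cos(w t) by harmonic balance.
w0, c, g, F = 1.0, 0.01, 0.5, 0.005        # hardening cubic stiffness (arbitrary values)

for w in np.linspace(0.98, 1.06, 17):
    d = w0**2 - w**2
    # |A| solves a cubic in u = A^2:  (0.75 g)^2 u^3 + 1.5 g d u^2 + (d^2 + (c w)^2) u - F^2 = 0
    roots = np.roots([(0.75 * g) ** 2, 1.5 * g * d, d**2 + (c * w) ** 2, -F**2])
    amps = sorted(np.sqrt(r.real) for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    marker = "  <- multivalued (jump region)" if len(amps) == 3 else ""
    print(f"w = {w:.3f}: " + ", ".join(f"{a:.3f}" for a in amps) + marker)
```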

  17. Thermo Scientific Ozone Analyzer Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springston, S. R.

    The primary measurement output from the Thermo Scientific Ozone Analyzer is the concentration of the analyte (O3) reported at 1-s resolution in units of ppbv in ambient air. Note that because of internal pneumatic switching limitations the instrument only makes an independent measurement every 4 seconds. Thus, the same concentration number is repeated roughly 4 times at the uniform, monotonic 1-s time base used in the AOS systems. Accompanying instrument outputs include sample temperatures, flows, chamber pressure, lamp intensities, and a multiplicity of housekeeping information. There is also a field for operator comments made at any time while data is being collected.
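
    When post-processing these files, the roughly 4-s independent measurements can be recovered from the 1-s stream by dropping consecutive repeats, as in the sketch below. The column names are hypothetical placeholders rather than the instrument's actual field names, and note that genuinely identical back-to-back independent readings would also be merged by this simple rule.

```python
import pandas as pd

# Toy 1-s record with each value repeated ~4 times (hypothetical column names).
df = pd.DataFrame({
    "time": pd.date_range("2024-01-01", periods=12, freq="s"),
    "o3_ppbv": [31.2] * 4 + [30.8] * 4 + [31.5] * 4,
})

# Keep only rows where the concentration changes from the previous row.
independent = df[df["o3_ppbv"].ne(df["o3_ppbv"].shift())]
print(independent)
```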

  18. System and method for resolving gamma-ray spectra

    DOEpatents

    Gentile, Charles A.; Perry, Jason; Langish, Stephen W.; Silber, Kenneth; Davis, William M.; Mastrovito, Dana

    2010-05-04

    A system for identifying radionuclide emissions is described. The system includes at least one processor for processing output signals from a radionuclide detecting device; at least one training algorithm, run by the at least one processor, for analyzing data derived from at least one set of known sample data from the output signals; and at least one classification algorithm, derived from the training algorithm, for classifying unknown sample data. The at least one training algorithm analyzes the at least one sample data set to derive at least one rule used by the classification algorithm for identifying at least one radionuclide emission detected by the detecting device.
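
    The patent's train-then-classify structure can be mimicked with any off-the-shelf learner; the sketch below trains a decision tree (a rule-deriving classifier) on synthetic two-nuclide spectra and then labels unknown spectra. Nothing here reflects the patented algorithms themselves; the spectra, labels, and model choice are all invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
channels = np.arange(128)

def spectrum(peak):
    """Toy detector spectrum: Poisson background plus a Gaussian photopeak."""
    counts = rng.poisson(5, 128) + 200 * np.exp(-0.5 * ((channels - peak) / 3.0) ** 2)
    return rng.poisson(counts)

# "Known sample data": 50 spectra each for two hypothetical nuclides.
X_train = np.array([spectrum(p) for p in [40] * 50 + [90] * 50])
y_train = np.array(["nuclide_A"] * 50 + ["nuclide_B"] * 50)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict([spectrum(40), spectrum(90)]))   # classify two unknown spectra
```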

  19. A scientometric analysis of Indian research output in medicine during 1999–2008

    PubMed Central

    Gupta, B. M.; Bala, Adarsh

    2011-01-01

    Objective: This study analyzes the research activities of India in medicine during 1999–2008, based on the total publication output, its growth rate, the quality of papers published, and the rank of India in the global context. Patterns of international collaborative research output and the major partner countries of India are also discussed. This study also evaluates the research performance of different types of Indian medical colleges, hospitals, research institutes, universities, and research foundations, and the characteristics of literature published in Indian and foreign journals. It also analyzes the medical research output by disease and organ. Materials and Methods: The publication data on medicine were retrieved using the SCOPUS database. Results: India holds the 12th rank among productive countries in medicine research, with 65,745 papers, a global publication share of 1.59%, and a growth rate of 76.68% from 1999–2003 to 2004–2008. Conclusion: High-quality research in India is grossly inadequate and requires strategic planning, investment, and resource support. There is also a need to improve the existing medical education system, which should foster a research culture. PMID:22470241

  20. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    NASA Astrophysics Data System (ADS)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurement, together with an unknown parameter, in a Nm model. We suppose that the yearly Nm-induced mortality and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.
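
    The observer idea, replacing the unknown transmission term with the measured incidence so that the estimation error decays without knowledge of the transmission rate, can be illustrated on a toy SIR-type system as sketched below. This is a simplified stand-in, not the paper's Nm model or its convergence proof; all parameters and initial conditions are invented.

```python
import numpy as np

N, beta, gamma, dt = 1e6, 0.35, 0.1, 1.0
S, I = N - 100.0, 100.0                      # "true" (unmeasured) states
S_hat, I_hat = N - 5.0, 5.0                  # observer states, deliberately wrong initial guess
beta_est = []

for k in range(100):
    y = beta * S * I / N                     # measured output: incidence at step k
    beta_est.append(y * N / (S_hat * max(I_hat, 1e-9)))   # algebraic transmission-rate estimate
    S, I = S - y * dt, I + (y - gamma * I) * dt           # true dynamics
    S_hat = S_hat - y * dt                   # observer: the measured term replaces beta*S*I/N
    I_hat = I_hat + (y - gamma * I_hat) * dt # estimation error for I decays as (1 - gamma*dt)^k

print(f"I = {I:.0f} vs I_hat = {I_hat:.0f};  beta estimate ~ {np.mean(beta_est[-50:]):.3f} (true {beta})")
```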
