Sample records for generation dirsig model

  1. Empirical measurement and model validation of infrared spectra of contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay

    2015-05-01

    Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases where the liquid contamination was well described by optical constants on optically flat surfaces were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 watts/(m2-sr-μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
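The two comparison metrics this abstract reports, spectral angle between emissivity spectra and RMS error between radiance spectra, can be sketched as follows. This is a minimal illustration with made-up sample values, not the paper's data or code:

```python
import math

def spectral_angle_deg(a, b):
    """Spectral angle between two spectra, in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp against floating-point overshoot before acos
    c = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(c))

def rms_error(measured, modeled):
    """Root-mean-square difference between two radiance spectra."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, modeled)) / n)

# Hypothetical emissivity samples (illustrative only)
measured_emis = [0.92, 0.90, 0.88, 0.91]
modeled_emis  = [0.91, 0.90, 0.89, 0.90]
print(spectral_angle_deg(measured_emis, modeled_emis))  # well under 1 degree
```

A spectral angle near zero means the two spectra have the same shape regardless of overall scale, which is why it pairs naturally with an absolute RMS metric in the radiance domain.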

  2. Utilization of DIRSIG in support of real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Brown, Scott D.

    2000-07-01

    Real-time infrared scene generation for hardware-in-the-loop has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent, first-principles-based sub-models, each of which works in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background, or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.

  3. Synthesis and Analysis of Custom Bi-directional Reflectivity Distribution Functions in DIRSIG

    NASA Astrophysics Data System (ADS)

    Dank, J.; Allen, D.

    2016-09-01

    The bi-directional reflectivity distribution function (BRDF) is a fundamental optical property of materials, characterizing important properties of light scattered by a surface. For accurate radiance calculations using synthetic targets and numerical simulations such as the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, the fidelity of the target BRDFs is critical. While fits to measured BRDF data can be used in DIRSIG, obtaining high-quality data over a large spectral continuum can be time-consuming and expensive, requiring significant investment in illumination sources, sensors, and other specialized hardware. As a consequence, numerous parametric BRDF models are available to approximate actual behavior, but these all have shortcomings. Further, DIRSIG does not allow direct visualization of BRDFs, making it difficult for the user to understand the numerical impact of various models. Here, we discuss the innovative use of "mixture maps" to synthesize custom BRDFs as linear combinations of parametric models and measured data. We also show how DIRSIG's interactive mode can be used to visualize and analyze both the parametric models currently available in DIRSIG and custom BRDFs developed using our methods.
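The core idea above, a custom BRDF built as a weighted sum of component BRDFs, can be sketched in a few lines. The component models and weights below are hypothetical stand-ins, not the paper's mixture-map format:

```python
import math

def blend_brdf(components, weights):
    """Build a custom BRDF as a weighted sum of component BRDFs.

    components: callables f(theta_i, theta_r, phi) -> reflectance
    weights:    non-negative weights summing to 1 (conceptually, one
                'mixture map' entry for one surface location)
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    def brdf(theta_i, theta_r, phi):
        return sum(w * f(theta_i, theta_r, phi)
                   for f, w in zip(components, weights))
    return brdf

# Hypothetical components: a Lambertian term and a crude specular lobe
lambertian = lambda ti, tr, phi: 0.3 / math.pi
specular   = lambda ti, tr, phi: math.exp(-((ti - tr) ** 2) / 0.01)

custom = blend_brdf([lambertian, specular], [0.7, 0.3])
print(custom(0.4, 0.4, 0.0))
```

Because the blend is linear, the custom BRDF inherits energy conservation from its components as long as the weights are a convex combination.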

  4. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  5. The characterization of a DIRSIG simulation environment to support the inter-calibration of spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-09-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.
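The Ross-Li capability mentioned above refers to the MODIS-style kernel-driven BRDF model, in which reflectance is an isotropic weight plus weighted volumetric (RossThick) and geometric (LiSparse-Reciprocal) kernels. A minimal sketch of the standard kernel formulas (with the usual shape ratios h/b = 2, b/r = 1) is below; this is a generic implementation for illustration, not DIRSIG's code:

```python
import math

def _clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def ross_thick(ts, tv, phi):
    """RossThick volumetric kernel; ts, tv are solar/view zenith angles
    and phi the relative azimuth, all in radians."""
    cos_xi = (math.cos(ts) * math.cos(tv)
              + math.sin(ts) * math.sin(tv) * math.cos(phi))
    xi = math.acos(_clamp(cos_xi))
    return (((math.pi / 2 - xi) * cos_xi + math.sin(xi))
            / (math.cos(ts) + math.cos(tv)) - math.pi / 4)

def li_sparse_r(ts, tv, phi, hb=2.0, br=1.0):
    """LiSparse-Reciprocal geometric kernel."""
    tsp = math.atan(br * math.tan(ts))   # equivalent angles for b/r != 1
    tvp = math.atan(br * math.tan(tv))
    cos_xi = (math.cos(tsp) * math.cos(tvp)
              + math.sin(tsp) * math.sin(tvp) * math.cos(phi))
    d = math.sqrt(max(0.0, math.tan(tsp) ** 2 + math.tan(tvp) ** 2
                      - 2 * math.tan(tsp) * math.tan(tvp) * math.cos(phi)))
    sec_sum = 1 / math.cos(tsp) + 1 / math.cos(tvp)
    cos_t = _clamp(hb * math.sqrt(d ** 2 + (math.tan(tsp) * math.tan(tvp)
                                            * math.sin(phi)) ** 2) / sec_sum,
                   0.0, 1.0)
    t = math.acos(cos_t)
    overlap = (t - math.sin(t) * cos_t) * sec_sum / math.pi
    return overlap - sec_sum + 0.5 * (1 + cos_xi) / (math.cos(tsp) * math.cos(tvp))

def ross_li(f_iso, f_vol, f_geo, ts, tv, phi):
    """Kernel-driven reflectance from the three MODIS BRDF weights."""
    return f_iso + f_vol * ross_thick(ts, tv, phi) + f_geo * li_sparse_r(ts, tv, phi)

# Example: hypothetical MCD43-style weights at a 30-degree sun angle
print(ross_li(0.10, 0.05, 0.02, math.radians(30), math.radians(10), math.radians(90)))
```

Both kernels are defined to vanish at nadir sun and view, so the isotropic weight alone gives the nadir/overhead-sun reflectance.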

  6. Generation of a Combined Dataset of Simulated Radar and Electro-Optical Imagery

    DTIC Science & Technology

    2005-10-05

    directional reflectance distribution function (BRDF) predictions and the geometry of a line scanner. Using programs such as MODTRAN and FASCODE, images can be...DIRSIG tries to accurately model scenes through various approaches that model real-world occurrences. MODTRAN is an atmospheric radiative transfer code...used to predict path transmissions and radiances within the atmosphere (DIRSIG Manual, 2004). FASCODE is similar to MODTRAN, however it works as a

  7. Efficient generation of image chips for training deep learning algorithms

    NASA Astrophysics Data System (ADS)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real-world changes in lighting, location, or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images with the target at the center and different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data was particularly challenging.
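The parallel chip-generation driver described above can be sketched as a task sweep over target, background, and illumination, fanned out across a worker pool. The `render_chip` stub and its task tuple are hypothetical stand-ins for an actual DIRSIG invocation; a real run would dispatch processes or cluster jobs rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def render_chip(task):
    """Stub standing in for a DIRSIG render (hypothetical interface).

    task: (target_type_or_None, background_id, sun_elevation_deg)
    Returns a binary label (target present?) and a chip identifier.
    """
    target, background, sun_el = task
    label = 1 if target is not None else 0   # chips with/without a target
    return label, f"chip_{target}_{background}_{sun_el}"

# Hypothetical sweep: 3 target cases x 4 backgrounds x 3 sun elevations
tasks = [(t, b, s) for t in ("vehicle", "helicopter", None)
                   for b in range(4)
                   for s in (20, 45, 70)]

with ThreadPoolExecutor(max_workers=8) as pool:
    chips = list(pool.map(render_chip, tasks))

print(len(chips))  # 3 * 4 * 3 = 36 chips
```

Because each chip render is independent, the sweep is embarrassingly parallel, which is what makes figures like 120,000 chips per hour plausible on a modest cluster.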

  8. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.
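A generic form of the SNR budget such a low-SNR sensor trade study manipulates is the shot-noise/read-noise model, in which photon noise from signal, background, and dark current adds in quadrature with read noise. The parameter values below are hypothetical, not from the thesis:

```python
import math

def snr(signal_e, dark_e, background_e, read_noise_e):
    """Detector SNR for a single integration, all terms in electrons.

    Shot-noise variances (signal, background, dark) add linearly;
    read noise contributes its variance (read_noise_e squared).
    """
    noise = math.sqrt(signal_e + background_e + dark_e + read_noise_e ** 2)
    return signal_e / noise

# Hypothetical low-light case: 50 signal electrons, read-noise dominated
print(snr(signal_e=50, dark_e=5, background_e=20, read_noise_e=10))
```

The model makes the design trades explicit: at low signal levels, halving read noise helps far more than it would in a shot-noise-limited (bright) scene, which is why low-SNR detector comparisons focus so heavily on that parameter.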

  9. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field-testing component of sensor performance evaluation, which is expensive, resource-intensive, time-consuming, and limited to the available target(s) and the existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging and Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost and time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated and field-collected imagery is the first step in validating the simulated-imagery procedure.

  10. Modeling of forest canopy BRDF using DIRSIG

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan; Schott, John R.

    2016-05-01

    The characterization and temporal analysis of multispectral and hyperspectral data to extract the biophysical information of the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). The advancements in the field of remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations on both the data collection techniques and the range of illumination angles that can be collected from the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. Field-collected spectra from the Harvard Forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
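Fitting sampled BRDF values to a semi-empirical kernel-driven model is a linear least-squares problem in the kernel weights. The sketch below solves the normal equations for a three-parameter model (isotropic + volumetric + geometric weights) and reports the fit RMSE; the kernel values and reflectances are hypothetical, and the paper's specific semi-empirical model is not stated here:

```python
import math

def fit_kernel_weights(k_vol, k_geo, refl):
    """Least-squares fit of refl ~ f_iso + f_vol*k_vol + f_geo*k_geo."""
    n = len(refl)
    basis = [[1.0] * n, list(k_vol), list(k_geo)]
    # Normal equations: (B^T B) w = B^T y
    ata = [[sum(bi[m] * bj[m] for m in range(n)) for bj in basis] for bi in basis]
    aty = [sum(bi[m] * refl[m] for m in range(n)) for bi in basis]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    w = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        w[r] = (aty[r] - sum(ata[r][c] * w[c] for c in range(r + 1, 3))) / ata[r][r]
    return w  # [f_iso, f_vol, f_geo]

def rmse(k_vol, k_geo, refl, w):
    res = [refl[m] - (w[0] + w[1] * k_vol[m] + w[2] * k_geo[m])
           for m in range(len(refl))]
    return math.sqrt(sum(e * e for e in res) / len(res))

# Hypothetical kernel values at six view/sun geometries
k_vol = [0.00, 0.05, 0.12, -0.03, 0.08, 0.15]
k_geo = [0.00, -0.4, -0.9, -0.2, -0.6, -1.1]
refl  = [0.30, 0.31, 0.33, 0.295, 0.32, 0.335]
w = fit_kernel_weights(k_vol, k_geo, refl)
print(w, rmse(k_vol, k_geo, refl, w))
```

Once the weights are fitted from a limited set of simulated geometries, the model extrapolates the BRDF over the full hemisphere, which is exactly the role the abstract describes for the semi-empirical fit.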

  11. Empirical Measurement and Model Validation of Infrared Spectra of Contaminated Surfaces

    NASA Astrophysics Data System (ADS)

    Archer, Sean

    The goal of this thesis was to validate predicted infrared spectra of liquid-contaminated surfaces from a micro-scale bi-directional reflectance distribution function (BRDF) model through the use of empirical measurement. Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model utilizes radiative transfer modeling to generate synthetic imagery for a variety of applications. Aside from DIRSIG, a micro-scale model known as microDIRSIG has been developed as a rigorous, ray-tracing, physics-based model that can predict the BRDF of geometric surfaces defined as micron- to millimeter-resolution facets. The model offers an extension of conventional BRDF models by allowing contaminants to be added as geometric objects to a micro-facet surface. This model was validated through the use of Fourier transform infrared spectrometer measurements. A total of 18 different substrate and contaminant combinations were measured and compared against modeled outputs. The substrates used in this experiment were wood and aluminum with three different paint finishes: no paint, Krylon ultra-flat black, and Krylon glossy black. A silicone-based oil (SF96) was measured out and applied to each surface to create three different contamination cases per surface. Radiance in the longwave infrared region of the electromagnetic spectrum was measured by a Design and Prototypes (D&P) Fourier transform infrared spectrometer and a Physical Sciences Inc. Adaptive Infrared Imaging Spectroradiometer (AIRIS). The model outputs were compared against the measurements quantitatively in both the emissivity and radiance domains. A temperature-emissivity separation (TES) algorithm had to be applied to the measured radiance spectra for comparison with the microDIRSIG-predicted emissivity spectra. The model-predicted emissivity spectra were also forward-modeled through a DIRSIG simulation for comparison with the radiance measurements. The results showed promising agreement for homogeneous surfaces with liquid contamination that could be well characterized geometrically. Limitations arose in substrates that were modeled as homogeneous surfaces but had spatially varying artifacts due to uncertainties in contaminant and surface interactions. There is a strong demand for accurate physics-based modeling of liquid-contaminated surfaces, and this validation framework may be extended to include a wider array of samples representing the more realistic natural surfaces often found in real-world scenarios.
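The emissivity-domain comparison above rests on the relation between measured radiance and blackbody (Planck) radiance: at a known surface temperature, single-band emissivity is simply their ratio. A full TES algorithm must also estimate the temperature from the spectrum; the sketch below assumes the temperature is known, which is the simplest case, and uses hypothetical values:

```python
import math

H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0
    return a / b

def emissivity(measured_radiance, wavelength_m, temp_k):
    """Single-band emissivity given a known surface temperature."""
    return measured_radiance / planck_radiance(wavelength_m, temp_k)

# Hypothetical: a 10 um band, 300 K surface with true emissivity 0.95
wl, t = 10e-6, 300.0
l_meas = 0.95 * planck_radiance(wl, t)
print(emissivity(l_meas, wl, t))  # recovers 0.95
```

The hard part of real TES, separating temperature from emissivity when neither is known, is exactly why the thesis had to apply a dedicated TES algorithm before comparing against the predicted emissivity spectra.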

  12. Data-driven simulations of the Landsat Data Continuity Mission (LDCM) platform

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Gartley, Mike; Schott, John; Raqueño, Nina; Raqueño, Rolando

    2011-06-01

    The Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) are two new sensors being developed by the Landsat Data Continuity Mission (LDCM) that will extend over 35 years of archived Landsat data. In a departure from the whiskbroom design used by all previous generations of Landsat, the LDCM system will employ a pushbroom technology. Although the newly adopted modular-array, pushbroom architecture has several advantages over the previous whiskbroom design, registration of the multi-spectral data products is a concern. In this paper, the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to simulate an LDCM collection, which gives the team access to data that would not otherwise be available prior to launch. The DIRSIG model was used to simulate the two-instrument LDCM payload in order to study the geometric and radiometric impacts of the sensor design on the proposed processing chain. The Lake Tahoe area located in eastern California was chosen for this work because of its dramatic change in elevation, which was ideal for studying the geometric effects of the new Landsat sensor design. Multi-modal datasets were used to create the Lake Tahoe site model for use in DIRSIG. National Elevation Dataset (NED) data were used to create the digital elevation map (DEM) required by DIRSIG, QuickBird data were used to identify different material classes in the scene, and ASTER and Hyperion spectral data were used to assign radiometric properties to those classes. In order to model a realistic Landsat orbit in these simulations, orbital parameters were obtained from a Landsat 7 two-line element set and propagated with the SGP4 orbital position model. Line-of-sight vectors defining how the individual detector elements of the OLI and TIRS instruments project through the optics were measured and provided by NASA. Additionally, the relative spectral response functions for the 9 bands of OLI and the 2 bands of TIRS were measured and provided by NASA. The instruments were offset on the virtual satellite, and data recorders were used to generate ephemeris data for downstream processing. Finally, potential platform jitter spectra were measured and provided by NASA and incorporated into the simulations. Simulated imagery generated by the model was incrementally provided to the rest of the LDCM team in a spiral development cycle to constantly refine the simulations.

  13. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high-contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high-fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
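One stage of such a sensor chain, photon shot noise, additive read noise, then clipping at the full-well capacity (the mechanism behind saturation and blooming-style artifacts), can be sketched per pixel as below. The MTF stage is omitted, and all parameter values are hypothetical:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample: Knuth's method for small means, a Gaussian
    approximation for large ones (where Knuth's product underflows)."""
    if lam > 50:
        return max(0.0, rng.gauss(lam, math.sqrt(lam)))
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sense_pixel(mean_photons, read_noise_e, full_well_e, rng):
    """One stage of a low-light sensor chain for a single pixel."""
    electrons = poisson(mean_photons, rng)       # photon shot noise
    electrons += rng.gauss(0.0, read_noise_e)    # additive read noise
    return min(max(electrons, 0.0), full_well_e) # clip to [0, full well]

rng = random.Random(42)
dark_scene  = [sense_pixel(5.0, 2.0, 1000.0, rng) for _ in range(1000)]
bright_spot = [sense_pixel(5e4, 2.0, 1000.0, rng) for _ in range(10)]
print(sum(dark_scene) / len(dark_scene))  # near the 5-electron mean
print(bright_spot[:3])                    # clipped at the full well
```

Chaining several such stages, each with its own gain, noise, and MTF, is the structure the paper describes for the multi-stage LLL sensor model.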

  14. Signature simulation of mixed materials

    NASA Astrophysics Data System (ADS)

    Carson, Tyler D.; Salvaggio, Carl

    2015-05-01

    Soil target signatures vary due to geometry, chemical composition, and scene radiometry. Although radiative transfer models and function-fit physical models may describe certain targets in limited depth, the ability to incorporate all three signature variables is difficult. This work describes a method to simulate the transient signatures of soil by first considering scene geometry synthetically created using 3D physics engines. Through the assignment of spectral data from the Nonconventional Exploitation Factors Data System (NEFDS), the synthetic scene is represented as a physical mixture of particles. Finally, first-principles radiometry is modeled using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. With DIRSIG, radiometric and sensing conditions were systematically manipulated to produce and record goniometric signatures. The implementation of this virtual goniometer allows users to examine how a target bidirectional reflectance distribution function (BRDF) will change with geometry, composition, and illumination direction. By using 3D computer graphics models, this process does not require geometric assumptions that are native to many radiative transfer models. It delivers a discrete method to circumvent the significant cost of time and treasure associated with hardware-based goniometric data collections.

  15. Modeling the Radar Return of Powerlines Using an Incremental Length Diffraction Coefficient Approach

    NASA Astrophysics Data System (ADS)

    Macdonald, Douglas

    A method for modeling the signal from cables and powerlines in Synthetic Aperture Radar (SAR) imagery is presented. Powerline detection using radar is an active area of research. Accurately identifying the location of powerlines in a scene can be used to aid pilots of low-flying aircraft in collision avoidance, or to map the electrical infrastructure of an area. The focus of this research was on the forward modeling problem of generating the powerline SAR signal from first principles. Previous work on simulating SAR imagery involved methods that ranged from efficient but insufficiently accurate, depending on the application, to more exact but computationally complex. A brief survey of the numerous ways to model the scattering of electromagnetic radiation is provided. A popular tool that uses the geometric optics approximation for modeling imagery for remote sensing applications across a wide range of modalities is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool. This research shows that the way in which DIRSIG generates the SAR phase history is unique compared to other methods used. In particular, DIRSIG uses the geometric optics approximation for the scattering of electromagnetic radiation and builds the phase history in the time domain on a pulse-by-pulse basis. This enables efficient generation of the phase history of complex scenes. The drawback to this method is the inability to account for diffraction. Since the characteristic diameter of many communication cables and powerlines is on the order of the wavelength of the incident radiation, diffraction is the dominant mechanism by which the radiation gets scattered for these targets. Comparison of DIRSIG imagery to field data shows good scene-wide qualitative agreement as well as Rayleigh-distributed noise in the amplitude data, as expected for coherent imaging with speckle. A closer inspection of the Radar Cross Sections of canonical targets such as trihedrals and dihedrals, however, shows DIRSIG consistently underestimated the scattered return, especially away from specular observation angles. This underestimation was particularly pronounced for the dihedral targets, which have a low acceptance angle in elevation, probably caused by the lack of a physical optics capability in DIRSIG. Powerlines were not apparent in the simulated data.

    For modeling powerlines outside of DIRSIG using a standalone approach, an Incremental Length Diffraction Coefficient (ILDC) method was used. Traditionally, this method is used to model the scattered radiation from the edge of a wedge, for example the edges on the wings of a stealth aircraft. The Physical Theory of Diffraction provides the 2D diffraction coefficient, and the ILDC method performs an integral along the edge to extend this solution to three dimensions. This research takes the ILDC approach but, instead of using the wedge diffraction coefficient, uses the exact far-field diffraction coefficient for scattering from a finite-length cylinder. Wavenumber-diameter products are limited to less than or about 10. For typical powerline diameters, this translates to X-band frequencies and lower. The advantage of this method is that it allows exact 2D solutions to be extended to powerline geometries where sag is present, and it is shown to be more accurate than a pure physical optics approach for frequencies lower than millimeter wave. The Radar Cross Sections produced by this method were accurate to within the experimental uncertainty of measured RF anechoic chamber data for both X and C-band frequencies across an 80 degree arc for 5 different target types and diameters. For the X-band data, the mean error was 6.0% for data with 9.5% measurement uncertainty. For the C-band data, the mean error was 11.8% for data with 14.3% measurement uncertainty. The best results were obtained for X-band data in the HH polarization channel within a 20 degree arc about normal incidence. For this configuration, a mean error of 3.0% for data with a measurement uncertainty of 5.2% was obtained. The least accurate results were obtained for X-band data in the VV polarization channel within a 20 degree arc about normal incidence. For this configuration, a mean error of 8.9% for data with a measurement uncertainty of 5.9% was obtained. This error likely arose from making the smooth-cylinder assumption, which neglects the semi-open waveguide TE contribution from the grooves in the helically wound powerline. For field data in an actual X-band circular SAR collection, a mean error of 3.3% for data with a measurement uncertainty of 3.3% was obtained in the HH channel. For the VV channel, a mean error of 9.9% was obtained for data with a measurement uncertainty of 3.4%. Future work for improving this method would likely entail adding a far-field semi-open waveguide contribution to the 2D diffraction coefficient for TE-polarized radiation. Accounting for second-order diffractions between closely spaced powerlines would also lead to improved accuracy for simulated field data.
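The Rayleigh-distributed amplitude noise noted in the DIRSIG-versus-field comparison is the standard signature of fully developed speckle: the coherent sum of many scatterers with random phases yields a complex field whose magnitude is Rayleigh distributed. A quick numerical check of that property (mean/std of a Rayleigh variable is sqrt((pi/2)/(2 - pi/2)), about 1.91) can be sketched as:

```python
import cmath
import math
import random

def speckle_amplitude(n_scatterers, rng):
    """Amplitude of a coherent sum of unit scatterers with random phases."""
    field = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_scatterers))
    return abs(field)

rng = random.Random(7)
amps = [speckle_amplitude(50, rng) for _ in range(20000)]

mean = sum(amps) / len(amps)
std = math.sqrt(sum((a - mean) ** 2 for a in amps) / len(amps))
# Rayleigh prediction: mean/std = sqrt((pi/2) / (2 - pi/2)) ~ 1.91
print(mean / std)
```

Seeing this ratio in simulated amplitude data is a cheap sanity check that a coherent SAR simulation is producing physically plausible speckle statistics.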

  16. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has become possible as scene rendering technology and software have advanced. This in turn has created questions related to the validity of such complex models, with phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) potentially impacting results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. The initial comparison of the spectral distributions indicated that the shapes of the spectra from the virtual and real sites were closely matched.
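
The distribution-comparison idea can be sketched with two generic statistics; the function names and the choice of spectral angle and a Kolmogorov-Smirnov distance are illustrative, not necessarily the statistics used in the study.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: maximum gap between
    the empirical CDFs of two per-pixel statistic distributions."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))
```

A per-pixel statistic (e.g., spectral angle to the site mean) would be computed for both the simulated and real image, and the two resulting distributions compared with the KS statistic.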

  17. Simulation of Image Performance Characteristics of the Landsat Data Continuity Mission (LDCM) Thermal Infrared Sensor (TIRS)

    NASA Technical Reports Server (NTRS)

    Schott, John; Gerace, Aaron; Brown, Scott; Gartley, Michael; Montanaro, Matthew; Reuter, Dennis C.

    2012-01-01

    The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic "on-orbit" TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.

  18. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms requires well-characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data, and surface properties can be supplied via temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
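
A minimal sketch of deriving a TIN from raster DEM data, one of the background construct types described above. The gridded input and the SciPy Delaunay triangulation are assumptions for illustration, not DIRSIG's internal method.

```python
import numpy as np
from scipy.spatial import Delaunay

def dem_to_tin(dem, cell_size=1.0):
    """Triangulate a raster DEM into a TIN: returns (vertices, faces).

    vertices: (rows*cols, 3) array of x, y, z positions
    faces:    (M, 3) vertex indices from a 2D Delaunay triangulation
    """
    rows, cols = dem.shape
    ys, xs = np.mgrid[0:rows, 0:cols] * cell_size
    vertices = np.column_stack([xs.ravel(), ys.ravel(), dem.ravel()])
    faces = Delaunay(vertices[:, :2]).simplices  # triangulate in the x-y plane
    return vertices, faces
```

A production TIN would typically decimate flat regions rather than keep every grid cell, but the facet structure handed to the renderer is the same.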

  19. Hyperspectral monitoring of chemically sensitive plant sentinels

    NASA Astrophysics Data System (ADS)

    Simmons, Danielle A.; Kerekes, John P.; Raqueno, Nina G.

    2009-08-01

    Automated detection of chemical threats is essential for early warning of a potential attack. Harnessing plants as bio-sensors allows for distributed sensing without a power supply. Monitoring the bio-sensors requires a specifically tailored hyperspectral system. Tobacco plants have been genetically engineered to de-green when a material of interest (e.g., zinc, TNT) is introduced to their immediate vicinity. The reflectance spectra of the bio-sensors must be accurately characterized during the de-greening process for them to play a role in an effective warning system. Hyperspectral data have been collected under laboratory conditions to determine the key regions in the reflectance spectra associated with the de-greening phenomenon. Bio-sensor plants and control (non-genetically engineered) plants were exposed to TNT over the course of two days and their spectra were measured every six hours. Rochester Institute of Technology's Digital Imaging and Remote Sensing Image Generation (DIRSIG) model was used to simulate detection of de-greened plants in the field. The simulated scene contains a brick school building, sidewalks, trees, and the bio-sensors placed at the entrances to the buildings. Trade studies of the bio-sensor monitoring system were also conducted using DIRSIG simulations. System performance was studied as a function of field of view, pixel size, illumination conditions, radiometric noise, spectral waveband dependence, and spectral resolution. Preliminary results show that the most significant change in reflectance during the de-greening period occurs in the near infrared region.
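
A simple proxy for the de-greening signal described above might track the drop in mean near-infrared reflectance between collection times. The band limits and function name are illustrative, not from the paper.

```python
import numpy as np

def degreening_index(refl_t0, refl_t1, wavelengths, band=(750.0, 900.0)):
    """Mean NIR reflectance drop between two measurement times; a
    positive value indicates loss of the healthy-vegetation NIR plateau."""
    mask = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return float(np.mean(refl_t0[mask]) - np.mean(refl_t1[mask]))
```

In an operational monitor this index would be evaluated for each six-hour measurement and compared against the spread observed in control plants.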

  20. Visible and thermal spectrum synthetic image generation with DIRSIG and MuSES for ground vehicle identification training

    NASA Astrophysics Data System (ADS)

    May, Christopher M.; Maurer, Tana O.; Sanders, Jeffrey S.

    2017-05-01

    There is a ubiquitous and never-ending need in the US armed forces for training materials that provide the warfighter with the skills needed to differentiate between friendly and enemy forces on the battlefield. The current state of the art in battlefield identification training is the Recognition of Combat Vehicles (ROC-V) tool created and maintained by the Communications-Electronics Research, Development and Engineering Center Night Vision and Electronic Sensors Directorate (CERDEC NVESD). The ROC-V training package utilizes measured visual and thermal imagery to train soldiers on the critical visual and thermal cues needed to accurately identify modern military vehicles and combatants. This paper presents an approach that has been developed to augment the existing ROC-V imagery database with synthetically generated multi-spectral imagery that will allow NVESD to provide improved training imagery at significantly lower cost.

  1. Towards an improved LAI collection protocol via simulated field-based PAR sensing

    DOE PAGES

    Yao, Wei; Van Leeuwen, Martin; Romanczyk, Paul; ...

    2016-07-14

    In support of NASA's next-generation spectrometer, the Hyperspectral Infrared Imager (HyspIRI), we are working towards assessing sub-pixel vegetation structure from imaging spectroscopy data. Of particular interest is Leaf Area Index (LAI), which is an informative, yet notoriously challenging, parameter to efficiently measure in situ. While photosynthetically active radiation (PAR) sensors have been validated for measuring crop LAI, there is limited literature on the efficacy of PAR-based LAI measurement in the forest environment. This study (i) validates PAR-based LAI measurement in forest environments, and (ii) proposes a suitable collection protocol, which balances efficiency with measurement variation, e.g., due to sun flecks and various-sized canopy gaps. A synthetic PAR sensor model was developed in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and used to validate LAI measurement based on first principles and explicitly known leaf geometry. Simulated collection parameters were adjusted to empirically identify optimal collection protocols. These collection protocols were then validated in the field by correlating PAR-based LAI measurement with the normalized difference vegetation index (NDVI) extracted from the "classic" Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) data (R² = 0.61). The results indicate that our proposed collection protocol is suitable for measuring the LAI of sparse forest (LAI < 3–5 m²/m²).
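
PAR-based LAI estimation typically rests on inverting Beer-Lambert light extinction through the canopy: the below-to-above PAR ratio approximates the gap fraction exp(-k·LAI). A minimal sketch; the extinction coefficient k = 0.5 is a common default for spherical leaf angle distributions, not a value from the paper.

```python
import math

def lai_from_par(par_above, par_below, k=0.5):
    """Invert Beer-Lambert extinction to estimate LAI from paired
    above- and below-canopy PAR readings."""
    gap_fraction = par_below / par_above
    return -math.log(gap_fraction) / k
```

Sun flecks inflate `par_below` at individual points, which is why the protocol question above (how many readings, where) matters so much in forests.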

  2. Polarimetric Enhancements to Electro-Optical Aided Navigation Techniques

    DTIC Science & Technology

    2011-03-01

    [Garbled excerpt of thesis front matter.] Recoverable fragments: acknowledgments thanking AFRL/RYJT for introducing the author to polarimetry; list-of-figures entries for "Polarization Products Examples" and "Hue, Intensity, Saturation Pseudo-color"; and a figure caption: "Example output image from the DIRSIG software. This image shows the intensity of three glossy black objects being illuminated by the sun."
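
The polarization products named in the figure list (e.g., degree and angle of linear polarization) are conventionally derived from the first three Stokes parameters. A generic sketch, not code from the thesis:

```python
import numpy as np

def polarization_products(s0, s1, s2):
    """Degree of linear polarization (DoLP) and angle of polarization
    (AoP, radians) from Stokes parameters S0, S1, S2."""
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aop = 0.5 * np.arctan2(s2, s1)
    return dolp, aop
```

A hue/intensity/saturation pseudo-color display of the kind listed above typically maps AoP to hue, S0 to intensity, and DoLP to saturation.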

  3. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional BRDF into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification, which uses hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks, such as VGG, GoogLeNet, and AlexNet, are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-spectrum-based neural network to help us understand classification from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.
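
The 4D-to-2D reformulation described above can be sketched as a simple reshape: rows index the sampled incident directions, columns index the sampled reflected directions (per-channel images are handled analogously). The function name is illustrative.

```python
import numpy as np

def brdf_to_image(brdf4d):
    """Reshape BRDF samples indexed (theta_i, phi_i, theta_r, phi_r)
    into a 2D 'directional reflectance pattern' image whose rows span
    incident directions and whose columns span reflected directions."""
    n_ti, n_pi, n_tr, n_pr = brdf4d.shape
    return brdf4d.reshape(n_ti * n_pi, n_tr * n_pr)
```

The resulting 2D array can be fed to a standard image-classification CNN, which is what lets angular features stand in for spatial ones.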

  4. Voxel-Based LIDAR Analysis and Applications

    NASA Astrophysics Data System (ADS)

    Hagstrom, Shea T.

    One of the greatest recent changes in the field of remote sensing is the addition of high-quality Light Detection and Ranging (LIDAR) instruments. In particular, the past few decades have been greatly beneficial to these systems because of increases in data collection speed and accuracy, as well as a reduction in the costs of components. These improvements allow modern airborne instruments to resolve sub-meter details, making them ideal for a wide variety of applications. Because LIDAR uses active illumination to capture 3D information, its output is fundamentally different from other modalities. Despite this difference, LIDAR datasets are often processed using methods appropriate for 2D images and that do not take advantage of its primary virtue of 3-dimensional data. It is this problem we explore by using volumetric voxel modeling. Voxel-based analysis has been used in many applications, especially medical imaging, but rarely in traditional remote sensing. In part this is because the memory requirements are substantial when handling large areas, but with modern computing and storage this is no longer a significant impediment. Our reason for using voxels to model scenes from LIDAR data is that there are several advantages over standard triangle-based models, including better handling of overlapping surfaces and complex shapes. We show how incorporating system position information from early in the LIDAR point cloud generation process allows radiometrically-correct transmission and other novel voxel properties to be recovered. This voxelization technique is validated on simulated data using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, a first-principles based ray-tracer developed at the Rochester Institute of Technology. Voxel-based modeling of LIDAR can be useful on its own, but we believe its primary advantage is when applied to problems where simpler surface-based 3D models conflict with the requirement of realistic geometry. 
To show the voxel model's advantage, we apply it to several outstanding problems in remote sensing: LIDAR quality metrics, line-of-sight mapping, and multi-model fusion. Each of these applications is derived, validated, and examined in detail, and our results are compared with other state-of-the-art methods. In most cases the voxel-based methods demonstrate superior results and are able to derive information not available to existing methods. Realizing these improvements requires only a shift away from traditional 3D model generation, and our results give a small indicator of what is possible. Many examples of possible areas for future improvement and expansion of algorithms beyond the scope of our work are also noted.
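
A minimal sketch of the two core ideas discussed above: binning a point cloud into voxels, then applying Beer-Lambert attenuation along a ray's voxel traversal to get a radiometrically motivated transmission. Names and the uniform-grid assumption are illustrative.

```python
import numpy as np

def voxelize(points, origin, voxel_size):
    """Bin a LIDAR point cloud into voxels; returns {(i, j, k): count}."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    uniq, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(v): int(c) for v, c in zip(uniq, counts)}

def voxel_transmission(extinction, path_lengths):
    """Beer-Lambert transmission along a ray's voxel traversal:
    T = exp(-sum(sigma_v * l_v)) for per-voxel extinction sigma_v
    and in-voxel path lengths l_v."""
    return float(np.exp(-np.sum(extinction * path_lengths)))
```

Recovering transmission per voxel is the step that requires the sensor-position information mentioned above: without the ray geometry, a voxel only knows how many returns it contains, not how much energy passed through it.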

  5. Evaluation of sensor, environment and operational factors impacting the use of multiple sensor constellations for long term resource monitoring

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan

    Moderate resolution remote sensing data offers the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher-level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult if not impossible with real data, and therefore a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environment and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors.
The design of experiment and the sensitivity analysis conclude that the atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
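
The RossLi model mentioned above is a kernel-driven BRDF: R = f_iso + f_vol·K_vol + f_geo·K_geo, where the f coefficients are fit to observations. A sketch of the RossThick volumetric kernel in its standard form from the kernel-BRDF literature, not the paper's code:

```python
import numpy as np

def ross_thick_kernel(theta_i, theta_v, phi):
    """RossThick volumetric scattering kernel; angles in radians,
    phi is the relative azimuth between sun and view directions."""
    cos_xi = (np.cos(theta_i) * np.cos(theta_v)
              + np.sin(theta_i) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))  # scattering phase angle
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_i) + np.cos(theta_v))) - np.pi / 4

def rossli_reflectance(f_iso, f_vol, k_vol, f_geo=0.0, k_geo=0.0):
    """Kernel-driven RossLi BRDF: R = f_iso + f_vol*Kvol + f_geo*Kgeo."""
    return f_iso + f_vol * k_vol + f_geo * k_geo
```

Fitting the three f coefficients to DIRSIG-simulated directional reflectances is what allows the canopy's anisotropy to be summarized and then perturbed (e.g., under defoliation) in the sensitivity studies.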

  6. Using GIS databases for simulated nightlight imagery

    NASA Astrophysics Data System (ADS)

    Zollweg, Joshua D.; Gartley, Michael; Roskovensky, John; Mercier, Jeffery

    2012-06-01

    Proposed is a new technique for simulating nighttime scenes with realistically modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl [1], it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source OpenStreetMap (OSM) Geographic Information System (GIS) database is used. The database is comprised of many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions. The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development [2]. So, the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance of any single node type among the types tested in this investigation. Other node types, however, still provide useful supplementary data. Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were derived to account for discrepancies between simulated and observed radiance.
Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.
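
The node-density-to-radiance assumption can be sketched by rasterising node positions and applying a gain. The linear mapping below is a stand-in for the correction functions fit against real VIIRS data; names are illustrative.

```python
import numpy as np

def nightlight_map(node_xy, extent, n_bins=64, gain=1.0):
    """Rasterise GIS node density into a simulated upwelled-radiance map.

    node_xy: (N, 2) node coordinates
    extent:  [[xmin, xmax], [ymin, ymax]] raster bounds
    gain:    linear density-to-radiance factor (placeholder for the
             empirically fit correction functions)
    """
    counts, _, _ = np.histogram2d(node_xy[:, 0], node_xy[:, 1],
                                  bins=n_bins, range=extent)
    return gain * counts
```

Restricting `node_xy` to a single node type (e.g., residential streets) reproduces the per-type comparison described above.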

  7. Performance analysis of improved methodology for incorporation of spatial/spectral variability in synthetic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Scanlan, Neil W.; Schott, John R.; Brown, Scott D.

    2004-01-01

    Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery.
The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can be eventually implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
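
Of the three metric families, the GLCM-based ones are the most standard. A small sketch of a gray-level co-occurrence matrix and its contrast measure; the offset and level count are illustrative choices.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single pixel offset.
    img must hold integer gray levels in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    """GLCM contrast: co-occurrence-weighted squared gray-level difference."""
    i, j = np.indices(g.shape)
    return float(np.sum(g * (i - j) ** 2))
```

Comparing such statistics between a synthetic texture patch and its real counterpart gives the quantitative (rather than visual) similarity judgment the paper argues for.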

  9. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed.
In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
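
A minimal form of the kind of radiometric rendering equation discussed above, for one surface and one path: surface emission plus reflected ambient, attenuated by the atmospheric path, plus path radiance. The single-bounce, single-band form is a simplification; the physical constants are CODATA values.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr m)]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def apparent_radiance(emissivity, temp_k, wavelength_m,
                      ambient_radiance, tau_path, path_radiance):
    """Sensor-reaching radiance for an opaque surface:
    tau * (eps * L_bb(T) + (1 - eps) * L_ambient) + L_path."""
    surface = (emissivity * planck_radiance(wavelength_m, temp_k)
               + (1.0 - emissivity) * ambient_radiance)
    return tau_path * surface + path_radiance
```

Real simulators integrate this over the sensor band and hemisphere of ambient sources, and drive the temperature term from the heat-balance factors listed above.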

  10. Simulating the directional, spectral and textural properties of a large-scale scene at high resolution using a MODIS BRDF product

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.

    2016-10-01

    Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of the sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties that were used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data is used to separate materials and add textural variations to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high resolution texture and hyperspectral data. In addition to the MODIS-derived BRDF, target/class-specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at a high resolution.
The simulated scenes using this method for varying view angles show the expected variations in reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique for simulating real sensor data is evaluated by comparing the simulated data with Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After applying a scaling correction for the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5% in the NIR band and 10% in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended to other biomes (e.g. desert regions and agricultural regions) by using the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target-specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect sensor constellations such as the Sentinel and Landsat missions.
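
The shape-preserving magnitude adjustment described above reduces to a per-pixel scale factor; a sketch with illustrative names (the texture modulation term is an assumption about how the high-resolution data enters).

```python
import numpy as np

def adjust_brdf_magnitude(modis_brdf, modis_dhr, target_dhr, texture=1.0):
    """Rescale a MODIS BRDF so its magnitude matches a target
    directional-hemispherical reflectance (DHR) while preserving its
    angular shape; `texture` is an optional per-pixel modulation."""
    return modis_brdf * (target_dhr / modis_dhr) * texture
```

Because the ratio is applied uniformly across all view/illumination angles, the anisotropic shape sampled by MODIS survives into the high-resolution scene.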

  11. Combining high fidelity simulations and real data for improved small-footprint waveform lidar assessment of vegetation structure (Invited)

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; Wu, J.; Asner, G. P.

    2010-12-01

    Our understanding of vegetation complexity and biodiversity, from a remote sensing perspective, has evolved from 2D species diversity to also include 3D vegetation structural diversity. Attempts at using image-based approaches for structural assessment have met with reasonable success, but 3D remote sensing technologies, such as radar and light detection and ranging (lidar), are arguably more adept at sensing vegetation structure. While radar-derived structure metrics tend to break down at high biomass levels, novel waveform lidar systems present us with new opportunities for detailed and scalable structural characterization of vegetation. These sensors digitize the entire backscattered energy profile at high spatial and vertical resolutions and often at off-nadir angles. Research teams at Rochester Institute of Technology (RIT) and Carnegie Institution for Science have been using airborne data from the Carnegie Airborne Observatory (CAO) to assess vegetation structure and variation in savanna ecosystems in and around the Kruger National Park, South Africa. It quickly became evident that (i) pre-processing of small-footprint waveform data is a critical step prior to testing scientific hypotheses, (ii) a number of assumptions of how vegetation structure is expressed in these 3D signals need to be evaluated, and very importantly (iii) we need to re-evaluate our linkages between coarse in-field measurements, e.g., volume, biomass, leaf area index (LAI), and metrics derived from waveform lidar. Research has progressed to the stage where we have evaluated various pre-processing steps, e.g., deconvolution via the Wiener filter, Richardson-Lucy, and non-negative least squares algorithms, and the coupling of waveform voxels to tree structure in a simulation environment. This was done in the MODTRAN-based Digital Imaging and Remote Sensing Image Generation (DIRSIG) simulation environment, developed at RIT.
We generated "truth" cross-section datasets of detailed virtual trees in this environment and evaluated inversion approaches to tree structure estimation. Various outgoing pulse widths, tree structures, and a noise component were included as part of the simulation effort. Results have shown, for example, that the Richardson-Lucy algorithm outperforms other approaches in retrieving known structural information and that our assumption regarding the position of the ground surface needs re-evaluation; they have also shed light on interactions between herbaceous biomass and the waveform, and on the impact of outgoing pulse width on assessments. These efforts have gone a long way toward providing a solid foundation for the analysis and interpretation of actual waveform data from the savanna study area. We expect that newfound knowledge of waveform-target interactions from these simulations will also aid efforts to reconstruct 3D trees from real data and better describe the associated structural diversity. Results will be presented at the conference.

  12. Characterizing Resident Space Object Earthshine Signature Variability

    NASA Astrophysics Data System (ADS)

    Van Cor, Jared D.

    There are three major sources of illumination on objects in the near-Earth space environment: sunshine, moonshine, and Earthshine. For objects in this environment (satellites, orbital debris, etc.), known as Resident Space Objects (RSOs), the sun and the moon have consistently small illuminating solid angles and can be treated as point sources; this makes their incident illumination easily modeled. The Earth, on the other hand, has a large illuminating solid angle, is heterogeneous, and is in a constant state of change. The objective of this thesis was to characterize the impact and variability of observed RSO Earthshine on apparent magnitude signatures in the visible optical spectral region. A key component of this research was creating Earth object models incorporating the reflectance properties of the Earth. Two Earth objects were created: a homogeneous diffuse Earth object and a time-sensitive heterogeneous Earth object. The homogeneous diffuse Earth object has a reflectance equal to the average global albedo, a standard model used when modeling Earthshine. The time-sensitive heterogeneous Earth object was created with two material maps representative of the dynamic reflectance of the surface of the Earth, and a shell representative of the atmosphere. NASA's Moderate-resolution Imaging Spectroradiometer (MODIS) Earth-observing satellite product libraries, the MCD43C1 global surface BRDF map and the MOD06 global fractional cloud map, were utilized to create the material maps, and a hybridized version of the Empirical Line Method (ELM) was used to create the atmosphere. This dynamic Earth object was validated by comparing simulated color imagery of the Earth to that taken by NASA's Earth Polychromatic Imaging Camera (EPIC), located on the Deep Space Climate Observatory (DSCOVR), and by MODIS, located on the Terra satellite. The time-sensitive heterogeneous Earth object deviated from MODIS imagery by a spectral radiance root mean square error (RMSE) of +/-14.86 watts/(m2-sr-μm) over a sample of ROIs.
Further analysis using EPIC imagery found a total albedo difference of +0.03% and a cross-correlation of 0.656. Compared to EPIC imagery, the heterogeneous Earth model produced a reflected Earthshine radiance RMSE of +/-28 watts/(m2-sr-μm) incident on diffuse spherical RSOs, specular spherical RSOs, and diffuse flat-plate RSOs at an altitude of 1000 km; this resulted in an apparent magnitude error of +/-0.28. Furthermore, the heterogeneous Earth model produced a reflected Earthshine radiance RMSE of +/-68 watts/(m2-sr-μm) for specular flat-plate RSOs at an altitude of 1000 km; this resulted in an apparent magnitude error of +/-0.68. The Earth objects were used in a workflow with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool to explore the impact of a range of characteristic RSO geometries, geographies, orientations, and materials on the signatures from an RSO due to Earthshine. An apparent magnitude was calculated and used to quantify RSO Earthshine signature variability; this is discussed in terms of the RMSE and maximum deviations of visible RSO Earthshine apparent magnitude signatures, comparing the homogeneous Earth model to the heterogeneous Earth model. The homogeneous diffuse Earth object was shown to approximate visible RSO Earthshine apparent magnitude signatures from spheres with an RMSE in reflected Earthshine apparent magnitude of +/-0.4 and a maximum apparent magnitude difference of 1.09 when compared to the heterogeneous Earth model. Similarly, for diffuse flat plates, the visible RSO Earthshine apparent magnitude signature RMSE was shown to be +/-0.64, with a maximum apparent magnitude difference of 0.82. For specular flat plates, the visible RSO Earthshine apparent magnitude signature RMSE was shown to be +/-0.97, with a maximum apparent magnitude difference of 2.26.
This thesis explored only a portion of the parameter dependencies of Earthshine, but it has enabled a preliminary understanding of visible RSO Earthshine signature variability and its geometric dependence. This research has demonstrated the impact of Earth heterogeneity on the observed apparent magnitude signatures of RSOs illuminated by Earthshine, and the potential for error that comes with approximating the Earth as a diffuse homogeneous object.

  13. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate constant (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradability waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models predicted cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC. This indicates that decreasing the uncertainty of the input parameters will do more to improve model accuracy than adding phases or input parameters.
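The single-phase FOD calculation with weighted-average parameters can be sketched in a few lines. This is a simplified annual-increment form of a LandGEM-style model, with hypothetical tonnages and L0/k values, not code or data from the paper:

```python
import math

def weighted_params(streams):
    """Tonnage-weighted average of (L0, k) over waste streams.
    streams: list of (mass_tonnes, L0_m3_per_tonne, k_per_year) tuples."""
    total = sum(m for m, _, _ in streams)
    L0 = sum(m * L0_i for m, L0_i, _ in streams) / total
    k = sum(m * k_i for m, _, k_i in streams) / total
    return L0, k

def fod_methane(mass_by_year, L0, k, t):
    """Single-phase first-order-decay methane generation rate at year t,
    Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i))  [m^3 CH4 / year],
    summed over yearly waste placements M_i made at years t_i <= t."""
    return sum(k * L0 * m * math.exp(-k * (t - ti))
               for ti, m in mass_by_year.items() if t >= ti)

# Hypothetical two-stream landfill: 8000 t of slowly degrading waste and
# 2000 t of rapidly degrading waste, all placed in year 0.
L0, k = weighted_params([(8000, 100, 0.05), (2000, 60, 0.15)])
q5 = fod_methane({0: 10000}, L0, k, t=5)  # generation rate five years later
```

The weighted averaging collapses the two streams into a single (L0, k) pair, which is the translation the abstract evaluates against running each stream in its own phase.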

  14. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models, based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan, but the method is generally applicable to the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  15. A method of computer aided design with self-generative models in NX Siemens environment

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.

    2015-11-01

    Currently, in CAD/CAE/CAM systems it is possible to create 3D virtual design models that capture a certain amount of knowledge. Such models are especially useful for automating routine design tasks. They are known as self-generative or auto-generative models, and they can behave in an intelligent way. The main difference between auto-generative and fully parametric models lies in the auto-generative models' ability to self-organize. Here, self-organization means that, besides supporting automatic changes to a model's quantitative features, these models possess knowledge of how those changes should be made; moreover, they are able to change qualitative features according to specific knowledge. Despite their undoubted strengths, self-generative models are not often used in the constructional design process, mainly because of their usually great complexity: creating them is time- and labour-consuming and requires considerable investment outlays. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in the NX Siemens CAD/CAE/CAM software are presented. Five methods of preparing self-generative models in NX are described, based on: the parametric relations model, part families, the GRIP language, knowledge fusion, and the OPEN API mechanism. Examples of each type of self-generative model are given. These methods make the constructional design process much faster, and it is suggested to prepare such models whenever design variants need to be created. The conducted research on the usefulness of the elaborated models showed that they are highly recommended for automating routine tasks.
It is still difficult, however, to say which preparation method is preferable; the choice always depends on the complexity of the problem. The easiest approach is the parametric relations model, whilst the hardest is the OPEN API mechanism. From a knowledge-processing point of view, the best choice is knowledge fusion.

  16. Generative model selection using a scalable and size-independent complex network classifier

    NASA Astrophysics Data System (ADS)

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-01

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distributions, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem, and our goal is to select the model that is able to generate graphs similar to a given network instance. By generating synthetic networks with seven prominent generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
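The feature-based selection idea (extract topological statistics from the target network, then classify which generator best matches) can be illustrated with a toy sketch. The actual method trains a decision tree over many features from seven generators; here a single hand-set clustering threshold and a tiny hard-coded graph stand in for that pipeline:

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph.
    adj: dict mapping node -> set of neighbours."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(1 for u in nbrs for w in nbrs
                    if u < w and w in adj[u])  # edges among neighbours
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def pick_model(adj):
    """Toy decision rule: high clustering suggests a small-world-style
    generator, low clustering an Erdos-Renyi-style one. Illustrative
    only; the real classifier learns thresholds over many features."""
    return "small-world" if avg_clustering(adj) > 0.3 else "erdos-renyi"

# Triangle with a pendant node: strongly clustered core.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
model = pick_model(g)
```

A learned decision tree generalizes this single comparison into a cascade of such feature thresholds, one per internal node.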

  17. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    PubMed

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  18. Generative model selection using a scalable and size-independent complex network classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motallebi, Sadegh, E-mail: motallebi@ce.sharif.edu; Aliakbary, Sadegh, E-mail: aliakbary@ce.sharif.edu; Habibi, Jafar, E-mail: jhabibi@sharif.edu

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distributions, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem, and our goal is to select the model that is able to generate graphs similar to a given network instance. By generating synthetic networks with seven prominent generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named “Generative Model Selection for Complex Networks,” outperforms existing methods with respect to accuracy, scalability, and size-independence.

  19. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  20. FlexibleSUSY: A spectrum generator generator for supersymmetric models

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Park, Jae-hyeon; Stöckinger, Dominik; Voigt, Alexander

    2015-05-01

    We introduce FlexibleSUSY, a Mathematica and C++ package which generates a fast, precise C++ spectrum generator for any SUSY model specified by the user. The generated code is designed with both speed and modularity in mind, making it easy to adapt and extend with new features. The model is specified by supplying the superpotential, gauge structure and particle content in a SARAH model file; specific boundary conditions, e.g. at the GUT, weak or intermediate scales, are defined in a separate FlexibleSUSY model file. From these model files, FlexibleSUSY generates C++ code for self-energies, tadpole corrections, renormalization group equations (RGEs) and electroweak symmetry breaking (EWSB) conditions and combines them with numerical routines for solving the RGEs and EWSB conditions simultaneously. The resulting spectrum generator is then able to solve for the spectrum of the model, including loop-corrected pole masses, consistent with user-specified boundary conditions. The modular structure of the generated code allows individual components to be replaced with alternatives where available. FlexibleSUSY has been carefully designed to grow as alternative solvers and calculators are added. Predefined models include the MSSM, NMSSM, E6SSM, USSM, R-symmetric models and models with right-handed neutrinos.

  1. Short-Term Energy Outlook Model Documentation: Electricity Generation and Fuel Consumption Models

    EIA Publications

    2014-01-01

    The electricity generation and fuel consumption models of the Short-Term Energy Outlook (STEO) model provide forecasts of electricity generation from various types of energy sources and forecasts of the quantities of fossil fuels consumed for power generation. The structure of the electricity industry and the behavior of power generators vary between different areas of the United States. In order to capture these differences, the STEO electricity supply and fuel consumption models are designed to provide forecasts for the four primary Census regions.

  2. Electricity generation and transmission planning in deregulated power markets

    NASA Astrophysics Data System (ADS)

    He, Yang

    This dissertation addresses the long-term planning of power generation and transmission facilities in a deregulated power market. Three models with increasing complexity are developed, primarily for investment decisions in generation and transmission capacity. The models are presented in a two-stage decision context where generation and transmission capacity expansion decisions are made in the first stage, while power generation and transmission service fees are decided in the second stage. Uncertainties that exist in the second stage affect the capacity expansion decisions in the first stage. The first model assumes that the electric power market is not constrained by a transmission capacity limit. The second model, which includes transmission constraints, considers the interactions between generation firms and the transmission network operator. The third model assumes that the generation and transmission sectors make capacity investment decisions separately. These models result in a Nash-Cournot equilibrium among the unregulated generation firms, while the regulated transmission network operator supports the competition among generation firms. Several issues in the deregulated electric power market can be studied with these models, such as the market power of generation firms and the transmission network operator, uncertainties of the future market, and interactions between the generation and transmission sectors.
Results deduced from the developed models include (a) regulated transmission network operator will not reserve transmission capacity to gain extra profits; instead, it will make capacity expansion decisions to support the competition in the generation sector; (b) generation firms will provide more power supplies when there is more demand; (c) in the presence of future uncertainties, the generation firms will add more generation capacity if the demand in the future power market is expected to be higher; and (d) the transmission capacity invested by the transmission network operator depends on the characteristic of the power market and the topology of the transmission network. Also, the second model, which considers interactions between generation and transmission sectors, yields higher social welfare in the electric power market, than the third model where generation firms and transmission network operator make investment decisions separately.

  3. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
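The paper does not spell out its nonlinear variable scaling scheme, but a common choice in relief generation is logarithmic height compression, which attenuates large depths more than small ones while preserving fine detail. A minimal sketch of that general idea on a height field (the formula and parameters are illustrative assumptions, not the authors' method):

```python
import math

def compress_heights(heights, alpha=5.0, depth=1.0):
    """Nonlinearly compress a height field for bas-relief generation.
    Heights are normalised to [0, 1], then mapped through
    h' = depth * log(1 + alpha*h) / log(1 + alpha),
    which boosts shallow detail and squeezes large depths."""
    h_max = max(heights) or 1.0      # guard against an all-zero field
    norm = [h / h_max for h in heights]
    scale = math.log1p(alpha)
    return [depth * math.log1p(alpha * h) / scale for h in norm]

flat = compress_heights([0.0, 0.2, 1.0])
# Endpoints map to 0 and the full relief depth; midrange values are boosted.
```

Raising `alpha` flattens the result toward a bas-relief; a gentler curve (small `alpha`) retains more of the original depth range, approaching a high-relief.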

  4. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  5. A Research on the Generative Learning Model Supported by Context-Based Learning

    ERIC Educational Resources Information Center

    Ulusoy, Fatma Merve; Onen, Aysem Seda

    2014-01-01

    This study is based on the generative learning model which involves context-based learning. Using the generative learning model, we taught the topic of Halogens. This topic is covered in the grade 10 chemistry curriculum using activities which are designed in accordance with the generative learning model supported by context-based learning. The…

  6. Developing models for the prediction of hospital healthcare waste generation rate.

    PubMed

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase in the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors for the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the type of hospital. © The Author(s) 2015.
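A single-predictor linear model of the kind reported (waste generated versus number of inpatients) can be fit in closed form with ordinary least squares. A self-contained sketch with hypothetical data, not the study's measurements:

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def r_squared(x, y, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: daily solid waste (kg) vs. number of inpatients.
inpatients = [40, 55, 70, 90, 120]
waste_kg = [52, 68, 88, 110, 149]
a, b = fit_line(inpatients, waste_kg)
r2 = r_squared(inpatients, waste_kg, a, b)
```

An R² near 1 for the inpatient predictor, and a much lower one for outpatients, is exactly the contrast the abstract reports; the slope `b` is the marginal waste per additional inpatient.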

  7. Modular Analysis of Automobile Exhaust Thermoelectric Power Generation System

    NASA Astrophysics Data System (ADS)

    Deng, Y. D.; Zhang, Y.; Su, C. Q.

    2015-06-01

    In this paper, an automobile exhaust thermoelectric power generation system is packaged into a model with its own operating principles. The inputs are the engine speed and power, and the output is the power generated by the system. The model is divided into two submodels. One is the inlet temperature submodel, and the other is the power generation submodel. An experimental data modeling method is adopted to construct the inlet temperature submodel, and a theoretical modeling method is adopted to construct the power generation submodel. After modeling, simulation is conducted under various engine operating conditions to determine the variation of the power generated by the system. Finally, the model is embedded into a Honda Insight vehicle model to explore the energy-saving effect of the system on the vehicle under Economic Commission for Europe and cyc-constant_60 driving cycles.

  8. Mid-infrared rogue wave generation in chalcogenide fibers

    NASA Astrophysics Data System (ADS)

    Liu, Lai; Nagasaka, Kenshiro; Suzuki, Takenobu; Ohishi, Yasutake

    2017-02-01

    The supercontinuum generation and rogue wave generation in a step-index chalcogenide fiber are numerically investigated by solving the generalized nonlinear Schrödinger equation. Two noise models have been used to model the noise of the pump laser pulses, in order to investigate the consistency of noise modeling in rogue wave generation. The first noise model is 0.1% amplitude noise, which has been used in earlier reports of rogue wave generation. The second is the widely used one-photon-per-mode noise with phase-diffusion noise. The results show that these two commonly used noise models have good consistency in simulations of rogue wave generation. The results also show that if the pump laser pulses carry more noise, the chance of a rogue wave with a high peak power becomes higher. This is detrimental to supercontinuum generation with picosecond lasers in chalcogenide fibers.

  9. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  10. An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2005-01-01

    An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.

  11. Preserving Differential Privacy in Degree-Correlation based Graph Generation

    PubMed Central

    Wang, Yue; Wu, Xintao

    2014-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential-privacy-preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller-magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of the utility and privacy tradeoff. Empirical evaluations show that the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
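The perturbation step (learn parameters, add calibrated Laplace noise, regenerate) can be illustrated on the simplest member of the dK family, a degree histogram (1K statistics). The paper calibrates 2K noise via smooth sensitivity; this sketch uses the cruder global-sensitivity calibration for brevity, and all numbers are hypothetical:

```python
import math
import random

def laplace(scale, rng):
    """Laplace(0, scale) sample via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_degree_hist(hist, epsilon, sensitivity=4.0, seed=0):
    """Add Laplace(sensitivity/epsilon) noise to each bin of a degree
    histogram (the 1K statistics of the dK family), then clip to >= 0.
    Flipping one edge changes the degrees of two nodes, moving each
    between two bins, so at most four bins change by 1: hence the
    global sensitivity of 4 used here."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    scale = sensitivity / epsilon
    return [max(0.0, c + laplace(scale, rng)) for c in hist]

# Hypothetical histogram: counts of nodes with degree 1, 2, 3, 4.
noisy = private_degree_hist([120, 85, 40, 10], epsilon=1.0)
# The noisy counts then parameterise the dK generator instead of exact ones.
```

Smooth-sensitivity calibration, as used in the paper for 2K statistics, replaces the worst-case `sensitivity` constant with an instance-dependent bound, allowing much smaller noise on typical graphs.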

  12. Mathematical modeling to predict residential solid waste generation.

    PubMed

    Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de

    2008-01-01

One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, charge rates consistent with principles applied worldwide, and design a fair payment system in which households pay according to the amount of residential solid waste (RSW) they generate. The goal of this research was to establish mathematical models that correlate per capita generation of RSW with the following variables: education, household income, and number of residents. The work was based on data from a three-stage study on the generation, quantification and composition of residential waste in a Mexican city. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Models exploring combinations of the included variables were then established, and those showing the highest R² were selected. The tests applied were normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
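The workflow described here, multiple linear regression followed by diagnostic tests, can be sketched with hypothetical data; all numbers below are invented, not the study's:

```python
import numpy as np

# Hypothetical per-household observations (illustrative only):
# columns: education (years), income, residents; target: RSW per capita (kg/day).
X = np.array([[ 6,  250, 5], [ 9,  400, 4], [12,  600, 4],
              [12,  800, 3], [16,  900, 2], [16, 1200, 2]], dtype=float)
y = np.array([0.62, 0.71, 0.80, 0.92, 1.01, 1.15])

A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit

resid = y - A @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
# Durbin-Watson statistic: near 2 indicates no first-order autocorrelation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
```

The same fitted-versus-observed comparison the authors report would follow by predicting `A @ beta` on a held-out sample.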

  13. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

  14. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.
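The scheduling idea above can be caricatured in a few lines: once each C process is reduced to delay-annotated basic blocks, an abstract RTOS model only needs to dispatch those blocks and charge its own overhead. The process names, delays, and overhead value below are invented for illustration (the actual generator emits SystemC, not Python):

```python
def simulate(processes, switch_overhead=50):
    """Round-robin abstract-RTOS simulation over delay-annotated basic blocks.

    Each process is a list of (block_name, estimated_delay) pairs; the
    scheduler charges a fixed dispatch overhead per context switch and
    returns the total simulated time in cycles.
    """
    time = 0
    queues = [list(blocks) for blocks in processes]
    while any(queues):
        for q in queues:
            if q:
                time += switch_overhead      # RTOS dispatch overhead
                _block, delay = q.pop(0)
                time += delay                # annotated basic-block delay
    return time

# Hypothetical annotated processes (names and cycle counts are made up):
mp3 = [("parse_header", 120), ("huffman", 900), ("imdct", 2400)]
jpeg = [("dct", 1500), ("quantize", 400)]
total = simulate([mp3, jpeg])
```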

  15. Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos

    NASA Astrophysics Data System (ADS)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
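The generative mechanism, a patch dictionary plus a time-homogeneous Markov model over neighbouring patches, can be sketched as follows. The dictionary, transition probabilities, and strip-wise (1-D) simulation order are illustrative simplifications of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 binary patch dictionary (a stand-in for a learned one).
patches = np.array([
    np.zeros((3, 3)),             # background
    np.pad(np.ones((1, 1)), 1),   # small island
    np.ones((3, 3)),              # large blob
])

# Time-homogeneous Markov transitions between patch indices: background
# persists, islands tend to cluster (numbers are illustrative).
T = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.5,  0.2 ],
              [0.1, 0.4,  0.5 ]])

def sample_strip(n_patches, T, patches, rng):
    """Sequentially simulate a horizontal strip of patches (archipelago-like)."""
    idx = [0]
    for _ in range(n_patches - 1):
        idx.append(rng.choice(len(patches), p=T[idx[-1]]))
    return np.hstack([patches[i] for i in idx])

strip = sample_strip(10, T, patches, rng)
```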

  16. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  17. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    PubMed

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
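A minimal sketch of the two-state additive binary Markov chain: the probability of a high-wind step is the mean occupation plus weighted contributions from the previous steps. In the paper the weights are derived from the empirical autocorrelation function; here they are illustrative geometric weights:

```python
import random

def simulate_additive_binary_chain(n, p_mean, weights, seed=0):
    """Additive binary Markov chain for a two-state wind series.

    The probability of the 'high generation' state is the mean occupation
    p_mean plus memory contributions from the previous len(weights) steps.
    """
    rng = random.Random(seed)
    x = [1 if rng.random() < p_mean else 0 for _ in range(len(weights))]
    for t in range(len(weights), n):
        p = p_mean + sum(w * (x[t - k - 1] - p_mean)
                         for k, w in enumerate(weights))
        p = min(1.0, max(0.0, p))  # clamp to a valid probability
        x.append(1 if rng.random() < p else 0)
    return x

# Illustrative geometrically decaying memory; total weight < 1 keeps the
# chain stable (the paper would fit these to the autocorrelation function).
weights = [0.4 * 0.5 ** k for k in range(10)]
series = simulate_additive_binary_chain(5000, p_mean=0.35, weights=weights)
```

Resting-time distributions of high/low wind events then follow directly from the run lengths of 1s and 0s in `series`.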

  18. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    NASA Astrophysics Data System (ADS)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  19. Evaluation of Generation Alternation Models in Evolutionary Robotics

    NASA Astrophysics Data System (ADS)

    Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), developed through comparison with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; however, their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to Evolutionary Multi-Robotics (EMR), a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

  20. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
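Source-term vane models of this kind compute a side force from the local flow and from the vane's planform area and angle of incidence. The sketch below uses a thin-airfoil lift-curve slope (2π per radian) as the force law; the constant, the projection scheme, and all numbers are illustrative assumptions, not the Wind-US or OVERFLOW formulation:

```python
import numpy as np

def vane_source_term(u_local, rho, area, incidence_deg, vane_normal,
                     c_l_alpha=2 * np.pi):
    """Side-force source term for one vane (illustrative thin-airfoil model).

    Force magnitude: F = 0.5 * rho * |u|^2 * S * c_l_alpha * alpha, directed
    along the component of the vane normal perpendicular to the local
    velocity.  Assumes the vane normal is not parallel to the flow.
    """
    speed = np.linalg.norm(u_local)
    alpha = np.radians(incidence_deg)
    # Project the vane normal onto the plane perpendicular to the flow.
    n_perp = vane_normal - (vane_normal @ u_local) * u_local / speed**2
    n_perp /= np.linalg.norm(n_perp)
    return 0.5 * rho * speed**2 * area * c_l_alpha * alpha * n_perp

# Hypothetical vane: 2 cm^2 planform at 16 deg incidence in a 100 m/s stream.
f = vane_source_term(u_local=np.array([100.0, 0.0, 0.0]), rho=1.2,
                     area=2e-4, incidence_deg=16.0,
                     vane_normal=np.array([0.0, 1.0, 0.0]))
```

In a solver, this force (and the corresponding work term in the energy equation) would be distributed over the user-specified range of grid points, which is what lets the vane strength adjust automatically to the local flow.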

  1. Generative models for network neuroscience: prospects and promise

    PubMed Central

    Betzel, Richard F.

    2017-01-01

    Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modelling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here, we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, and utility in intuiting mechanisms, followed by a short history on their use in network science, broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including Caenorhabditis elegans, Drosophila, mouse, rat, cat, macaque and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modelling approach for network neuroscience. PMID:29187640
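As a toy illustration of the wiring rules reviewed here, the sketch below connects spatially embedded nodes with a probability that decays exponentially with distance, a common starting point in generative network modelling. The penalty form and parameter value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_spatial_network(coords, eta):
    """Toy generative wiring rule: connect each node pair with probability
    exp(-eta * distance).  Larger eta penalizes long connections more."""
    n = len(coords)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if rng.random() < np.exp(-eta * d):
                adj[i, j] = adj[j, i] = 1
    return adj

coords = rng.random((30, 2))  # 30 nodes placed in the unit square
adj = generate_spatial_network(coords, eta=3.0)
```

Model fitting would then compare summary statistics of `adj` (degree, clustering, edge length) against an empirical connectome, with cross-validation as the review emphasizes.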

  2. Thermal Texture Generation and 3d Model Reconstruction Using SFM and Gan

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Mizginov, V. A.

    2018-05-01

Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as the transformation of day to night and the generation of imagery in a different spectral range. In this paper, we propose a novel method for the generation of realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.

  3. Graphical Modeling of Shipboard Electric Power Distribution Systems

    DTIC Science & Technology

    1993-12-01

examined. A means of modeling a load for a synchronous generator is then shown which accurately interrelates the loading of the generator and the...frequency and voltage output of the machine. This load is then connected to the synchronous generator and two different scenarios are examined, including a...

4. A universal algorithm for an improved finite element mesh generation: mesh quality assessment in comparison to former automated mesh-generators and an analytic model.

    PubMed

    Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid

    2005-06-01

FE modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based algorithms, as opposed to contour-based ones, allow automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with an improved representation of the geometry achieved by displacing nodes onto the object surface. Models of an artificial 3D pipe section and a skull base were generated with different mesh densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation of the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error, used as a criterion for mesh quality, was 2.61×10⁻² N·mm, 2.46×10⁻² N·mm and 1.81×10⁻² N·mm for the unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D skull base resulted in the lowest element-energy error and volume error. This algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows automated and accurate modeling by combining the advantages of the voxel technique with improved surface modeling.

  5. An empirical model for prediction of household solid waste generation rate - A case study of Dhanbad, India.

    PubMed

    Kumar, Atul; Samadder, S R

    2017-10-01

Accurate prediction of the quantity of household solid waste generation is essential for effective management of municipal solid waste (MSW). In practice, modelling methods are often found useful for precise prediction of the MSW generation rate. In this study, two models are proposed that establish relationships between the household solid waste generation rate and socioeconomic parameters such as household size, total family income, education, occupation and fuel used in the kitchen. The multiple linear regression technique was applied to develop the two models, one for the prediction of the biodegradable MSW generation rate and the other for the non-biodegradable MSW generation rate, for individual households of the city of Dhanbad, India. The results showed that the coefficients of determination (R²) were 0.782 for the biodegradable waste generation rate and 0.676 for the non-biodegradable waste generation rate using the selected independent variables. Accuracy tests of the developed models showed convincing results, as the predicted values were very close to the observed values. Validation of the developed models with a new set of data indicated a good fit for actual prediction purposes, with predicted R² values of 0.76 and 0.64 for the biodegradable and non-biodegradable MSW generation rates, respectively.

  6. Generative Modeling for Machine Learning on the D-Wave

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thulasidasan, Sunil

    These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
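A classical stand-in for the sampling step makes the slides' pipeline concrete: the sketch below trains a tiny RBM with CD-1, where Gibbs-style sampling plays the role the D-Wave fills as a Boltzmann sampler. All sizes, rates and epoch counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny restricted Boltzmann machine trained with 1-step contrastive
# divergence (CD-1) on toy 6-bit patterns.
n_vis, n_hid, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)                       # positive-phase hidden probs
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
    v1 = sigmoid(h0 @ W.T + b_v)                      # one-step reconstruction
    ph1 = sigmoid(v1 @ W + b_h)                       # negative-phase hidden probs
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

v_rec = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)  # mean-field reconstruction
recon_err = float(np.mean((data - v_rec) ** 2))
```

Mapping this onto the D-Wave amounts to replacing the sampled negative phase with hardware samples from a chimera-restricted RBM, at an effective temperature that must itself be estimated, as the slides note.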

  7. Modeling and Simulation of U-tube Steam Generator

    NASA Astrophysics Data System (ADS)

    Zhang, Mingming; Fu, Zhongguang; Li, Jinyao; Wang, Mingfei

    2018-03-01

This article focuses on the modeling and simulation of a U-tube natural-circulation steam generator. The research is based on the simuworks system simulation software platform. Based on an analysis of the structural characteristics and operating principle of the U-tube steam generator, the model comprises 14 control volumes, including the primary side, secondary side, down channel and steam plenum. The model relies entirely on conservation laws, and it was applied in a set of simulation tests. The results show that the model properly simulates the dynamic response of the U-tube steam generator.

  8. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. It was concluded that a single grid generation methodology is not universally suited for all CFD applications due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  9. Generative electronic background music system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazurowski, Lukasz

In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is positioned among other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described in the paper. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, the host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  10. Nodal network generator for CAVE3

    NASA Technical Reports Server (NTRS)

    Palmieri, J. V.; Rathjen, K. A.

    1982-01-01

A new extension of the CAVE3 code was developed that automates the creation of a finite difference math model in digital form ready for input to the CAVE3 code. The new software, the Nodal Network Generator, is broken into two segments: one generates the model geometry using a Tektronix Tablet Digitizer, and the other generates the actual finite difference model and allows graphic verification using a Tektronix 4014 Graphic Scope. Use of the Nodal Network Generator is described.

  11. Self-Generated Analogical Models of Respiratory Pathways

    ERIC Educational Resources Information Center

    Lee, Yeung Chung

    2015-01-01

    Self-generated analogical models have emerged recently as alternatives to teacher-supplied analogies and seem to have good potential to promote deep learning and scientific thinking. However, studies of the ways and contexts in which students generate these models are still too limited to allow a fuller appraisal of these models' effectiveness in…

  12. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems, and is of major importance for urban planning, illegal-construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that involve building modelling. This study aims at automatic generation of 3D building models from airborne LiDAR data. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results obtained in this research verified that 3D building models can be generated successfully and automatically from raw LiDAR point cloud data.
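The hierarchical-rule idea can be sketched independently of TerraScan: classify each point first by height above ground, then split elevated points into smooth roofs versus rough vegetation. The thresholds and the per-point roughness attribute below are hypothetical, not the rules tuned in the study:

```python
def classify_points(points, ground_z, h_roof=2.5, roof_roughness=0.2):
    """Hypothetical hierarchical rules for point-based LiDAR classification.

    Each point is (x, y, z, local_roughness).  Rule 1: near-ground points are
    'ground'.  Rule 2: high, locally smooth points are 'building'.  Everything
    else falls through to 'vegetation'.  Thresholds are illustrative.
    """
    labels = []
    for x, y, z, roughness in points:
        h = z - ground_z
        if h < 0.3:
            labels.append("ground")
        elif h >= h_roof and roughness < roof_roughness:
            labels.append("building")
        else:
            labels.append("vegetation")
    return labels

pts = [(0, 0, 100.1, 0.05),   # low point -> ground
       (5, 5, 103.0, 0.05),   # high and smooth -> building roof
       (9, 2, 101.5, 0.60)]   # mid-height and rough -> vegetation
labels = classify_points(pts, ground_z=100.0)
```

In the actual workflow, the roof-labelled points would then be segmented per building and extruded into the final 3D models.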

  13. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  14. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

This paper presents a dynamic-model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method allows dynamic test requirements to be expressed in dynamic models, so that dynamic test requirement tracking can be generated easily. It can automatically produce standardized test requirements and test documentation, alleviating problems such as inconsistent document content and lack of integrity, and improving efficiency.

  15. Aspects of Mathematical Modelling of Pressure Retarded Osmosis

    PubMed Central

    Anissimov, Yuri G.

    2016-01-01

In power generation terms, a pressure retarded osmosis (PRO) energy generating plant on a river entering a sea or ocean is equivalent to a hydroelectric dam with a height of about 60 meters. Therefore, PRO can add significantly to existing renewable power generation capacity if the economic constraints of the method are resolved. PRO energy generation relies on a semipermeable membrane that is permeable to water and impermeable to salt. Mathematical modelling plays an important part in understanding flows of water and salt near and across semipermeable membranes and helps to optimize PRO energy generation. Therefore, the modelling can help realize the potential of PRO energy generation. In this work, a few aspects of mathematical modelling of the PRO process are reviewed and discussed. PMID:26848696
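The membrane-level modelling discussed in such reviews typically starts from the ideal power density relation, which can be stated compactly (notation assumed here: A is the membrane water permeability, Δπ the osmotic pressure difference, ΔP the applied hydraulic pressure difference; concentration-polarization corrections are omitted):

```latex
W \;=\; J_w\,\Delta P \;=\; A\,(\Delta\pi - \Delta P)\,\Delta P ,
\qquad
\frac{\partial W}{\partial(\Delta P)} = 0
\;\;\Longrightarrow\;\;
\Delta P = \frac{\Delta\pi}{2},
\qquad
W_{\max} = \frac{A\,\Delta\pi^{2}}{4}.
```

The more detailed models reviewed in the paper reduce the effective Δπ through internal and external concentration polarization, which is where most of the mathematical effort lies.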

  16. Simulation for Wind Turbine Generators -- With FAST and MATLAB-Simulink Modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Muljadi, E.; Jonkman, J.

This report presents the work done to develop generator and gearbox models in the Matrix Laboratory (MATLAB) environment and couple them to the National Renewable Energy Laboratory's Fatigue, Aerodynamics, Structures, and Turbulence (FAST) program. The goal of this project was to interface the superior aerodynamic and mechanical models of FAST to the excellent electrical generator models found in various Simulink libraries and applications. The scope was limited to Type 1, Type 2, and Type 3 generators and fairly basic gear-train models. Future work will include models of Type 4 generators and more-advanced gear-train models with increased degrees of freedom. As described in this study, implementation of the developed drivetrain model enables the software tool to be used in many ways. Several case studies are presented as examples of the many types of studies that can be performed using this tool.

  17. High-Fidelity Roadway Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation emphasize the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, as well as civil engineering road design principles. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
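To make the contrast with purely visual road generation concrete, the sketch below performs the most basic geometric step: sweeping a 2D centerline into a constant-width 3D ribbon by offsetting along local normals. Real design rules (crown, superelevation, curvature limits) are deliberately omitted; all names and numbers are illustrative:

```python
import numpy as np

def road_ribbon(centerline_xy, width, elevations):
    """Sweep a 2D centerline into a simple 3D road ribbon.

    Each vertex is offset by half the road width along the local normal
    (perpendicular to the finite-difference tangent), and the terrain
    elevation is attached as the z coordinate.
    """
    pts = np.asarray(centerline_xy, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])
    left = pts + 0.5 * width * normals
    right = pts - 0.5 * width * normals
    z = np.asarray(elevations, dtype=float).reshape(-1, 1)
    # Return (left_i, right_i) edge vertices, ready for a triangle strip.
    return np.hstack([left, z]), np.hstack([right, z])

center = [(0, 0), (10, 0), (20, 5), (30, 15)]
left_edge, right_edge = road_ribbon(center, width=7.0,
                                    elevations=[100, 100.5, 101, 102])
```

A civil-engineering-grade generator would replace the raw polyline with alignment elements (tangents, circular arcs, spiral transitions) before this sweep, which is precisely the fidelity gap the paper addresses.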

  18. Program Helps Generate Boundary-Element Mathematical Models

    NASA Technical Reports Server (NTRS)

    Goldberg, R. K.

    1995-01-01

    Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at the micro-mechanical (constituent) scale. Generates boundary-element models compatible with the BEST-CMS boundary-element code for analysis of the micromechanics of composite materials. Written in PATRAN Command Language (PCL).

  19. GeneratorSE: A Sizing Tool for Variable-Speed Wind Turbine Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Dykes, Katherine L

    This report documents a set of analytical models employed by the optimization algorithms within the GeneratorSE framework. The initial values and boundary conditions employed for the generation of the various designs and initial estimates for basic design dimensions, masses, and efficiency for the four different models of generators are presented and compared with empirical data collected from previous studies and some existing commercial turbines. These models include designs applicable for variable-speed, high-torque application featuring direct-drive synchronous generators and low-torque application featuring induction generators. In all of the four models presented, the main focus of optimization is electromagnetic design with the exception of permanent-magnet and wire-wound synchronous generators, wherein the structural design is also optimized. Thermal design is accommodated in GeneratorSE as a secondary attribute by limiting the winding current densities to acceptable limits. A preliminary validation of electromagnetic design was carried out by comparing the optimized magnetic loading against those predicted by numerical simulation in FEMM4.2, a finite-element software for analyzing electromagnetic and thermal physics problems for electrical machines. For direct-drive synchronous generators, the analytical models for the structural design are validated by static structural analysis in ANSYS.

  20. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
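The core of such a technique, extracting the Jacobian matrices A = df/dx and B = df/du of a nonlinear system xdot = f(x, u) by finite differences, can be sketched as follows. The pendulum example and all names are illustrative, not the NASA program's actual interface.

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Build A = df/dx and B = df/du of xdot = f(x, u) at (x0, u0) by
    central finite differences, the standard way a linear perturbation
    model is extracted from a nonlinear simulation. Sketch only; the
    described program also computes trim and solves the linear ODEs."""
    n, m = len(x0), len(u0)
    def col(g, base, j):
        # Central difference along coordinate j of the base point.
        hi, lo = list(base), list(base)
        hi[j] += eps
        lo[j] -= eps
        fh, fl = g(hi), g(lo)
        return [(fh[i] - fl[i]) / (2 * eps) for i in range(n)]
    A_cols = [col(lambda x: f(x, u0), x0, j) for j in range(n)]
    B_cols = [col(lambda u: f(x0, u), u0, j) for j in range(m)]
    # Transpose the column lists into row-major matrices.
    A = [[A_cols[j][i] for j in range(n)] for i in range(n)]
    B = [[B_cols[j][i] for j in range(m)] for i in range(n)]
    return A, B

# Pendulum with torque input: xdot = [x2, -sin(x1) + u].
f = lambda x, u: [x[1], -math.sin(x[0]) + u[0]]
A, B = linearize(f, [0.0, 0.0], [0.0])
```

At the hanging equilibrium this recovers A = [[0, 1], [-1, 0]] and B = [[0], [1]], matching the analytic linearization.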

  1. GAMETES: a fast, direct algorithm for generating pure, strict, epistatic models with random architectures.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H

    2012-10-01

    Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. 
While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
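The pure/strict property GAMETES targets can be illustrated with a hand-built two-locus penetrance table. This is not the GAMETES algorithm itself, just a toy instance of the model class it generates: disease risk depends only on the joint genotype, so neither SNP has a marginal effect on its own.

```python
import random

# Purely/strictly epistatic 2-locus penetrance table: risk depends on
# whether exactly one locus is heterozygous (an XOR-like pattern).
PEN = [[0.0 if (g1 == 1) == (g2 == 1) else 0.6
        for g2 in range(3)] for g1 in range(3)]

def genotype_freqs(maf):
    """Hardy-Weinberg genotype frequencies for one biallelic SNP."""
    p, q = 1 - maf, maf
    return [p * p, 2 * p * q, q * q]

def marginal_penetrance(pen, maf, locus):
    """Penetrance of each genotype at one locus, averaged over the other;
    equal values across genotypes mean no single-locus (marginal) effect."""
    w = genotype_freqs(maf)
    if locus == 0:
        return [sum(pen[g1][g2] * w[g2] for g2 in range(3)) for g1 in range(3)]
    return [sum(pen[g1][g2] * w[g1] for g1 in range(3)) for g2 in range(3)]

def simulate_dataset(pen, maf, n, seed=1):
    """Draw genotypes under HWE and assign case/control status from pen,
    mirroring the simple dataset-simulation idea described above."""
    rng = random.Random(seed)
    w = genotype_freqs(maf)
    draw = lambda: rng.choices([0, 1, 2], weights=w)[0]
    return [(g1, g2, int(rng.random() < pen[g1][g2]))
            for g1, g2 in ((draw(), draw()) for _ in range(n))]
```

With a minor allele frequency of 0.5 the marginal penetrance is 0.3 for every genotype at either locus, so the association is only detectable when both loci are modeled jointly, which is exactly why such models are a worst case for detection methods.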

  2. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.

  3. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2015-09-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data; and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. 
    Consequently, the use of low and high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than constrained multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic (alkane, aromatic) and sesquiterpenes and decreased SOA contributions from isoprene and monoterpene relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model in which "ageing" reactions are added on top of the existing two-product parameterization is considered. This hybrid scheme formed at least three times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these "hybrid" multi-generational schemes should be used with great caution in regional models.
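The two-product baseline that the SOM is compared against rests on absorptive partitioning: each lumped product with total concentration C_i and saturation concentration C*_i contributes a particle-phase fraction (1 + C*_i/C_OA)^-1, and the organic aerosol mass C_OA must be solved self-consistently. A toy fixed-point solver (values and names illustrative; the UCD/CIT and SOM codes are far more detailed) looks like:

```python
def partition_two_product(c_total, c_star, seed=0.0, iters=200):
    """Solve the absorptive-partitioning mass balance of a classic
    two-product SOA fit: C_OA = seed + sum_i C_i / (1 + C*_i / C_OA),
    by fixed-point iteration. Concentrations in ug/m3. Illustrative only."""
    c_oa = seed + 0.5 * sum(c_total) + 1e-9   # initial guess
    for _ in range(iters):
        c_oa = seed + sum(ct / (1.0 + cs / c_oa)
                          for ct, cs in zip(c_total, c_star))
    return c_oa

# Two lumped products: one semivolatile, one effectively nonvolatile,
# condensing onto 1 ug/m3 of pre-existing absorbing seed.
c_oa = partition_two_product([2.0, 1.0], [10.0, 0.01], seed=1.0)
```

The nonlinearity of this balance is one reason SOA mass responds so strongly to the chosen yields: more absorbing mass pulls additional semivolatile material into the particle phase.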

  4. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. 
Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the use of low and high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic (alkane, aromatic) and sesquiterpenes and decreased SOA contributions from isoprene and monoterpene relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model in which ageing reactions are added on top of the existing two-product parameterization is considered. This hybrid scheme formed at least 3 times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these hybrid multi-generational schemes should be used with great caution in regional models.

  5. Stochastic Generation of Monthly Rainfall Data

    NASA Astrophysics Data System (ADS)

    Srikanthan, R.

    2009-03-01

    Monthly rainfall data is generally needed in the simulation of water resources systems, and in the estimation of water yield from large catchments. Monthly streamflow data generation models are usually applied to generate monthly rainfall data, but this presents problems for most regions, which have significant months of no rainfall. In an earlier study, Srikanthan et al. (J. Hydrol. Eng., ASCE 11(3) (2006) 222-229) recommended the modified method of fragments to disaggregate the annual rainfall data generated by a first-order autoregressive model. The main drawback of this approach is the occurrence of similar patterns when only a short length of historic data is available. Porter and Pink (Hydrol. Water Res. Symp. (1991) 187-191) used synthetic fragments from a Thomas-Fiering monthly model to overcome this drawback. As an alternative, a new two-part monthly model is nested in an annual model to generate monthly rainfall data which preserves both the monthly and annual characteristics. This nested model was applied to generate rainfall data from seven rainfall stations located in eastern and southern parts of Australia, and the results showed that the model performed satisfactorily.
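The "two-part" structure of such monthly generators, an occurrence process followed by an amount process, can be sketched as follows. The parameters and the exponential amount distribution are illustrative assumptions; the published nested model additionally rescales each year so the annual statistics from a first-order autoregressive annual model are preserved.

```python
import random

def generate_monthly(p_wet, mean_wet, years, seed=42):
    """Two-part monthly rainfall generator: part 1 decides whether a
    month is wet (Bernoulli with month-specific probability p_wet[m]);
    part 2 draws the depth of a wet month from an exponential
    distribution with month-specific mean mean_wet[m]. Months with no
    rainfall, the case that breaks streamflow-style models, appear as
    explicit zeros."""
    rng = random.Random(seed)
    series = []
    for _ in range(years):
        for m in range(12):
            if rng.random() < p_wet[m]:
                series.append(rng.expovariate(1.0 / mean_wet[m]))
            else:
                series.append(0.0)
    return series

p_wet = [0.9] * 6 + [0.3] * 6        # wet half-year, dry half-year
mean_wet = [80.0] * 6 + [20.0] * 6   # mm per wet month
rain = generate_monthly(p_wet, mean_wet, years=50)
```

Seasonality comes from the month-indexed parameters, which is why this structure preserves monthly characteristics that a single annual model cannot.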

  6. Stochastic Hourly Weather Generator HOWGH: Validation and its Use in Pest Modelling under Present and Future Climates

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Hirschi, M.; Spirig, C.

    2014-12-01

    To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (under various settings of its underlying model) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed by observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on Global or Regional Climate Models. To demonstrate this feature, results of codling moth simulations for a future climate will be shown. 
Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), and VALUE (COST ES 1102 action).

  7. Positive and negative generation effects in source monitoring.

    PubMed

    Riefer, David M; Chien, Yuchin; Reimer, Jason F

    2007-10-01

    Research is mixed as to whether self-generation improves memory for the source of information. We propose the hypothesis that positive generation effects (better source memory for self-generated information) occur in reality-monitoring paradigms, while negative generation effects (better source memory for externally presented information) tend to occur in external source-monitoring paradigms. This hypothesis was tested in an experiment in which participants read or generated words, followed by a memory test for the source of each word (read or generated) and the word's colour. Meiser and Bröder's (2002) multinomial model for crossed source dimensions was used to analyse the data, showing that source memory for generation (reality monitoring) was superior for the generated words, while source memory for word colour (external source monitoring) was superior for the read words. The model also revealed the influence of strong response biases in the data, demonstrating the usefulness of formal modelling when examining generation effects in source monitoring.

  8. A model of oil-generation in a waterlogged and closed system

    NASA Astrophysics Data System (ADS)

    Zhigao, He

    This paper presents a new model of the combined effects governing oil generation in a waterlogged, closed system. The model is based on information about oil in high-pressure layers (including gas dissolved in oil), marsh gas and its fermentative solution, fermentation processes and mechanisms, gaseous hydrocarbons released from carbonate rocks by acid treatment, oil-field water, recent and ancient sediments, and simulation experiments on artificial marsh gas and biological action. The model differs completely from the theory of oil generation by thermal degradation of kerogen: it stresses the combined effects of oil generation in special waterlogged and closed geological systems, the importance of pressure in oil-forming processes, and direct oil generation by micro-organisms, which is treated as a distinctive biochemical reaction. Another feature of this model is that the generation, migration and accumulation of petroleum are considered as a whole.

  9. The Rack-Gear Tool Generation Modelling. Non-Analytical Method Developed in CATIA, Using the Relative Generating Trajectories Method

    NASA Astrophysics Data System (ADS)

    Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.

    2016-11-01

    Modelling the family of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known from direct measurement as a coordinate matrix, aims to determine the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, the geometrical generating error, the basis of the total error, can be determined. Modelling the generation process highlights the potential errors of the generating tool, so that its profile can be corrected before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new method, namely the method of "relative generating trajectories". The analytical foundation is presented, together with applications to known models of rack-gear type tools used on Maag gear-cutting machines.

  10. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  11. Multi-Scale Human Respiratory System Simulations to Study Health Effects of Aging, Disease, and Inhaled Substances

    NASA Astrophysics Data System (ADS)

    Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres

    2006-11-01

    Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
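The 1D branching step can be sketched with a symmetric, Weibel-like homothety rule in which every branch of one generation splits into two geometrically scaled children. The ratio 0.79 and the tracheal dimensions are illustrative values; the paper's algorithms are more elaborate and handle asymmetric trees.

```python
def build_airway_tree(n_gen, d0=1.8, l0=12.0, ratio=0.79):
    """Generate a symmetric 1D airway tree as a list of
    (generation, diameter_cm, length_cm) branches. Each generation g
    contains 2**g branches whose diameter and length shrink by a fixed
    homothety ratio. In the framework described above, these 1D
    generations continue where the 3D CT-based mesh stops, down to a
    0D acinus model at each terminal bronchiole."""
    branches = [(0, d0, l0)]
    frontier = 1
    for g in range(1, n_gen + 1):
        d = d0 * ratio ** g
        l = l0 * ratio ** g
        frontier *= 2                 # symmetric bifurcation doubles the count
        branches.extend([(g, d, l)] * frontier)
    return branches

tree = build_airway_tree(5)
terminals = [b for b in tree if b[0] == 5]
```

Because branch counts double per generation, the choice of the 3D/1D transition depth n trades geometric fidelity against an exponentially growing model.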

  12. Overpressures in the Uinta Basin, Utah: Analysis using a three-dimensional basin evolution model

    NASA Astrophysics Data System (ADS)

    McPherson, Brian J. O. L.; Bredehoeft, John D.

    2001-04-01

    High pore fluid pressures, approaching lithostatic, are observed in the deepest sections of the Uinta basin, Utah. Geologic observations and previous modeling studies suggest that the most likely cause of observed overpressures is hydrocarbon generation. We studied Uinta overpressures by developing and applying a three-dimensional, numerical model of the evolution of the basin. The model was developed from a public domain computer code, with addition of a new mesh generator that builds the basin through time, coupling the structural, thermal, and hydrodynamic evolution. Also included in the model are in situ hydrocarbon generation and multiphase migration. The modeling study affirmed oil generation as an overpressure mechanism, but also elucidated the relative roles of multiphase fluid interaction, oil density and viscosity, and sedimentary compaction. An important result is that overpressures by oil generation create conditions for rock fracturing, and associated fracture permeability may regulate or control the propensity to maintain overpressures.

  13. Leading Generative Groups: A Conceptual Model

    ERIC Educational Resources Information Center

    London, Manuel; Sobel-Lojeski, Karen A.; Reilly, Richard R.

    2012-01-01

    This article presents a conceptual model of leadership in generative groups. Generative groups have diverse team members who are expected to develop innovative solutions to complex, unstructured problems. The challenge for leaders of generative groups is to balance (a) establishing shared goals with recognizing members' vested interests, (b)…

  14. Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André

    2016-01-01

    Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…

  15. Exploring the Processes of Generating LOD (0-2) Citygml Models in Greater Municipality of Istanbul

    NASA Astrophysics Data System (ADS)

    Buyuksalih, I.; Isikdag, U.; Zlatanova, S.

    2013-08-01

    3D models of cities, visualised and explored in 3D virtual environments, have been available for several years, and a large number of impressive, realistic 3D models are now regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML, an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One very attractive characteristic of the model is its support for five levels of detail (LOD), ranging from a coarse 2.5D model (LOD0) to a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the intended use, there are also some common tasks (i.e. common-denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and representing them formally with process modeling diagrams.
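The LOD1 stage of such a pipeline is essentially footprint extrusion: a 2D building footprint plus a single height attribute becomes a block model. A minimal sketch (ignoring ring-orientation checks, holes, and CityGML XML serialization) is:

```python
def extrude_lod1(footprint, height):
    """Turn a 2D building footprint (counter-clockwise ring of (x, y)
    points) into an LOD1 block model: one floor polygon, one roof
    polygon, and one rectangular wall per footprint edge. This mirrors
    the common footprint-plus-height LOD1 generation step; real
    pipelines add semantics and write the result as CityGML."""
    floor = [(x, y, 0.0) for x, y in footprint]
    # Reverse the roof ring so its normal points up (outward).
    roof = [(x, y, height) for x, y in reversed(footprint)]
    walls = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return {"floor": floor, "roof": roof, "walls": walls}

block = extrude_lod1([(0, 0), (10, 0), (10, 6), (0, 6)], height=15.0)
```

LOD2 generation then refines the flat roof into roof shapes, which is where the municipal pipelines discussed above diverge most.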

  16. High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation

    NASA Technical Reports Server (NTRS)

    Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.

    1999-01-01

    The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.

  17. The application of a generativity model for older adults.

    PubMed

    Ehlman, Katie; Ligon, Mary

    2012-01-01

    Generativity is a concept first introduced by Erik Erikson as a part of his psychosocial theory, which outlines eight stages of development in the human life. Generativity versus stagnation is the main developmental concern of middle adulthood; however, generativity is also recognized as an important theme in the lives of older adults. Building on the work of Erikson, McAdams and de St. Aubin (1992) developed a model explaining the generative process. The aims of this article are: (a) to explore the relationship between generativity and older adults as it appears in the research literature; and (b) to examine McAdams's model and use it to explain the role of generativity in older adults who share life stories with gerontology students through an oral history project.

  18. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
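The hierarchical composition rule, in which an annotated block's predicted cost is its own analytical expression plus the totals of its nested blocks, can be sketched as follows. The dictionary structure and the toy solver model are hypothetical illustrations, not Palm's actual representation (Palm synthesizes executable programs).

```python
def evaluate_model(node, params):
    """Evaluate a hierarchical performance model: each node carries an
    analytical expression (a function of the model parameters) for its
    own cost, and a node's total is its own cost plus its children's
    totals, mirroring how annotated code blocks nest."""
    own = node.get("expr", lambda p: 0.0)(params)
    return own + sum(evaluate_model(c, params)
                     for c in node.get("children", []))

# A toy model of a solver: setup, then n iterations of compute + exchange.
model = {
    "name": "solver",
    "children": [
        {"name": "setup", "expr": lambda p: 5.0},
        {"name": "loop", "children": [
            {"name": "compute", "expr": lambda p: p["n"] * p["t_flop"]},
            {"name": "exchange", "expr": lambda p: p["n"] * p["t_msg"]},
        ]},
    ],
}
t = evaluate_model(model, {"n": 100, "t_flop": 0.02, "t_msg": 0.01})
```

Because the model tree mirrors the source structure, validating a leaf against a measured profile localizes any discrepancy to a specific code block.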

  19. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  20. Carbon deposition model for oxygen-hydrocarbon combustion. Task 6: Data analysis and formulation of an empirical model

    NASA Technical Reports Server (NTRS)

    Makel, Darby B.; Rosenberg, Sanders D.

    1990-01-01

    The formation and deposition of carbon (soot) was studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state-of-the-art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.

  1. Towards a Model of Human Resource Solutions for Achieving Intergenerational Interaction in Organisations

    ERIC Educational Resources Information Center

    McGuire, David; By, Rune Todnem; Hutchings, Kate

    2007-01-01

    Purpose: Achieving intergenerational interaction and avoiding conflict is becoming increasingly difficult in a workplace populated by three generations--Baby Boomers, Generation X-ers and Generation Y-ers. This paper presents a model and proposes HR solutions towards achieving co-operative generational interaction. Design/methodology/approach:…

  2. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  3. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
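
    The zero-order phenotypes named in the abstract lend themselves to a small illustration. The sketch below is not the paper's formal task-model pattern; it merely applies each phenotype to a toy normative action sequence, and the action names are invented.

```python
# Illustrative sketch of Hollnagel's zero-order phenotypes of erroneous
# action, each applied to a normative action sequence. A model checker
# would explore all such variants; here we just construct them.

def omit(seq, i):            # omission: skip action i
    return seq[:i] + seq[i + 1:]

def repeat(seq, i):          # repetition: perform action i twice
    return seq[:i + 1] + [seq[i]] + seq[i + 1:]

def jump(seq, i, j):         # jump: skip ahead from action i to action j
    return seq[:i] + seq[j:]

def intrude(seq, i, act):    # intrusion: insert an unrelated action at i
    return seq[:i] + [act] + seq[i:]

normative = ["select_mode", "enter_dose", "confirm", "fire_beam"]
print(omit(normative, 2))    # e.g., forgetting to confirm
```

    Composing these operators in sequence yields the higher-order phenotypes the abstract mentions.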

  4. Observation and Modeling of Tsunami-Generated Gravity Waves in the Earth’s Upper Atmosphere

    DTIC Science & Technology

    2015-10-08

    Report documentation excerpt (SF-298 form residue removed): the project builds a compatible set of models that calculate the spectrum of atmospheric gravity waves (GWs) excited by a tsunami, using ocean model data as input. Approved for public release; distribution is unlimited.

  5. Free-piston engine linear generator for hybrid vehicles modeling study

    NASA Astrophysics Data System (ADS)

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  6. Learning a generative model of images by factoring appearance and shape.

    PubMed

    Le Roux, Nicolas; Heess, Nicolas; Shotton, Jamie; Winn, John

    2011-03-01

    Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.

  7. Model and Scenario Variations in Predicted Number of Generations of Spodoptera litura Fab. on Peanut during Future Climate Change Scenario

    PubMed Central

    Srinivasa Rao, Mathukumalli; Swathi, Pettem; Rama Rao, Chitiprolu Anantha; Rao, K. V.; Raju, B. M. K.; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka

    2015-01-01

    The present study estimates the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on the peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz., BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, under three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four climate periods, viz., Baseline-1975, Near future (NF)-2020, Distant future (DF)-2050 and Very Distant future (VDF)-2080. It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1–2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18–22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and reduced generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase due to the projected increase in temperatures in future climate periods. PMID:25671564
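
    The growing degree days approach mentioned above can be illustrated with a toy calculation. The base temperature and per-generation degree-day requirement below are placeholder values, not the thermal constants of S. litura.

```python
# Back-of-envelope sketch of the growing-degree-day (GDD) approach to
# counting insect generations. T_BASE and GDD_PER_GEN are illustrative
# assumptions, not published values for S. litura on peanut.

def daily_gdd(t_max, t_min, t_base=10.0):
    # heat accumulated in one day above the developmental threshold
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def generations(t_max_series, t_min_series, gdd_per_gen=500.0):
    # total seasonal heat divided by the heat one generation requires
    total = sum(daily_gdd(hi, lo) for hi, lo in zip(t_max_series, t_min_series))
    return total / gdd_per_gen

# warmer projected temperatures accumulate GDD faster -> more generations
baseline = generations([30] * 120, [20] * 120)   # 120-day crop season
warmer   = generations([32] * 120, [22] * 120)
print(baseline, warmer)
```

    This is the mechanism behind the predicted 1–2 extra generations: a uniform temperature rise shortens each generation, so more fit within the same crop season.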

  8. Parametric vs. non-parametric daily weather generator: validation and comparison

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin

    2016-04-01

    As climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agricultural, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. 
The tests will be based on observational weather series from several European stations available from the ECA&D database.
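
    The parametric structure attributed to M&Rfi above (a Markov chain for wet/dry occurrence plus Gamma-distributed amounts on wet days) can be sketched as follows. The transition probabilities and Gamma parameters are illustrative stand-ins, not calibrated values from the study.

```python
# Minimal sketch of a parametric daily precipitation generator:
# first-order Markov chain for occurrence, Gamma-distributed amounts.
# Parameter values are invented for illustration, not calibrated.
import random

random.seed(0)

P_WET_GIVEN_DRY = 0.25   # occurrence-chain transition probabilities
P_WET_GIVEN_WET = 0.60
SHAPE, SCALE = 0.8, 6.0  # Gamma parameters for wet-day amounts (mm)

def generate_precip(n_days):
    series, wet = [], False
    for _ in range(n_days):
        p = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = random.random() < p
        series.append(random.gammavariate(SHAPE, SCALE) if wet else 0.0)
    return series

sim = generate_precip(365)
print(sum(1 for x in sim if x > 0))  # number of wet days in the synthetic year
```

    A non-parametric generator of the GoMeZ type would instead resample nearest-neighbour days from the observed record, avoiding the distributional assumptions made here.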

  9. River Devices to Recover Energy with Advanced Materials (River DREAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMahon, Daniel P.

    2013-07-03

    The purpose of this project is to develop a generator called a Galloping Hydroelectric Energy Extraction Device (GHEED). It uses a galloping prism to convert water flow into linear motion. This motion is converted into electricity via a dielectric elastomer generator (DEG). The galloping mechanism and the DEG are combined to create a system to effectively generate electricity. This project has three research objectives: 1. Oscillator development and design: characterize galloping behavior, evaluate the effect of control surface shape change on oscillator performance, and demonstrate shape change with water flow change. 2. Dielectric elastomer generator (DEG) characterization and modeling: characterize and model the performance of the DEG based on the oscillator design. 3. GHEED system modeling and integration: create numerical models for construction of a system performance model and define operating capabilities for this approach. Accomplishing these three objectives will result in the creation of a model that can be used to fully define the operating parameters and performance capabilities of a generator based on the GHEED design. This information will be used in the next phase of product development, the creation of an integrated laboratory-scale generator to confirm model predictions.

  10. Model Based Document and Report Generation for Systems Engineering

    NASA Technical Reports Server (NTRS)

    Delp, Christopher; Lam, Doris; Fosse, Elyse; Lee, Cin-Young

    2013-01-01

    As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.

  11. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from the large volume of historical PV power generation data and improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on a neural network. Based on three weather types (sunny, cloudy and rainy days), historical data samples are screened by clustering analysis. After screening, BP neural network prediction models are trained using the screened data. The six photovoltaic power generation prediction models before and after data screening are then compared. Results show that combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
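
    The screening idea, grouping historical days by weather type so a separate predictor is trained per group, can be illustrated without the BP network itself. In the sketch below a nearest-centroid assignment and a per-cluster mean stand in for k-means and the neural network; all data values are invented.

```python
# Hedged sketch of cluster-then-predict sample screening for PV forecasting.
# A real implementation would use k-means plus a BP (MLP) network; here a
# fixed-centroid assignment and per-cluster mean are simple stand-ins.

def nearest(centroids, x):
    # index of the centroid closest to feature value x
    return min(range(len(centroids)), key=lambda k: abs(centroids[k] - x))

# historical (irradiance, power) pairs -- invented illustrative data
history = [(0.9, 45.0), (0.85, 42.0), (0.5, 22.0), (0.45, 20.0), (0.1, 4.0)]
centroids = [0.9, 0.5, 0.1]          # sunny / cloudy / rainy

models = {}
for k in range(len(centroids)):
    pts = [p for irr, p in history if nearest(centroids, irr) == k]
    models[k] = sum(pts) / len(pts)  # per-cluster predictor (mean power)

def predict(irradiance):
    # route the query day to its weather-type model, then predict
    return models[nearest(centroids, irradiance)]

print(predict(0.88))
```

    Training one model per weather regime keeps dissimilar days (e.g., rainy) from diluting the sunny-day fit, which is the precision gain the abstract reports.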

  12. Model based document and report generation for systems engineering

    NASA Astrophysics Data System (ADS)

    Delp, C.; Lam, D.; Fosse, E.; Lee, Cin-Young

    As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.

  13. A Point Rainfall Generator With Internal Storm Structure

    NASA Astrophysics Data System (ADS)

    Marien, J. L.; Vandewiele, G. L.

    1986-04-01

    A point rainfall generator is a probabilistic model for the time series of rainfall as observed in one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The present generator attempts to model all features of the rainfall time series which are important for flood studies as accurately as possible. The original aspects of the model are on the one hand the way in which storms are defined and on the other hand the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure. This fact justifies better the extrapolation of this model to severe storms for which the data are very sparse. This can be important when using the model to simulate severe flood events.

  14. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.

  15. Timescales and Mechanisms of Sigh-Like Bursting and Spiking in Models of Rhythmic Respiratory Neurons.

    PubMed

    Wang, Yangyang; Rubin, Jonathan E

    2017-12-01

    Neural networks generate a variety of rhythmic activity patterns, often involving different timescales. One example arises in the respiratory network in the pre-Bötzinger complex of the mammalian brainstem, which can generate the eupneic rhythm associated with normal respiration as well as recurrent low-frequency, large-amplitude bursts associated with sighing. Two competing hypotheses have been proposed to explain sigh generation: the recruitment of a neuronal population distinct from the eupneic rhythm-generating subpopulation or the reconfiguration of activity within a single population. Here, we consider two recent computational models, one of which represents each of the hypotheses. We use methods of dynamical systems theory, such as fast-slow decomposition, averaging, and bifurcation analysis, to understand the multiple-timescale mechanisms underlying sigh generation in each model. In the course of our analysis, we discover that a third timescale is required to generate sighs in both models. Furthermore, we identify the similarities of the underlying mechanisms in the two models and the aspects in which they differ.

  16. Modeling strategic competition in hydro-thermal electricity generation markets with cascaded reservoir-hydroelectric generation plants

    NASA Astrophysics Data System (ADS)

    Uluca, Basak

    This dissertation aims to achieve two goals. The first is to model the strategic interactions of firms that own cascaded reservoir-hydro plants in oligopolistic and mixed oligopolistic hydrothermal electricity generation markets. Although competition in thermal generation has been extensively modeled since the beginning of deregulation, the literature on competition in hydro generation is still limited; in particular, equilibrium models of oligopoly that study the competitive behavior of firms that own reservoir-hydro plants along the same river in hydrothermal electricity generation markets are still under development. In competitive markets, when the reservoirs are located along the same river, the water released from an upstream reservoir for electricity generation becomes input to the immediate downstream reservoir, which may be owned by a competitor, for current or future use. To capture the strategic interactions among firms with cascaded reservoir-hydro plants, the Upstream-Conjecture approach is proposed. Under the Upstream-Conjecture approach, a firm with an upstream reservoir-hydro plant assumes that firms with downstream reservoir-hydro plants will respond to changes in the upstream firm's water release by adjusting their water release by the same amount. The results of the Upstream Conjecture experiments indicate that firms that own upstream reservoirs in a cascade may have incentive to withhold or limit hydro generation, forcing a reduction in the utilization of the downstream hydro generation plants that are owned by competitors. Introducing competition to hydroelectricity generation markets is challenging and ownership allocation of the previously state-owned cascaded reservoir-hydro plants through privatization can have significant impact on the competitiveness of the generation market. 
The second goal of the dissertation is to extract empirical guidance about best policy choices for the ownership of the state-owned generation plants, including the cascaded reservoir-hydro plants. Specifically, an equilibrium model of oligopoly, where only private firms compete for electricity supply is proposed. Since some electricity generation markets are better characterized as mixed oligopolies, where the public firm coexists with the private firms for electricity supply, and not as oligopolies, another equilibrium model of mixed oligopoly is proposed. The proposed mixed oligopoly equilibrium model is the first implementation of such market structure in electricity markets. The mathematical models developed in this research are applied to the simplified representation of the Turkish electricity generation market to investigate the impact of various ownership allocation scenarios that may result from the privatization of the state owned generation plants, including the cascaded reservoir-hydro plants, on the competitive market outcomes.

  17. Modeling Tsunami Wave Generation Using a Two-layer Granular Landslide Model

    NASA Astrophysics Data System (ADS)

    Ma, G.; Kirby, J. T., Jr.; Shi, F.; Grilli, S. T.; Hsu, T. J.

    2016-12-01

    Tsunamis can be generated by subaerial or submarine landslides in reservoirs, lakes, fjords, bays and oceans. Compared to seismogenic tsunamis, landslide or submarine mass failure (SMF) tsunamis are normally characterized by relatively shorter wave lengths and stronger wave dispersion, and potentially may generate large wave amplitudes locally and high run-up along adjacent coastlines. Due to a complex interplay between the landslide and tsunami waves, accurate simulation of landslide motion as well as tsunami generation is a challenging task. We develop and test a new two-layer model for granular landslide motion and tsunami wave generation. The landslide is described as a saturated granular flow, accounting for intergranular stresses governed by Coulomb friction. Tsunami wave generation is simulated by the three-dimensional non-hydrostatic wave model NHWAVE, which is capable of capturing wave dispersion efficiently using a small number of discretized vertical levels. Depth-averaged governing equations for the granular landslide are derived in a slope-oriented coordinate system, taking into account the dynamic interaction between the lower-layer granular landslide and upper-layer water motion. The model is tested against laboratory experiments on impulsive wave generation by subaerial granular landslides. Model results illustrate a complex interplay between the granular landslide and tsunami waves, and they reasonably predict not only the tsunami wave generation but also the granular landslide motion from initiation to deposition.

  18. Generating Variable Wind Profiles and Modeling Their Effects on Small-Arms Trajectories

    DTIC Science & Technology

    2016-04-01

    Report documentation excerpt (title page and SF-298 residue removed): ARL-TR-7642, April 2016, US Army Research Laboratory. Generating Variable Wind Profiles and Modeling Their Effects on Small-Arms Trajectories, by Timothy A Fargus, Weapons and Materials Research Directorate, ARL.

  19. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
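
    The non-connectionist baselines mentioned above (n-gram models) give a compact way to illustrate the scoring-and-generation idea. The bigram sketch below uses a toy corpus, not the English monosyllable corpus of the study.

```python
# Bigram (n-gram) letter model: estimate next-letter probabilities from a
# toy corpus, then score contexts and generate pseudowords. This stands in
# for the probabilistic generative idea, not for the sequential RBM itself.
import random
from collections import Counter, defaultdict

corpus = ["cat", "can", "cap", "bat", "ban", "bad"]
counts = defaultdict(Counter)
for w in corpus:
    for a, b in zip("^" + w, w + "$"):   # ^ and $ mark word boundaries
        counts[a][b] += 1

def p_next(context, letter):
    # conditional probability of 'letter' given the preceding 'context'
    total = sum(counts[context].values())
    return counts[context][letter] / total if total else 0.0

def generate(seed=0):
    # sample letters from the learned conditionals until the end marker
    random.seed(seed)
    out, ctx = "", "^"
    while True:
        ctx = random.choice(list(counts[ctx].elements()))
        if ctx == "$":
            return out
        out += ctx

print(p_next("c", "a"))   # every 'c' in the toy corpus is followed by 'a'
print(generate())
```

    A sequential RBM plays the same two roles, prediction and autonomous generation, but with distributed internal representations rather than explicit count tables.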

  20. Validating EHR documents: automatic schematron generation using archetypes.

    PubMed

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
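
    The constraint-to-assertion step can be sketched in a few lines. The element paths and cardinalities below are invented, and the real generator of the study works from openEHR archetype object models rather than plain tuples.

```python
# Illustrative sketch of turning cardinality constraints into Schematron
# assertions. Paths and occurrence bounds are hypothetical examples.

def to_assert(path, min_occ, max_occ):
    # one Schematron <assert> whose XPath test checks the cardinality
    test = f"count({path}) >= {min_occ} and count({path}) <= {max_occ}"
    return (f'<assert test="{test}">'
            f'{path} must occur between {min_occ} and {max_occ} times'
            f'</assert>')

constraints = [("observation/data", 1, 1), ("observation/protocol", 0, 1)]
rules = [to_assert(p, lo, hi) for p, lo, hi in constraints]
print(rules[0])
```

    A full generator would also traverse nested archetype nodes and emit value and terminology constraints, which is where the limitations noted in the abstract arise.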

  1. Study on generation investment decision-making considering multi-agent benefit for global energy internet

    NASA Astrophysics Data System (ADS)

    Li, Pai; Huang, Yuehui; Jia, Yanbing; Liu, Jichun; Niu, Yi

    2018-02-01

    This article studies generation investment decisions in the context of global energy interconnection. A generation investment decision model considering multi-agent benefits is proposed. Under global energy interconnection, generation investors in different clean energy bases not only compete with other investors but also face being selected by the load centers; a decision model that accounts for multi-agent benefits can therefore better reflect the interests of all parties. Using game theory, a complete-information game model is adopted to solve for the strategies of the different agents in the equilibrium state.

  2. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has similar characteristics (mean, variance and power spectral density) to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.

  3. Capacity expansion model of wind power generation based on ELCC

    NASA Astrophysics Data System (ADS)

    Yuan, Bo; Zong, Jin; Wu, Shengyu

    2018-02-01

Capacity expansion is an indispensable prerequisite for power system planning and construction. A reasonable, efficient and accurate capacity expansion model (CEM) is crucial to power system planning. In most current CEMs, the capacity of wind power generation is treated as a boundary condition instead of a decision variable, which may lead to curtailment or over-construction of flexible resources, especially in high renewable energy penetration scenarios. This paper proposes a wind power generation capacity value (CV) calculation method based on effective load-carrying capability, and a CEM that co-optimizes wind power generation and conventional power sources. Wind power generation is treated as a decision variable in this model, and the model can accurately reflect the uncertain nature of wind power.
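The effective load-carrying capability idea can be sketched directly: find the constant load increase that, with the wind fleet added, restores the system's original loss-of-load count. The deterministic hourly version below is a toy illustration (real CV calculations use probabilistic outage models); all profiles and parameter values are invented.

```python
import numpy as np

def lole(load, capacity):
    """Loss-of-load hours: hours in which load exceeds available capacity."""
    return int(np.sum(load > capacity))

def elcc(load, conv_cap, wind, tol=0.01):
    """Effective load-carrying capability of the hourly `wind` profile (MW):
    the constant extra load that restores the original LOLE (bisection)."""
    target = lole(load, conv_cap)
    lo, hi = 0.0, float(wind.max())
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lole(load + mid, conv_cap + wind) <= target:
            lo = mid
        else:
            hi = mid
    return lo

hours = np.arange(8760)
load = 80.0 + 20.0 * np.sin(2 * np.pi * hours / 24)  # toy hourly load, 100 MW peak
conv = np.full(8760, 100.0)                          # conventional capacity
wind = np.full(8760, 10.0)                           # idealized firm wind output
cv = elcc(load, conv, wind)                          # -> close to 10 MW here
```

With non-firm (variable) wind profiles the same routine returns a CV well below installed capacity, which is exactly what makes wind a decision variable worth co-optimizing.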

  4. The new Kuznets cycle: a test of the Easterlin-Wachter-Wachter hypothesis.

    PubMed

    Ahlburg, D A

    1982-01-01

    The aim of this paper is to evaluate the Easterlin-Wachter-Wachter model of the effect of the size of one generation on the size of the succeeding generation. An attempt is made "to identify and test empirically each component of the Easterlin-Wachter-Wachter model..., to show how the components collapse to give a closed demographic model of generation size, and to investigate the impacts of relative cohort size on the economic performance of a cohort." The models derived are then used to generate forecasts of the U.S. birth rate to the year 2050. The results provide support for the major components of the original model. excerpt

  5. Design and modeling of energy generated magneto rheological damper

    NASA Astrophysics Data System (ADS)

    Ahamed, Raju; Rashid, Muhammad Mahbubur; Ferdaus, Md Meftahul; Yusof, Hazlina Md.

    2016-02-01

In this paper, an energy-generating mono-tube MR damper model is developed for vehicle suspension systems. A 3D model of the energy-generating MR damper is developed in the SolidWorks electromagnetic simulator (EMS), where it is analyzed extensively by the finite element method. This dynamic simulation clearly illustrates the power generation ability of the damper. Two magnetic fields are induced inside the damper: one in the outer coil of the power generator and another in the piston head coils. Complete magnetic isolation between these two fields is achieved, as can be seen in the finite element analysis. The induced magnetic flux densities and magnetic field intensities of the damper are analyzed to characterize its power generation ability. Finally, the proposed MR damper's energy generation ability is studied experimentally.

  6. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques that has been proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by an external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.

  7. Generating target system specifications from a domain model using CLIPS

    NASA Technical Reports Server (NTRS)

    Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry

    1991-01-01

    The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).

  8. A CellML simulation compiler and code generator using ODE solving schemes

    PubMed Central

    2012-01-01

Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
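The scheme-dependence reported for the FHN simulation can be reproduced in miniature: integrating the FitzHugh-Nagumo model with forward Euler versus classical RK4 at the same step size gives different accuracy against a fine-step reference. This is a generic sketch with illustrative parameter values, not code produced by the CellML Compiler.

```python
import numpy as np

def fhn_rhs(y, I=0.5, a=0.7, b=0.8, tau=12.5):
    """FitzHugh-Nagumo right-hand side (illustrative parameter values)."""
    v, w = y
    return np.array([v - v**3 / 3.0 - w + I, (v + a - b * w) / tau])

def euler_step(f, y, dt):          # first-order scheme
    return y + dt * f(y)

def rk4_step(f, y, dt):            # fourth-order scheme
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(step, y0, dt, n_steps):
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        y = step(fhn_rhs, y, dt)
    return y

ref = simulate(rk4_step, [-1.0, 1.0], 0.001, 10_000)      # fine-step reference, t = 10
coarse_rk4 = simulate(rk4_step, [-1.0, 1.0], 0.1, 100)
coarse_euler = simulate(euler_step, [-1.0, 1.0], 0.1, 100)
```

A code generator in the spirit of the paper would emit the body of `euler_step` or `rk4_step` from an XML description of the chosen scheme rather than hard-coding it.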

  9. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills.

    PubMed

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-06-01

Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation, because it applied too-high default values for key parameters to handle low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills, because it defined more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence not only of the total disposed waste amount but also of the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model can provide practical and accurate estimates of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) can result in overestimation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameter values in FOD models. The comparison of the revised Afvalzorg model outcomes with field measurements at four Danish landfills provides a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as indicating noteworthy waste fractions that can emit CH₄ at modern landfills.
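The FOD family of models (LandGEM, IPCC, Afvalzorg) shares the same kernel: each year's disposed mass decays exponentially, releasing methane at rate k. A minimal multi-phase version, with hypothetical per-category parameters standing in for the laboratory-determined BMPs and k-values, looks like:

```python
import math

def fod_ch4(disposal_history, k, L0, year):
    """First-order-decay CH4 generation (m3/yr) in `year`.
    disposal_history: {deposit_year: tonnes}; k: decay rate constant (1/yr);
    L0: methane generation potential (m3 CH4 per tonne)."""
    return sum(k * L0 * mass * math.exp(-k * (year - t))
               for t, mass in disposal_history.items() if year >= t)

# Multi-phase (Afvalzorg-style): category-specific k and L0.
# All numbers below are hypothetical, not the paper's calibrated values.
params = {"shredder": (0.10, 30.0), "sludge": (0.20, 15.0)}
disposals = {"shredder": {2000: 1000.0}, "sludge": {2000: 500.0}}

q_2000 = sum(fod_ch4(disposals[c], *params[c], 2000) for c in params)
q_2010 = sum(fod_ch4(disposals[c], *params[c], 2010) for c in params)
# q_2000: shredder 0.10*30*1000 = 3000 plus sludge 0.20*15*500 = 1500 -> 4500 m3/yr
```

Summing category-specific terms rather than one lumped MSW term is what lets an Afvalzorg-style model respond to the waste composition, as the abstract emphasizes.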

  10. An improved neutral landscape model for recreating real landscapes and generating landscape series for spatial ecological simulations.

    PubMed

    van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne

    2016-06-01

    Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscapes patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), which is a software that uses optimization algorithms to generate landscapes that match user-defined target values. Developed for participatory spatial planning at small scale, we enhanced the usability of LG and demonstrated how it can be used for larger scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.

  11. An evaluation of the transferability of cross classification trip generation models.

    DOT National Transportation Integrated Search

    1978-01-01

    This report describes the results of the application in Virginia of the trip generation procedures described in the Federal Highway Administration report entitled Trip Generation Analysis and published in 1975. Cross classification models, disaggrega...

  12. LISP based simulation generators for modeling complex space processes

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing

    1987-01-01

    The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.

  13. Loss of feed flow, steam generator tube rupture and steam line break thermohydraulic experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendler, O J; Takeuchi, K; Young, M Y

    1986-10-01

    The Westinghouse Model Boiler No. 2 (MB-2) steam generator test model at the Engineering Test Facility in Tampa, Florida, was reinstrumented and modified for performing a series of tests simulating steam generator accident transients. The transients simulated were: loss of feed flow, steam generator tube rupture, and steam line break events. This document presents a description of (1) the model boiler and the associated test facility, (2) the tests performed, and (3) the analyses of the test results.

  14. Multiple-generator errors are unavoidable under model misspecification.

    PubMed

    Jewett, D L; Zhang, Z

    1995-08-01

Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data, because some model misspecification is always present and, mathematically, the equations for the simultaneously active generators must differ in form from the equations for each generator active alone.
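The algebraic point can be seen in a few lines of linear algebra: fit a one-generator model to data produced by two simultaneously active generators, and the fitted amplitude absorbs a bias from the second generator unless the two lead fields happen to be orthogonal. Everything below is an invented toy, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(32, 2))      # lead fields of two true generators (toy)
true_amps = np.array([1.0, 0.4])
y = A @ true_amps                 # sensors see both generators at once

a1 = A[:, :1]                     # misspecified model: generator 1 only
fit = float(np.linalg.lstsq(a1, y, rcond=None)[0][0])
# fit = 1 + 0.4 * (a1.a2)/(a1.a1): biased unless the lead fields are orthogonal
bias = fit - true_amps[0]
```

Because random lead fields are essentially never orthogonal, the bias is generically nonzero, which is the "unavoidable" part of the title.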

  15. Single generation cycles and delayed feedback cycles are not separate phenomena.

    PubMed

    Pfaff, T; Brechtel, A; Drossel, B; Guill, C

    2014-12-01

    We study a simple model for generation cycles, which are oscillations with a period of one or a few generation times of the species. The model is formulated in terms of a single delay-differential equation for the population density of an adult stage, with recruitment to the adult stage depending on the intensity of competition during the juvenile phase. This model is a simplified version of a group of models proposed by Gurney and Nisbet, who were the first to distinguish between single-generation cycles and delayed-feedback cycles. According to these authors, the two oscillation types are caused by different mechanisms and have periods in different intervals, which are one to two generation times for single-generation cycles and two to four generation times for delayed-feedback cycles. By abolishing the strict coupling between the maturation time and the time delay between competition and its effect on the population dynamics, we find that single-generation cycles and delayed-feedback cycles occur in the same model version, with a gradual transition between the two as the model parameters are varied over a sufficiently large range. Furthermore, cycle periods are not bounded to lie within single octaves. This implies that a clear distinction between different types of generation cycles is not possible. Cycles of all periods and even chaos can be generated by varying the parameters that determine the time during which individuals from different cohorts compete with each other. This suggests that life-cycle features in the juvenile stage and during the transition to the adult stage are important determinants of the dynamics of density limited populations. Copyright © 2014 Elsevier Inc. All rights reserved.
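A delay-differential model of this type can be integrated with a simple Euler scheme and a history buffer. The equation below is a generic delayed-recruitment model of Nicholson-blowflies form, chosen for illustration with made-up parameter values; it is not the authors' exact formulation, but the delay τ plays the same role as their competition-to-effect lag.

```python
import numpy as np

def delayed_recruitment(b=8.0, c=1.0, d=1.0, tau=2.0, dt=0.01, t_end=100.0):
    """Euler integration of dN/dt = b*N(t-tau)*exp(-c*N(t-tau)) - d*N(t):
    recruitment to the adult stage depends on the (delayed) density that
    set the intensity of juvenile competition."""
    lag = int(round(tau / dt))
    n = int(round(t_end / dt))
    N = np.empty(n + 1)
    N[:lag + 1] = 0.5          # constant pre-history
    for i in range(lag, n):
        Nd = N[i - lag]        # delayed density N(t - tau)
        N[i + 1] = N[i] + dt * (b * Nd * np.exp(-c * Nd) - d * N[i])
    return N

traj = delayed_recruitment()
```

Varying `tau` and the recruitment parameters independently, as the paper does by decoupling maturation time from the competition delay, moves the cycle period continuously rather than in discrete octaves.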

  16. Electron beam generation in the turbulent plasma of Z-pinch discharges

    NASA Astrophysics Data System (ADS)

    Vikhrev, Victor V.; Baronova, Elena O.

    1997-05-01

Numerical modeling of the process of electron beam generation in z-pinch discharges is presented. The proposed model represents electron beam generation under turbulent plasma conditions. Strong inhomogeneity of the current distribution in the plasma column is accounted for, which is necessary for an adequate investigation of the generation process. The electron beam is generated near the maximum of compression through the runaway mechanism and is not related to the current-break effect.

  17. A statistical approach for generating synthetic tip stress data from limited CPT soundings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basalams, M.K.

CPT tip stress data obtained from a uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimation of the total settlement using the measured and the generated series subjected to the same loading condition is performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third-degree autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
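The core modeling step, fitting an AR(3) to a series and generating synthetic realizations, can be sketched with a Yule-Walker fit. The coefficients and the simulated series below are synthetic stand-ins for CPT tip stress data, not values from the study.

```python
import numpy as np

def simulate_ar3(phi, n, sigma=1.0, rng=None, burn=500):
    """Simulate x_t = phi1*x_{t-1} + phi2*x_{t-2} + phi3*x_{t-3} + e_t."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n + burn)
    e = rng.normal(0.0, sigma, n + burn)
    for t in range(3, n + burn):
        x[t] = phi[0]*x[t-1] + phi[1]*x[t-2] + phi[2]*x[t-3] + e[t]
    return x[burn:]            # drop burn-in so the series is stationary

def yule_walker_ar3(x):
    """Estimate AR(3) coefficients from sample autocovariances."""
    x = x - x.mean()
    r = np.array([x[:x.size - k] @ x[k:] / x.size for k in range(4)])
    R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
    return np.linalg.solve(R, r[1:4])

phi_true = np.array([0.4, -0.2, 0.1])    # stationary (|phi| sum < 1)
series = simulate_ar3(phi_true, 20_000)
phi_hat = yule_walker_ar3(series)        # recovers phi_true approximately
```

In the study the AR coefficients themselves vary regionally; that is the layer the multiple-regression step adds on top of fits like this one.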

  18. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingently on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  19. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. 
We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108


  1. Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models

    NASA Astrophysics Data System (ADS)

    Xu, Shiming

    2015-04-01

We propose a new grid generation algorithm for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithm is based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While addressing the conventional grid design problem of pole relocation, it also addresses more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend toward high-resolution and multi-scale modeling. The proposed grid generation algorithm can potentially achieve alignment of grid lines with coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to grid generation for regional ocean modeling when a complex land-ocean distribution is present.

  2. Modeling the Impacts of Solar Distributed Generation on U.S. Water Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Smith, Amanda; Omitaomu, Olufemi A.; Peck, Jaron

    2015-01-01

Distributed electric power generation technologies typically use little or no water per unit of electrical energy produced; in particular, renewable energy sources such as solar PV systems do not require cooling systems and present an opportunity to reduce water usage for power generation. Within the US, the fuel mix used for power generation varies regionally, and certain areas use more water for power generation than others. The need to reduce water usage for power generation is even more urgent in view of climate change uncertainties. In this paper, we present an example case within the state of Tennessee, one of the top four states in water consumption for power generation and one of the states with little or no potential for developing centralized renewable energy generation. The potential for developing PV generation within Knox County, Tennessee, is studied, along with the potential for reducing water withdrawal and consumption within the Tennessee Valley stream region. Electric power generation plants in the region are quantified for their electricity production and expected water withdrawal and consumption over one year, where electrical generation data are provided over one year and water usage is modeled based on the cooling system(s) in use. Potential solar PV electrical production is modeled based on LiDAR data and weather data for the same year. Our proposed methodology can be summarized as follows: First, the potential solar generation is compared against the local grid demand. Next, electrical generation reductions are specified that would result in a given reduction in water withdrawal and a given reduction in water consumption, and these are compared with the current water withdrawal and consumption rates for the existing fuel mix. The increase in solar PV development that would produce an equivalent amount of power is then determined. In this way, we consider how targeted local actions may affect the larger stream region through thoughtful energy development. This model can be applied to other regions and other types of distributed generation, and used as a framework for modeling alternative growth scenarios in power production capacity in addition to modeling adjustments to existing capacity.
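The withdrawal/consumption accounting described above reduces to multiplying displaced thermoelectric generation by cooling-system water-use rates. A back-of-envelope version with invented numbers (not the paper's Tennessee data):

```python
# Water avoided by displacing thermoelectric generation with solar PV.
# All rates and totals below are illustrative assumptions.
pv_generation_mwh = 120_000          # annual PV output displacing the local mix
withdrawal_gal_per_mwh = 20_000      # assumed once-through cooling withdrawal rate
consumption_gal_per_mwh = 300        # assumed evaporative consumption rate

withdrawal_avoided_gal = pv_generation_mwh * withdrawal_gal_per_mwh
consumption_avoided_gal = pv_generation_mwh * consumption_gal_per_mwh
```

The large gap between the two totals mirrors the paper's distinction between withdrawal (water diverted and mostly returned) and consumption (water lost to evaporation).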

  3. Validation of two (parametric vs non-parametric) daily weather generators

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Skalak, P.

    2015-12-01

As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. The stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project.
The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR), and VALUE (COST ES 1102 action).
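The parametric chain described for M&Rfi can be sketched in a few lines: a two-state Markov chain for wet/dry occurrence, Gamma-distributed wet-day amounts, and an AR(1) process for the temperature anomaly. Parameter values below are illustrative, not calibrated to any station, and the function name is invented.

```python
import numpy as np

def daily_weather(n_days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=5.0,
                  t_mean=10.0, t_std=4.0, rho=0.7, rng=None):
    """Richardson-type parametric generator: Markov-chain wet/dry occurrence,
    Gamma wet-day precipitation amounts, AR(1) temperature anomalies.
    p_wd: P(wet | dry yesterday); p_ww: P(wet | wet yesterday)."""
    rng = rng or np.random.default_rng(0)
    precip = np.zeros(n_days)
    temp = np.empty(n_days)
    wet_prev, z = False, 0.0
    for d in range(n_days):
        p = p_ww if wet_prev else p_wd
        wet_prev = rng.uniform() < p
        if wet_prev:
            precip[d] = rng.gamma(shape, scale)
        z = rho * z + np.sqrt(1.0 - rho**2) * rng.normal()  # AR(1) anomaly
        temp[d] = t_mean + t_std * z
    return precip, temp

precip, temp = daily_weather(100_000)
```

A non-parametric generator like GoMeZ would instead resample observed days from nearest-neighbour analogues, avoiding the distributional assumptions made here.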

  4. Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks

    PubMed Central

    2017-01-01

    In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery. PMID:29392184

  5. A step-by-step development of real-size chest model for simulation of thoracoscopic surgery.

    PubMed

    Morikawa, Toshiaki; Yamashita, Makoto; Odaka, Makoto; Tsukamoto, Yo; Shibasaki, Takamasa; Mori, Shohei; Asano, Hisatoshi; Akiba, Tadashi

    2017-08-01

    For the purpose of simulating thoracoscopic surgery, we have conducted stepwise development of a life-like chest model including thorax and intrathoracic organs. First, CT data of the human chest were obtained. First-generation model: based on the CT data, each component of the chest was made from a 3D printer. A hard resin was used for the bony thorax and a rubber-like resin for the vessels and bronchi. Lung parenchyma, muscles and skin were not created. Second-generation model: in addition to the 3D printer, a cast moulding method was used. Each part was casted using a 3D printed master and then assembled. The vasculature and bronchi were casted using silicon resin. The lung parenchyma and mediastinum organs were casted using urethane foam. Chest wall and bony thorax were also casted using a silicon resin. Third-generation model: foamed polyvinyl alcohol (PVA) was newly developed and casted onto the lung parenchyma. The vasculature and bronchi were developed using a soft resin. A PVA plate was made as the mediastinum, and all were combined. The first-generation model showed real distribution of the vasculature and bronchi; it enabled an understanding of the anatomy within the lung. The second-generation model is a total chest dry model, which enabled observation of the total anatomy of the organs and thorax. The third-generation model is a wet organ model. It allowed for realistic simulation of surgical procedures, such as cutting, suturing, stapling and energy device use. This single-use model achieved realistic simulation of thoracoscopic surgery. As the generation advances, the model provides a more realistic simulation of thoracoscopic surgery. Further improvement of the model is needed. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  6. The S-curve for forecasting waste generation in construction projects.

    PubMed

    Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling

    2016-10-01

Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to describe cumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong in four consecutive years, from January 2011 to June 2015, a wide range of potential S-curve models is examined, and the formula that best fits the historical data set is found. The S-curve model is then linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, the cumulative logistic distribution best fits the historical data, and that contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors not only with an S-curve model to forecast overall waste generation before a project commences, but also with a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By modelling construction waste generation with an S-curve, the study elevates construction waste management to a level equivalent to project cost management, where the S-curve has long been accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
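The best-fitting form reported here, a cumulative logistic distribution, can be sketched as follows; the shape parameters (steepness `k`, midpoint at 50% progress) are illustrative assumptions, not the values fitted in the study:

```python
import math

def cumulative_waste(progress, total_waste, k=10.0, midpoint=0.5):
    """Cumulative logistic S-curve of construction waste generation.

    progress    -- project progress as a fraction in [0, 1]
    total_waste -- total waste expected over the whole project
    k, midpoint -- illustrative shape parameters (not the fitted values)
    """
    return total_waste / (1.0 + math.exp(-k * (progress - midpoint)))
```

A contractor could compare actual cumulative disposal records against this baseline at any stage of the project to flag unusually high waste generation early.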

  7. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose (BMD) and benchmark dose lower bound (BMDL) estimates.
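Model averaging of this kind typically weights candidate models by an information criterion. The sketch below averages per-model BMD estimates with Akaike weights; this is a simplified illustration of the idea, not the exact MADr-BMD procedure (which averages the dose-response curves themselves):

```python
import math

def akaike_weights(aics):
    """Normalized Akaike weights: exp(-delta_AIC / 2), rescaled to sum to 1."""
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_bmd(bmds, aics):
    """Weighted average of per-model benchmark dose estimates (illustrative)."""
    return sum(w * b for w, b in zip(akaike_weights(aics), bmds))
```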

  8. Fermion hierarchy from sfermion anarchy

    DOE PAGES

    Altmannshofer, Wolfgang; Frugiuele, Claudia; Harnik, Roni

    2014-12-31

We present a framework to generate the hierarchical flavor structure of Standard Model quarks and leptons from loops of superpartners. The simplest model consists of the minimal supersymmetric standard model with tree level Yukawa couplings for the third generation only and anarchic squark and slepton mass matrices. Agreement with constraints from low energy flavor observables, in particular Kaon mixing, is obtained for supersymmetric particles with masses at the PeV scale or above. In our framework both the second and the first generation fermion masses are generated at 1-loop. Despite this, a novel mechanism generates a hierarchy among the first and second generations without imposing a symmetry or small parameters. A second-to-first generation mass ratio of order 100 is typical. The minimal supersymmetric standard model thus includes all the necessary ingredients to realize a fermion spectrum that is qualitatively similar to observation, with hierarchical masses and mixing. The minimal framework produces only a few quantitative discrepancies with observation, most notably the muon mass is too low. Furthermore, we discuss simple modifications which resolve this and also investigate the compatibility of our model with gauge and Yukawa coupling unification.

  9. Interactive Model-Centric Systems Engineering (IMCSE) Phase 5

    DTIC Science & Technology

    2018-02-28

...research advances knowledge relevant to human interaction with models and model-generated information. Figure 1 highlights several questions the... stakeholders interact using models and model-generated information; facets of human interaction with visualizations and large data sets; and underlying...

  10. 75 FR 39790 - Airworthiness Directives; Airbus Model A330-200 and -300 Airplanes and Model A340-200, -300, -500...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

..., requiring repetitive inspections of the APU generator scavenge filter element and filter housing and of the.... The new requirements include inspecting the APU generator scavenge oil filter element for contamination, the APU generator drain plug for contamination, and the APU generator scavenge filter housing for...

  11. Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.

    PubMed

    Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei

    2016-02-01

GPCR-based drug discovery is hindered by a lack of effective screening methods for most GPCRs, which have neither known ligands nor high-quality structures. With the aim of identifying lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2 ligands correctly but also performs well in virtual screening, demonstrating that the method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at a 0.5% false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly, with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99% coverage; 819/825) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
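For reference, enrichment in virtual screening is conventionally computed as the hit rate among the selected compounds divided by the hit rate expected from random selection; a minimal sketch (the authors' exact protocol at a fixed false-positive rate is not detailed in the abstract):

```python
def enrichment_factor(actives_found, n_selected, total_actives, library_size):
    """Hit rate in the selected subset relative to random picking."""
    hit_rate = actives_found / n_selected
    random_rate = total_actives / library_size
    return hit_rate / random_rate
```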

  12. Generation and comparison of CRISPR-Cas9 and Cre-mediated genetically engineered mouse models of sarcoma

    PubMed Central

    Huang, Jianguo; Chen, Mark; Whitley, Melodi Javid; Kuo, Hsuan-Cheng; Xu, Eric S.; Walens, Andrea; Mowery, Yvonne M.; Van Mater, David; Eward, William C.; Cardona, Diana M.; Luo, Lixia; Ma, Yan; Lopez, Omar M.; Nelson, Christopher E.; Robinson-Hamm, Jacqueline N.; Reddy, Anupama; Dave, Sandeep S.; Gersbach, Charles A.; Dodd, Rebecca D.; Kirsch, David G.

    2017-01-01

    Genetically engineered mouse models that employ site-specific recombinase technology are important tools for cancer research but can be costly and time-consuming. The CRISPR-Cas9 system has been adapted to generate autochthonous tumours in mice, but how these tumours compare to tumours generated by conventional recombinase technology remains to be fully explored. Here we use CRISPR-Cas9 to generate multiple subtypes of primary sarcomas efficiently in wild type and genetically engineered mice. These data demonstrate that CRISPR-Cas9 can be used to generate multiple subtypes of soft tissue sarcomas in mice. Primary sarcomas generated with CRISPR-Cas9 and Cre recombinase technology had similar histology, growth kinetics, copy number variation and mutational load as assessed by whole exome sequencing. These results show that sarcomas generated with CRISPR-Cas9 technology are similar to sarcomas generated with conventional modelling techniques and suggest that CRISPR-Cas9 can be used to more rapidly generate genotypically and phenotypically similar cancers. PMID:28691711

  13. Predictive model for CO2 generation and decay in building envelopes

    NASA Astrophysics Data System (ADS)

    Aglan, Heshmat A.

    2003-01-01

Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for identifying their indoor air quality, air change rates, percent fresh air makeup, and occupancy pattern, and for determining how a variable air volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy-efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
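The abstract does not give the model's equations, but the standard well-mixed single-zone mass balance, V dC/dt = G + Q(C_out - C), has the closed-form solution sketched below; all symbols and example values are assumptions for illustration, not the paper's calibration:

```python
import math

def co2_concentration(t, volume, airflow, generation, outdoor_ppm, initial_ppm):
    """Well-mixed single-zone CO2 mass balance, V dC/dt = G + Q*(C_out - C).

    t          -- elapsed time (h)
    volume     -- zone air volume V (m^3)
    airflow    -- outdoor airflow Q (m^3/h)
    generation -- occupant CO2 generation G (ppm * m^3/h); 0 models pure decay
    """
    steady_state = outdoor_ppm + generation / airflow
    return steady_state + (initial_ppm - steady_state) * math.exp(-airflow * t / volume)
```

With `generation` set to zero the same expression gives the decay curve commonly used to estimate air change rates from measured CO2 data.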

  14. Genome Editing in Rats Using TALE Nucleases.

    PubMed

    Tesson, Laurent; Remy, Séverine; Ménoret, Séverine; Usal, Claire; Thinard, Reynald; Savignard, Chloé; De Cian, Anne; Giovannangeli, Carine; Concordet, Jean-Paul; Anegon, Ignacio

    2016-01-01

The rat is an important animal model for understanding gene function and modelling human diseases. In recent years, the development of gene-specific nucleases has become important for generating new rat models of human diseases, analyzing the role of genes, and generating human antibodies. Transcription activator-like effector (TALE) nucleases efficiently create gene-specific knockout rats and open the possibility of gene targeting by homology-directed recombination (HDR) and the generation of knock-in rats. We describe a detailed protocol for generating knockout and knock-in rats via microinjection of TALE nucleases into fertilized eggs. This technology is an efficient, cost- and time-effective method for creating new rat models.

  15. Veterans' informal caregivers in the "sandwich generation": a systematic review toward a resilience model.

    PubMed

    Smith-Osborne, Alexa; Felderhoff, Brandi

    2014-01-01

Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.

  16. A study of the kinetic energy generation with general circulation models

    NASA Technical Reports Server (NTRS)

    Chen, T.-C.; Lee, Y.-H.

    1983-01-01

History data from winter simulations with the GLAS climate model and the NCAR community climate model are used to examine the generation of atmospheric kinetic energy. The contrast between the geographic distributions of the generation of kinetic energy and the divergence of kinetic energy flux shows that kinetic energy is generated on the upstream side of jets, transported to the downstream side and destroyed there. The contributions of the time-mean and transient modes to the counterbalance between generation of kinetic energy and divergence of kinetic energy flux are also investigated. It is observed that the kinetic energy generated by the time-mean mode is essentially redistributed by the time-mean flow, while that generated by the transient flow is mainly responsible for maintaining the kinetic energy of the entire atmospheric flow.

  17. Implications of Higgs searches on the four-generation standard model.

    PubMed

    Kuflik, Eric; Nir, Yosef; Volansky, Tomer

    2013-03-01

Within the four-generation standard model, the Higgs couplings to gluons and to photons deviate in a significant way from the predictions of the three-generation standard model. As a consequence, large departures in several Higgs production and decay channels are expected. Recent Higgs search results, presented by ATLAS, CMS, and CDF, hint at the existence of a Higgs boson with a mass around 125 GeV. Using these results and assuming such a Higgs boson, we derive exclusion limits on the four-generation standard model. For m(H)=125 GeV, the model is excluded above 99.95% confidence level. For 124.5 GeV≤m(H)≤127.5 GeV, an exclusion limit above 99% confidence level is found.

  18. Comparison of the Battery Life of Nonrechargeable Generators for Deep Brain Stimulation.

    PubMed

    Helmers, Ann-Kristin; Lübbing, Isabel; Deuschl, Günther; Witt, Karsten; Synowitz, Michael; Mehdorn, Hubertus Maximilian; Falk, Daniela

    2017-11-03

Nonrechargeable deep brain stimulation (DBS) generators must be replaced when the battery capacity is exhausted. Battery life depends on many factors and differs between generator models. A new nonrechargeable generator model replaced the previous model in 2008. Our clinical impression is that the earlier model had a longer battery life than the new one, and we conducted this study to substantiate this. We determined the battery life of every DBS generator that had been implanted between 2005 and 2012 in our department for the treatment of Parkinson's disease, and compared the battery lives of both devices. We calculated the current used by estimating the total electrical energy delivered (TEED) based on the stimulation parameters in use one year after electrode implantation. One hundred ninety-two patients were included in the study; 105 with the old and 86 with the new model generators. The mean battery life in the older model was significantly longer (5.44 ± 0.20 years) than that in the new model (4.44 ± 0.17 years) (p = 0.023). The mean TEED without impedance was 219.9 ± 121.5 mW * Ω in the older model and 145.1 ± 72.7 mW * Ω in the new one, which indicated significantly lower stimulation parameters in the new model (p = 0.00038). The battery life of the new model was significantly shorter than that of the previous model. A lower battery capacity is the most likely reason, since current consumption was similar in both groups. © 2017 International Neuromodulation Society.
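The TEED figure quoted "without impedance" (in mW * Ω) is consistent with the commonly used formula TEED = V² × f × PW / R per second of stimulation; a sketch with hypothetical stimulation settings (not the patients' actual parameters):

```python
def teed_without_impedance(voltage_v, frequency_hz, pulse_width_s):
    """V^2 * f * PW: energy figure before dividing by impedance (units W*Ohm)."""
    return voltage_v ** 2 * frequency_hz * pulse_width_s

def teed(voltage_v, frequency_hz, pulse_width_s, impedance_ohm):
    """Total electrical energy delivered per second of stimulation (W)."""
    return teed_without_impedance(voltage_v, frequency_hz, pulse_width_s) / impedance_ohm

# hypothetical settings: 3 V amplitude, 130 Hz, 60 microsecond pulse width
example = teed_without_impedance(3.0, 130.0, 60e-6)  # about 0.07 W*Ohm, i.e. ~70 mW*Ohm
```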

  19. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui

Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.

  20. Spherical harmonic analysis of a synoptic climatology generated with a global general circulation model

    NASA Technical Reports Server (NTRS)

    Christidis, Z. D.; Spar, J.

    1980-01-01

Spherical harmonic analysis was used to analyze the observed climatological (C) fields of temperature at 850 mb, geopotential height at 500 mb, and sea level pressure. The spherical harmonic method was also applied to the corresponding "model climatological" fields (M) generated by a general circulation model, the "GISS climate model." The climate model was initialized with observed data for 1 December 1976 at 00 GMT and allowed to generate five years of meteorological history. Monthly means of the above fields for the five years were computed and subjected to spherical harmonic analysis. Comparison of the spectral components of the two sets, M and C, showed that the climate model generated reasonable 500 mb geopotential heights. The model temperature field at 850 mb exhibited a generally correct structure. However, the meridional temperature gradient was overestimated and overheating of the continents was observed in summer.

  1. Pitching motion control of a butterfly-like 3D flapping wing-body model

    NASA Astrophysics Data System (ADS)

    Suzuki, Kosuke; Minami, Keisuke; Inamuro, Takaji

    2014-11-01

Free flights and pitching motion control of a butterfly-like flapping wing-body model are numerically investigated using an immersed boundary-lattice Boltzmann method. The model flaps downward to generate lift and backward to generate thrust. Although the generated lift allows the model to rise against gravity, the model also generates a nose-up torque and consequently gets off balance. In this study, we discuss a way to control the pitching motion by flexing the body of the wing-body model like an actual butterfly. The body of the model is composed of two straight rigid rods connected by a rotary actuator. It is found that the pitching angle is suppressed within ±5° by applying proportional-plus-integral-plus-derivative (PID) control to the input torque of the rotary actuator.

  2. Bootstrap data methodology for sequential hybrid model building

    NASA Technical Reports Server (NTRS)

    Volponi, Allan J. (Inventor); Brotherton, Thomas (Inventor)

    2007-01-01

    A method for modeling engine operation comprising the steps of: 1. collecting a first plurality of sensory data, 2. partitioning a flight envelope into a plurality of sub-regions, 3. assigning the first plurality of sensory data into the plurality of sub-regions, 4. generating an empirical model of at least one of the plurality of sub-regions, 5. generating a statistical summary model for at least one of the plurality of sub-regions, 6. collecting an additional plurality of sensory data, 7. partitioning the second plurality of sensory data into the plurality of sub-regions, 8. generating a plurality of pseudo-data using the empirical model, and 9. concatenating the plurality of pseudo-data and the additional plurality of sensory data to generate an updated empirical model and an updated statistical summary model for at least one of the plurality of sub-regions.
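The pseudo-data and concatenation steps (steps 8-9) can be sketched as follows; the resampling scheme and summary statistics are illustrative assumptions, since the patent text does not specify them:

```python
import random

def bootstrap_update(prior_samples, new_samples, n_pseudo, seed=0):
    """Draw pseudo-data from the prior empirical model (here, by bootstrap
    resampling) and pool it with newly collected data for one sub-region."""
    rng = random.Random(seed)
    pseudo = [rng.choice(prior_samples) for _ in range(n_pseudo)]
    pooled = pseudo + list(new_samples)
    # updated statistical summary model: mean and variance of the pooled data
    mean = sum(pooled) / len(pooled)
    variance = sum((x - mean) ** 2 for x in pooled) / len(pooled)
    return pooled, (mean, variance)
```

The pseudo-data lets sparsely sampled sub-regions retain the influence of earlier flights while the empirical model is refit on the pooled set.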

  3. An optimal design of coreless direct-drive axial flux permanent magnet generator for wind turbine

    NASA Astrophysics Data System (ADS)

    Ahmed, D.; Ahmad, A.

    2013-06-01

Different types of generators are currently used in wind power technology. The most common are the induction generator (IG), doubly-fed induction generator (DFIG), electrically excited synchronous generator (EESG) and permanent magnet synchronous generator (PMSG). However, the use of PMSG is rapidly increasing because of advantages such as higher power density, better controllability and higher reliability. This paper presents an innovative design of a low-speed, modular, direct-drive axial flux permanent magnet (AFPM) generator with coreless stator and rotor for a wind turbine power generation system, developed using mathematical and analytical methods. This design is implemented in the MATLAB/Simulink environment using dynamic modelling techniques. The main focus of this research is to improve the efficiency of the wind power generation system by investigating the electromagnetic and structural features of the AFPM generator during its operation in a wind turbine. The design is validated by comparing its performance with standard models of existing wind power generators. The comparison results demonstrate that the proposed generator model exhibits a number of advantages, such as improved efficiency with variable speed operation, higher energy yield, lighter weight and better wind power utilization.

  4. Generative Models of Disfluency

    ERIC Educational Resources Information Center

    Miller, Timothy A.

    2010-01-01

    This thesis describes a generative model for representing disfluent phenomena in human speech. This model makes use of observed syntactic structure present in disfluent speech, and uses a right-corner transform on syntax trees to model this structure in a very natural way. Specifically, the phenomenon of speech repair is modeled by explicitly…

  5. Solid waste forecasting using modified ANFIS modeling.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; K N A, Maulud

    2015-10-01

Solid waste prediction is crucial for sustainable solid waste management. Accurate waste generation records are usually a challenge to obtain in developing countries, which complicates the modelling process. Solid waste generation is related to demographic, economic, and social factors. However, these factors vary considerably as the population and economy grow. The objective of this research is to determine the demographic and economic factors that most influence solid waste generation using a systematic approach, and then to develop a model to forecast solid waste generation using a modified adaptive neuro-fuzzy inference system (MANFIS). Model evaluation was performed using the root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R²). The results show that the best input variables are the population age groups 0-14, 15-64, and above 65 years, and that the best model structure is 3 triangular fuzzy membership functions and 27 fuzzy rules. The model has been validated using testing data; the resulting training RMSE, MAE and R² were 0.2678, 0.045 and 0.99, respectively, while for the testing phase RMSE = 3.986, MAE = 0.673 and R² = 0.98. To date, few attempts have been made to predict annual solid waste generation in developing countries. This paper presents modelling of annual solid waste generation using a modified ANFIS: a systematic approach to search for the most influential factors and then modify the ANFIS structure to simplify the model. The proposed method can be used to forecast waste generation in developing countries where accurate, reliable data are not always available. Moreover, annual solid waste prediction is essential for sustainable planning.
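The three evaluation metrics used above are standard; for clarity, minimal implementations:

```python
import math

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```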

  6. Evaluation of Extratropical Cyclone Precipitation in the North Atlantic Basin: An analysis of ERA-Interim, WRF, and two CMIP5 models.

    PubMed

    Booth, James F; Naud, Catherine M; Willison, Jeff

    2018-03-01

    The representation of extratropical cyclones (ETCs) precipitation in general circulation models (GCMs) and a weather research and forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests if parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs based on surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, and this is true if the contribution from the parameterized convection scheme is larger or not. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in ETC precipitation sensitivity to cyclone moisture or surface wind speed.

  7. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators for its unique properties. This paper has proposed a generalized analytical model for linear generators. The slotted stator pole-shifting and implementation of Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array have been determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution has been derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with Halbach PM has been constructed to validate the model and further improved using piece-wise springs to trigger full range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool in development and optimization of Halbach PM generator. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  8. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE PAGES

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    2017-08-23

Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a textural approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO2) migration and resulting saturation distribution.

  10. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
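For a single mode of such a generative model, the Bayes-optimal estimate is the reliability-weighted combination of the sensory cue and the learned prior; a minimal sketch with Gaussian prior and likelihood (the values are illustrative, not the experiment's parameters):

```python
def bayes_optimal_estimate(cue, cue_sd, prior_mean, prior_sd):
    """Precision-weighted combination of a noisy sensory cue with a learned
    Gaussian prior (one mode of the bimodal generative model)."""
    w_cue = 1.0 / cue_sd ** 2
    w_prior = 1.0 / prior_sd ** 2
    return (w_cue * cue + w_prior * prior_mean) / (w_cue + w_prior)
```

As the prior becomes more reliable (smaller `prior_sd`), the estimate is pulled further toward the learned mode, which is the behavioral signature the study measures on a trial-by-trial basis.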

  11. An artificial EMG generation model based on signal-dependent noise and related application to motion classification

    PubMed Central

    Hayashi, Hideaki; Nakamura, Go; Chin, Takaaki; Tsuji, Toshio

    2017-01-01

This paper proposes an artificial electromyogram (EMG) signal generation model based on signal-dependent noise, which has been ignored in existing methods, by introducing the stochastic construction of the EMG signals. In the proposed model, an EMG signal variance value is first generated from a probability distribution whose shape is determined by a commanded muscle force and signal-dependent noise. Artificial EMG signals are then generated from the associated Gaussian distribution with a zero mean and the generated variance. This facilitates representation of artificial EMG signals with signal-dependent noise superimposed according to the muscle activation levels. The frequency characteristics of the EMG signals are also simulated via a shaping filter with parameters determined by an autoregressive model. An estimation method to determine the EMG variance distribution using rectified and smoothed EMG signals, thereby allowing model parameter estimation with a small number of samples, is also incorporated in the proposed model. Moreover, the prediction of the variance distribution under strong muscle contraction from EMG signals with low muscle contraction, and the related artificial EMG generation, are also described. The results of experiments, in which the reproduction capability of the proposed model was evaluated through comparison with measured EMG signals in terms of amplitude, frequency content, and EMG distribution, demonstrate that the proposed model can reproduce the features of measured EMG signals. Further, utilizing the generated EMG signals as training data for a neural network resulted in the classification of upper limb motion with higher precision than learning from measured EMG signals alone. This indicates that the proposed model is also applicable to motion classification. PMID:28640883
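A toy version of the generation pipeline described above (variance drawn with signal-dependent noise, zero-mean Gaussian sampling, autoregressive shaping) might look like this; the noise gain and AR coefficient are invented for illustration, not the paper's estimated parameters:

```python
import math
import random

def generate_emg(n_samples, muscle_force, sd_noise_gain=0.1, ar_coef=0.5, seed=1):
    """Generate an artificial EMG trace with signal-dependent noise.

    The variance of each sample scales with the commanded muscle force and is
    perturbed by signal-dependent noise; a first-order autoregressive filter
    stands in for the shaping filter that sets the frequency content.
    """
    rng = random.Random(seed)
    signal, prev = [], 0.0
    for _ in range(n_samples):
        # variance from a distribution shaped by force and signal-dependent noise
        variance = max(1e-9, muscle_force * (1.0 + sd_noise_gain * rng.gauss(0.0, 1.0)))
        white = rng.gauss(0.0, math.sqrt(variance))
        prev = ar_coef * prev + white  # AR(1) shaping filter
        signal.append(prev)
    return signal
```

Traces generated this way could serve as synthetic training data for a motion classifier, which is the application the paper evaluates.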

  12. Petroleum generation and migration in the Mesopotamian Basin and Zagros fold belt of Iraq: Results from a basin-modeling study

    USGS Publications Warehouse

    Pitman, Janet K.; Steinshouer, D.; Lewan, M.D.

    2004-01-01

    A regional 3-D total petroleum-system model was developed to evaluate petroleum generation and migration histories in the Mesopotamian Basin and Zagros fold belt in Iraq. The modeling was undertaken in conjunction with Middle East petroleum assessment studies conducted by the USGS. Regional structure maps, isopach and facies maps, and thermal maturity data were used as input to the model. The oil-generation potential of Jurassic source-rocks, the principal known source of the petroleum in Jurassic, Cretaceous, and Tertiary reservoirs in these regions, was modeled using hydrous pyrolysis (Type II-S) kerogen kinetics. Results showed that oil generation in source rocks commenced in the Late Cretaceous in intrashelf basins, peak expulsion took place in the late Miocene and Pliocene when these depocenters had expanded along the Zagros foredeep trend, and generation ended in the Holocene when deposition in the foredeep ceased. The model indicates that, at present, the majority of Jurassic source rocks in Iraq have reached or exceeded peak oil generation and most rocks have completed oil generation and expulsion. Flow-path simulations demonstrate that virtually all oil and gas fields in the Mesopotamian Basin and Zagros fold belt overlie mature Jurassic source rocks (vertical migration dominated) and are situated on, or close to, modeled migration pathways. Fields closest to modeled pathways associated with source rocks in local intrashelf basins were charged earliest from Late Cretaceous through the middle Miocene, and other fields filled later when compression-related traps were being formed. Model results confirm petroleum migration along major, northwest-trending folds and faults, and oil migration loss at the surface.

  13. Supervised Learning Based Hypothesis Generation from Biomedical Literature.

    PubMed

    Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei

    2015-01-01

    The amount of biomedical literature is growing at an explosive pace, and much useful knowledge in it remains undiscovered; researchers can form biomedical hypotheses by mining these works. In this paper, we propose a supervised-learning-based approach to generating hypotheses from biomedical literature. The approach splits the traditional processing of hypothesis generation under the classic ABC model into an AB model and a BC model, each constructed with a supervised learning method. Compared with concept co-occurrence and grammar-engineering-based approaches such as SemRep, machine-learning-based models usually achieve better performance in information extraction (IE) from text. By combining the two models, the approach reconstructs the ABC model and generates biomedical hypotheses from the literature. Experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
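
    For contrast with the supervised AB/BC decomposition, the classic co-occurrence ABC model the authors start from can be illustrated on a toy corpus. The document sets below are invented (they echo Swanson's famous fish-oil/Raynaud example) and stand in for real concept extraction.

```python
# Hypothetical toy corpus: each document is modeled as a set of concepts.
docs = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "Raynaud's disease"},
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "Raynaud's disease"},
    {"aspirin", "headache"},
]

def abc_candidates(a, corpus):
    """Classic ABC co-occurrence: find C terms reachable from A via a shared
    intermediate B, where A and C never co-occur directly."""
    # B terms: everything co-occurring with A
    b_terms = set().union(*(d - {a} for d in corpus if a in d))
    candidates = {}
    for b in b_terms:
        for d in corpus:
            if b in d and a not in d:
                for c in d - {b}:
                    candidates.setdefault(c, set()).add(b)
    return candidates

links = abc_candidates("fish oil", docs)
```

    Here "Raynaud's disease" is proposed as a hypothesis because it is reached through both intermediates; the supervised AB and BC classifiers in this record replace the raw co-occurrence tests with learned link predictions.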

  14. Modeling Renewable Penetration Using a Network Economic Model

    NASA Astrophysics Data System (ADS)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach for designing energy systems with renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources, and reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible, and models take very little time to build and modify. This paper reports an experiment designing a system with photovoltaic generators and base-load and peaking fossil generators. The level of PV penetration as a function of its price, and the capacities of the fossil generators, were determined using both the network approach and an exact, analytic approach. The two methods agree very closely on the optimal capacities and are nearly identical in terms of annual system costs.
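
    The price/quantity exchange described above can be miniaturized as a merit-order market clearing, where the marginal unit sets the price. The generator costs and capacities below are made-up numbers, not values from the study.

```python
# Hypothetical generators: name -> (marginal cost $/MWh, capacity MW)
generators = {"PV": (0.0, 40.0), "base": (20.0, 60.0), "peak": (45.0, 50.0)}

def clear_price(demand, generators, p_max=200.0, tol=1e-6):
    """Walk the merit order (cheapest first) until cumulative capacity
    covers demand; the marginal generator's cost is the clearing price."""
    served = 0.0
    for cost, cap in sorted(generators.values()):
        served += cap
        if served >= demand - tol:
            return cost
    return p_max  # demand cannot be met below the price cap

price = clear_price(120.0, generators)
```

    With 120 MW of demand, PV (40 MW) and base capacity (60 MW) together fall short, so the peaking unit is marginal and sets the price at its $45/MWh cost.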

  15. Three-dimensional modeling of the cochlea by use of an arc fitting approach.

    PubMed

    Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S

    2016-12-01

    A cochlea modeling approach is presented that allows for a user-defined degree of geometry simplification and automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner because errors are estimated prior to the actual generation, thus minimizing modeling time. The presented technique is therefore well suited for a wide range of applications, including finite element analyses, where geometrical simplifications are often inevitable. The method is presented for n=5 cochleae, which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas, while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Because the volume generation is based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was shown to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
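
    The arc-fitting step at the core of such a method can be illustrated with a standard algebraic (Kåsa) least-squares circle fit, which reconstructs a contour segment with a single arc. The sample points below are synthetic, not cochlear data.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    x^2 + y^2 = c*x + d*y + e, then recover center and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c / 2.0, d / 2.0
    r = np.sqrt(e + cx**2 + cy**2)
    return cx, cy, r

# samples from a quarter arc of a unit circle centered at (1, 2)
t = np.linspace(0.0, np.pi / 2, 30)
cx, cy, r = fit_circle(1 + np.cos(t), 2 + np.sin(t))
```

    Because the fit is linear in its parameters, it is fast enough to run per cross section, which is what makes error estimation before model generation cheap.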

  16. A generative probabilistic model and discriminative extensions for brain lesion segmentation – with application to tumor and stroke

    PubMed Central

    Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-01-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modeling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the generative-discriminative model is among the top-ranking methods in the BRATS evaluation. PMID:26599702

  17. Forward modeling of gravity data using geostatistically generated subsurface density variations

    USGS Publications Warehouse

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
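
    The cell-summation idea described above (discretize the subsurface and accumulate each cell's attraction at every station) can be sketched with a crude point-mass approximation per cell; the geometry, densities, and cell size below are hypothetical, and real codes use prism formulas rather than point masses.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(stations, cells, densities, cell_volume):
    """Vertical gravity anomaly: sum each cell's attraction at each station,
    treating every cell as a point mass (coarse approximation)."""
    gz = np.zeros(len(stations))
    for i, (sx, sy, sz) in enumerate(stations):
        dx = cells[:, 0] - sx
        dy = cells[:, 1] - sy
        dz = cells[:, 2] - sz          # depth positive downward
        r3 = (dx**2 + dy**2 + dz**2) ** 1.5
        gz[i] = G * cell_volume * np.sum(densities * dz / r3)
    return gz * 1e5                    # m/s^2 -> mGal

# one station at the origin; two 10 m cells at 100 m depth with a
# +300 kg/m^3 density contrast
stations = [(0.0, 0.0, 0.0)]
cells = np.array([[0.0, 0.0, 100.0], [50.0, 0.0, 100.0]])
gz = forward_gravity(stations, cells, np.array([300.0, 300.0]), cell_volume=10.0**3)
```

    A stochastic run in the spirit of this record would redraw the `densities` array from a geostatistical simulation for each realization and compare each computed anomaly with the observed one.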

  18. Dynamic Modeling and Grid Interaction of a Tidal and River Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Gevorgian, Vahan; Donegan, James

    This presentation provides a high-level overview of the deployment of a river generator installed in a small system. The turbine dynamics of a river generator, electrical generator, and power converter are modeled in detail. Various simulations can be exercised, and the impact of different control algorithms, failures of power switches, and corresponding impacts can be examined.

  19. Deep Generative Models of Galaxy Images for the Calibration of the Next Generation of Weak Lensing Surveys

    NASA Astrophysics Data System (ADS)

    Lanusse, Francois; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Schneider, Jeff; Poczos, Barnabas

    2017-01-01

    Weak gravitational lensing has long been identified as one of the most powerful probes of the nature of dark energy. As such, weak lensing is at the heart of the next generation of cosmological surveys such as LSST, Euclid, or WFIRST. One particularly critical source of systematic error in these surveys comes from the shape measurement algorithms tasked with estimating galaxy shapes. GREAT3, the latest community challenge to assess the quality of state-of-the-art shape measurement algorithms, demonstrated that all current methods are biased to various degrees and, more importantly, that these biases depend on the details of the galaxy morphologies. These biases can be measured and calibrated by generating mock observations into which a known lensing signal has been introduced and comparing the resulting measurements to the ground truth. Producing these mock observations, however, requires input galaxy images of higher resolution and S/N than the simulated survey, which typically implies acquiring extremely expensive space-based observations. The goal of this work is to train a deep generative model on already available Hubble Space Telescope data, which can then be used to sample new galaxy images conditioned on parameters such as magnitude, size, or redshift, and exhibiting complex morphologies. Such a model allows us to inexpensively produce large sets of realistic images for calibration purposes. We implement a conditional generative model based on state-of-the-art deep learning methods and fit it to deep galaxy images from the COSMOS survey. The quality of the model is assessed by computing an extensive set of galaxy morphology statistics on the generated images. Beyond simple second-moment statistics such as size and ellipticity, we apply more complex statistics specifically designed to be sensitive to disturbed galaxy morphologies. We find excellent agreement between the morphologies of real and model-generated galaxies. Our results suggest that such deep generative models represent a reliable alternative to the acquisition of expensive high-quality observations for generating the calibration data needed by the next generation of weak lensing surveys.

  20. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB® implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
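
    The third generator, Fourier synthesis, can be sketched generically: prescribe a flat amplitude spectrum, draw uniformly random phases, and inverse-FFT back to the time domain. This is a minimal sketch of the technique, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_white_noise(n, sigma=1.0):
    """Synthesize approximately Gaussian white noise from a prescribed
    (flat) amplitude spectrum and uniformly random phases."""
    n_freq = n // 2 + 1
    phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    spectrum = np.exp(1j * phases)   # unit amplitude at every frequency
    spectrum[0] = 0.0                # zero the DC bin -> zero-mean sequence
    x = np.fft.irfft(spectrum, n)
    return sigma * x / x.std()       # rescale to the target standard deviation

noise = fourier_white_noise(601)     # 601 samples, matching the study's length
```

    Because the amplitude spectrum is exactly flat, the sequence's power spectral density is controlled by construction, which is the property the paper credits for the method's consistency.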

  1. Gray correlation analysis and prediction models of living refuse generation in Shanghai city.

    PubMed

    Liu, Gousheng; Yu, Jianguo

    2007-01-01

    A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most design efforts have been based on a rough prediction of MLF without quantitative support. In this paper, based on published data on socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the method of gray correlation coefficients. Several gray models, such as GM(1,1), GIM(1), GPPM(1), and GLPM(1), have been studied, and predicted results are verified with a subsequent residual test. Results show that, among the seven selected factors, consumption of gas, water, and electricity are the three largest factors affecting MLF generation, and GLPM(1) is the best model for predicting MLF generation. With this model, the predicted MLF generation in 2010 in Shanghai will be 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
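
    The baseline GM(1,1) grey model that the paper's variants extend can be sketched as follows: accumulate the series, fit a first-order grey differential equation by least squares, then difference the fitted accumulation to forecast. The toy series below stands in for annual refuse tonnage and is invented for illustration.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1) forecast of the next `steps` values of series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # fit x0(k) = -a*z1(k) + b
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulation
    x0_hat = np.diff(x1_hat, prepend=0.0)              # back to the original scale
    return x0_hat[-steps:]

# made-up, roughly exponential annual series (e.g. million tons of refuse)
pred = gm11_forecast([5.0, 5.5, 6.1, 6.7, 7.4], steps=2)
```

    On a near-exponential series like this one, the model extrapolates continued growth; the paper's residual test would then check whether the fitted errors are small enough to trust the extrapolation.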

  2. Generation Of A Mouse Model For Schwannomatosis

    DTIC Science & Technology

    2010-09-01

    Title: Generation of a Mouse Model for Schwannomatosis. Principal Investigator: Long-Sheng Chang, Ph.D. Contracting Organization: The... Report type: Annual; dates covered: 1 Sep 2009 - 31 Aug 2010. The project tests a hypothesis involving inactivation of both the INI1/SNF5 and NF2 tumor suppressor genes in the formation of schwannomatosis-associated tumors.

  3. Summary of development of 70 MW class model superconducting generator--research and development of superconducting technology for electric power application

    NASA Astrophysics Data System (ADS)

    Oishi, Ikuo; Nishijima, Kenichi

    2002-03-01

    A 70 MW class superconducting model generator was designed, manufactured, and tested from 1988 to 1999 as Phase I of Japan's national project on applying superconducting technologies to electric power apparatus, commissioned by NEDO as part of the New Sunshine Program of AIST and MITI; Phase II is now being carried out by almost the same organization. Through the development of the 70 MW class superconducting model generator, technologies for a 200 MW class pilot generator were established. The world's largest output (79 MW), the world's longest continuous operation (1,500 h), and other satisfactory characteristics were achieved with the 70 MW class model generator, and the key design and manufacturing technologies required for the 200 MW class pilot generator were established. This project contributed to progress in R&D on power apparatus. Super-GM has started the next project (Phase II), which is to develop the key technologies for a larger-capacity, more compact machine and is scheduled from 2000 to 2003. Phase II is the first step toward commercialization of the superconducting generator.

  4. Dynamic Model and Control of a Photovoltaic Generation System using Energetic Macroscopic Representation

    NASA Astrophysics Data System (ADS)

    Solano, Javier; Duarte, José; Vargas, Erwin; Cabrera, Jhon; Jácome, Andrés; Botero, Mónica; Rey, Juan

    2016-10-01

    This paper addresses the Energetic Macroscopic Representation (EMR), modelling, and control of photovoltaic panel (PVP) generation systems for simulation purposes. The model of the PVP considers variations in irradiance and temperature. A maximum power point tracking (MPPT) algorithm is used to control the power converter. A novel EMR is proposed to capture the dynamic model of the PVP under varying irradiance and temperature. The EMR is evaluated through simulations of a PVP generation system.

  5. Next generation initiation techniques

    NASA Technical Reports Server (NTRS)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. 
The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
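
    Of the current-generation techniques listed above, Newtonian relaxation ("nudging") is the simplest to sketch: a relaxation term pulls the model state toward observations while the model integrates forward. The scalar toy model, gain, and step size below are arbitrary illustrations, not an operational configuration.

```python
def nudge(model_rhs, x0, obs, gain, dt, n_steps):
    """Forward-Euler integration of dx/dt = f(x) + G*(obs - x),
    i.e. the model dynamics plus a Newtonian relaxation term."""
    x = float(x0)
    for _ in range(n_steps):
        x += dt * (model_rhs(x) + gain * (obs - x))
    return x

# toy dynamics: decay toward 0; observations say the truth is near 2.0
final = nudge(lambda x: -0.1 * x, x0=0.0, obs=2.0, gain=1.0, dt=0.1, n_steps=500)
```

    The state settles where the model tendency and the nudging term balance (here at obs*G/(G+0.1) ≈ 1.82), illustrating the trade-off between trusting the model dynamics and trusting the data.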

  6. Retrofitting and the mu Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Daniel; Weigand, Timo (SLAC; Stanford U., Phys. Dept.)

    2010-08-26

    One of the challenges of supersymmetry (SUSY) breaking and mediation is generating a {mu} term consistent with the requirements of electro-weak symmetry breaking. The most common approach to the problem is to generate the {mu} term through a SUSY breaking F-term. Often these models produce unacceptably large B{mu} terms as a result. We present an alternate approach, where the {mu} term is generated directly by non-perturbative effects. The same non-perturbative effect also retrofits the model of SUSY breaking in such a way that {mu} is at the same scale as the masses of the Standard Model superpartners. Because the {mu} term is not directly generated by SUSY breaking effects, there is no associated B{mu} problem. These results are demonstrated in a toy model where a stringy instanton generates {mu}.

  7. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    PubMed

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce more accurate forecasts than the 'pure' time-series methods used previously. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but showed at least a 6% error rate in several cases, so efforts were made to reduce the error further. The newly developed hybrid models used a random start generation method to combine the advantages of different time-series methods, which increased forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not improve the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested over varying prediction horizons.

  8. Distributed Generation Market Demand Model | NREL

    Science.gov Websites

    The Distributed Generation Market Demand (dGen) model simulates the potential adoption of distributed energy resources (DERs) by residential, commercial, and industrial entities. The dGen model can help develop deployment forecasts for distributed resources, including sensitivity analyses.

  9. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  10. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
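
    The generative step described here, Gaussian filter responses multiplied by a probabilistically assigned mixer, can be sampled in a few lines. The mixer values and dimensions below are arbitrary choices for illustration, not parameters learned from natural images.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_gsm(n_inputs=1000, n_filters=4, mixers=(0.5, 2.0, 8.0)):
    """Gaussian scale mixture sampling: Gaussian 'filter responses' for each
    input are scaled by one mixer drawn from a discrete set of candidates."""
    g = rng.normal(size=(n_inputs, n_filters))   # local Gaussian structure
    v = rng.choice(mixers, size=(n_inputs, 1))   # mixer assigned per input
    return v * g

x = sample_gsm()
```

    The shared mixer couples the filters within each input and makes the marginals heavy-tailed (excess kurtosis well above the Gaussian value of 0), which is the bottom-up statistic of filter responses that GSM models capture.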

  11. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-12-18

    This paper presents four algorithms to generate random forecast error time series: a truncated-normal distribution model, a state-space-based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization-based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
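
    As a sketch of one of the four algorithms, the ARMA(1,1) recursion below generates an autocorrelated error series of the kind seen between day-ahead forecasts and actuals. The coefficients are placeholders, not values fitted to historical forecast data.

```python
import numpy as np

rng = np.random.default_rng(3)

def arma_errors(n, phi=0.8, theta=0.3, sigma=1.0):
    """ARMA(1,1) series: x[t] = phi*x[t-1] + e[t] + theta*e[t-1],
    producing persistent (autocorrelated) forecast errors."""
    e = rng.normal(0.0, sigma, n + 1)
    x = np.zeros(n)
    x[0] = e[1] + theta * e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t + 1] + theta * e[t]
    return x

err = arma_errors(1000)
```

    Adding a series like this to an actual-load trace yields a synthetic forecast whose errors decay over hours rather than being independent draws, which is what makes the synthetic data suitable for integration studies.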

  12. Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment

    PubMed Central

    Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang

    2005-01-01

    Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094

  13. Implementation of a next-generation electronic nursing records system based on detailed clinical models and integration of clinical practice guidelines.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook

    2013-12-01

    The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.

  14. Real-time simulation of a Doubly-Fed Induction Generator based wind power system on the eMEGASim™ Real-Time Digital Simulator

    NASA Astrophysics Data System (ADS)

    Boakye-Boateng, Nasir Abdulai

    The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems, as a basis for more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimPowerSystems™ DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS™ components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models comprising 65 DFIG-based wind turbines in total; it was developed and tested on OPAL-RT's eMEGASim™ Real-Time Digital Simulator.

  15. Application for managing model-based material properties for simulation-based engineering

    DOEpatents

    Hoffman, Edward L [Alameda, CA]

    2009-03-03

    An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
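
    The loop this record describes, a material-model driver producing a response that a numerical optimizer matches against test data, can be sketched with a hypothetical bilinear stress-strain driver and SciPy's least-squares optimizer. All material values and the driver itself are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def model_driver(strain, props):
    """Hypothetical driver: bilinear elastic/hardening response with a
    fixed yield strain. The property set is (E, H)."""
    E, H = props
    yield_strain = 0.002
    return np.where(strain < yield_strain,
                    E * strain,
                    E * yield_strain + H * (strain - yield_strain))

# synthetic "test data": loading conditions extracted as a strain ramp,
# responses from a known property set plus measurement noise (MPa)
strain = np.linspace(0.0, 0.01, 50)
true_props = (200e3, 5e3)
stress_test = model_driver(strain, true_props) \
    + np.random.default_rng(4).normal(0.0, 1.0, 50)

# numerical optimization module: adjust the property set until the
# driver's model response matches the test data
fit = least_squares(lambda p: model_driver(strain, p) - stress_test,
                    x0=[100e3, 1e3])
```

    The recovered `fit.x` approximates the true property set; in the patented application the driver would be a full constitutive model and the loading conditions would come from parsed test data.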

  16. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  17. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  18. Comparison of Computational, Model and Experimental, Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  19. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation

    PubMed Central

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime

    2017-01-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcomes require simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not suitable for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' need. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy, and can be directly and effectively used for clinical simulation of facial soft tissue change. PMID:29027022

  20. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation.

    PubMed

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J

    2018-04-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcomes require simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not suitable for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' need. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy, and can be directly and effectively used for clinical simulation of facial soft tissue change.

  1. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    PubMed Central

    Jensen, Tue V.; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation. PMID:29182600

  2. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    PubMed

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  3. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to compute the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed based on this calculation model. As an example, a two-staged flux trapping generator is simulated using this code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied, for design purposes, to predict the performance of other flux trapping cascaded flux compression generators with complex structures such as conical stator or conical armature sections.
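
The zero-dimensional circuit idea can be sketched in a few lines: with a collapsing inductance L(t) and resistance R, the circuit equation d(L·I)/dt = -R·I is stepped forward in time, and the resistive term plays the role of the flux-loss correction. All parameter values below are illustrative, not the paper's generator data.

```python
# Zero-dimensional sketch: current amplification from inductance collapse,
# d(L*I)/dt = -R*I, integrated with explicit Euler. Illustrative values only.
L0, Lf = 10e-6, 0.5e-6     # initial/final generator inductance (H)
Lload = 0.1e-6             # fixed load inductance (H)
R = 1e-4                   # circuit dc resistance (ohm), held constant here
I0 = 1.0e3                 # seed current (A)
T = 100e-6                 # armature burn time (s)

steps = 100_000
dt = T / steps
dLdt = (Lf - L0) / T       # constant inductance collapse rate

I = I0
for k in range(steps):
    L = Lload + L0 + dLdt * (k * dt)   # total circuit inductance at time k*dt
    I += dt * (-(dLdt + R) * I / L)    # from d(L*I)/dt = -R*I

ideal_gain = (L0 + Lload) / (Lf + Lload)   # lossless flux-conservation limit
gain = I / I0
```

The resistive loss keeps the simulated gain strictly below the ideal flux-conservation limit, which is exactly the effect the paper's flux conservation coefficients parameterize.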

  4. The Oak Ridge Competitive Electricity Dispatch (ORCED) Model Version 9

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W.; Baek, Young Sun

    The Oak Ridge Competitive Electricity Dispatch (ORCED) model dispatches power plants in a region to meet the electricity demands for any single given year up to 2030. It uses publicly available sources of data describing electric power units such as the National Energy Modeling System and hourly demands from utility submittals to the Federal Energy Regulatory Commission that are projected to a future year. The model simulates a single region of the country for a given year, matching generation to demands and predefined net exports from the region, assuming no transmission constraints within the region. ORCED can calculate a number of key financial and operating parameters for generating units and regional market outputs including average and marginal prices, air emissions, and generation adequacy. By running the model with and without changes such as generation plants, fuel prices, emission costs, plug-in hybrid electric vehicles, distributed generation, or demand response, the marginal impact of these changes can be found.

  5. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps in properly managing the generated solid waste and in developing future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  6. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-02-01

    In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues, such as smoothed scaling factors and the new requirements on OGCM grids arising from the recent trend toward high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve alignment of grid lines with coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to grid generation for regional ocean modeling where a complex land-ocean distribution is present.
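
The reason the generated grids stay orthogonal curvilinear is that conformal maps preserve angles. A quick numerical check, using the simple analytic map f(z) = z + 1/z as a stand-in for a true Schwarz-Christoffel mapping (which needs a dedicated toolbox to evaluate):

```python
import cmath

# Angle preservation under a conformal map, checked by finite differences.
def f(z):
    return z + 1.0 / z      # stand-in analytic (conformal) map

def mapped_angle(z0, dz1, dz2, h=1e-6):
    """Angle between the images of two directions dz1, dz2 at z0."""
    w1 = (f(z0 + h * dz1) - f(z0)) / h
    w2 = (f(z0 + h * dz2) - f(z0)) / h
    return abs(cmath.phase(w2 / w1))

# Orthogonal grid directions stay orthogonal after mapping:
theta = mapped_angle(2.0 + 1.0j, 1.0, 1.0j)
```

The measured angle comes out at pi/2 to within the finite-difference error, which is why grid lines meeting at right angles before the map still do so after it.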

  7. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement a “clean alternative” on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.

  8. Mechanism of the free charge carrier generation in the dielectric breakdown

    NASA Astrophysics Data System (ADS)

    Rahim, N. A. A.; Ranom, R.; Zainuddin, H.

    2017-12-01

    Many studies have been conducted to investigate the effects of environmental, mechanical and electrical stresses on insulators. However, studies on the physical process of the discharge phenomenon leading to breakdown of the insulator surface are lacking and difficult to comprehend. Therefore, this paper analyses the charge carrier generation mechanism that can produce free charge carriers, leading toward surface discharge development. In addition, this paper develops a model of surface discharge based on the charge generation mechanism on the outdoor insulator. The Nernst-Planck theory was used to model the behaviour of the charge carriers, while Poisson's equation was used to determine the distribution of the electric field on the insulator surface. In the modelling of surface discharge on the outdoor insulator, electric-field-dependent molecular ionization was used as the charge generation mechanism. The mathematical model of the surface discharge was solved using the method of lines (MOL). The results from the mathematical model showed that the behaviour of the net space charge density was correlated with the electric field distribution.
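
The method of lines reduces a PDE to a system of ODEs by discretizing only in space. A minimal sketch for a 1D drift-diffusion (Nernst-Planck-type) density equation dn/dt = D d²n/dx² - v dn/dx, with illustrative coefficients and boundary conditions rather than the paper's full model:

```python
import numpy as np

# Method of lines: central differences in space, explicit Euler in time.
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
D, v = 1e-3, 0.05                      # diffusivity and drift velocity

n = np.exp(-((x - 0.3) / 0.05) ** 2)   # initial charge pulse at x = 0.3

dt = 0.2 * dx * dx / D                 # within the explicit stability limit
for _ in range(300):
    d2n = (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2
    dn = (np.roll(n, -1) - np.roll(n, 1)) / (2 * dx)
    n += dt * (D * d2n - v * dn)
    n[0] = n[-1] = 0.0                 # grounded (absorbing) boundaries

peak_x = float(x[np.argmax(n)])        # the pulse drifts in the +x direction
```

After 300 steps the pulse has drifted from x = 0.3 to roughly x = 0.6 while spreading diffusively, the qualitative behaviour a charge packet shows under a driving field.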

  9. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems, insufficient data and a lack of strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps in properly managing the generated solid waste and in developing future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even where data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
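
The ARIMA(6,1,0) structure selected above amounts to an AR(6) model fitted to the once-differenced series. A sketch with a synthetic monthly series (the paper's municipal records are not public), fitting the AR coefficients by plain least squares:

```python
import numpy as np

# ARIMA(6,1,0) sketch: difference once, fit AR(6) by least squares.
rng = np.random.default_rng(1)
t = np.arange(120)
waste = 100 + 0.5 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)

d = np.diff(waste)            # the "I" part: one round of differencing
p = 6                         # the "AR" order
X = np.column_stack([d[p - k - 1 : len(d) - k - 1] for k in range(p)])
y = d[p:]
phi, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR coefficients

rmse = float(np.sqrt(np.mean((y - X @ phi) ** 2)))   # in-sample RMSE
next_diff = float(phi @ d[-1 : -p - 1 : -1])          # forecast next change
forecast_next = float(waste[-1] + next_diff)          # undo the differencing
```

In practice one would use a dedicated ARIMA routine (e.g. the one in statsmodels) to get maximum-likelihood estimates and proper diagnostics; the least-squares version above only illustrates the model structure.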

  10. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    NASA Astrophysics Data System (ADS)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  11. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  12. Universities as Hubs for Next-Generation Networks: A Model for Universities to Spur 21st Century Internet Access and Innovation in Their Communities

    ERIC Educational Resources Information Center

    Lennett, Benjamin; Morris, Sarah J.; Byrum, Greta

    2012-01-01

    Based on a request for information (RFI) submitted to The University Community Next Generation Innovation Project (Gig.U), the paper describes a model for universities to develop next generation broadband infrastructure in their communities. In the authors' view, universities can play a critical role in spurring next generation networks into their…

  13. Learning as a Generative Process

    ERIC Educational Resources Information Center

    Wittrock, M. C.

    2010-01-01

    A cognitive model of human learning with understanding is introduced. Empirical research supporting the model, which is called the generative model, is summarized. The model is used to suggest a way to integrate some of the research in cognitive development, human learning, human abilities, information processing, and aptitude-treatment…

  14. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  15. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
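
A toy version of the contrast the paper draws: a Bayesian learner over a *static* Bernoulli world model, combined with a max decision rule, drives the choice rate to an extreme rather than to the ~70/30 split of probability matching. (The paper's point is that matching emerges only under different, dynamic beliefs about the generative process; this sketch shows only the baseline behaviour.)

```python
import random

# Two-option Bernoulli bandit; Beta(1,1) posteriors; greedy "max rule".
random.seed(0)
p_true = [0.7, 0.3]            # true reward probabilities of the two options
alpha = [1, 1]                 # Beta posterior successes per option
beta = [1, 1]                  # Beta posterior failures per option

choices = []
for _ in range(2000):
    means = [alpha[i] / (alpha[i] + beta[i]) for i in range(2)]
    c = 0 if means[0] >= means[1] else 1      # max rule: pick the higher mean
    reward = 1 if random.random() < p_true[c] else 0
    alpha[c] += reward
    beta[c] += 1 - reward
    choices.append(c)

late_rate = choices[-500:].count(0) / 500     # late-trial rate of choosing option 0
```

With the max rule, the late-trial choice rate is pushed near 0 or 1 rather than hovering near 0.7, so a matching pattern in humans cannot come from this belief/rule combination.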

  16. Alterations in choice behavior by manipulations of world model

    PubMed Central

    Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.

    2010-01-01

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507

  17. Numerical modeling of particle generation from ozone reactions with human-worn clothing in indoor environments

    NASA Astrophysics Data System (ADS)

    Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan

    2015-02-01

    Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) that is present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model was able to predict particle size distributions reasonably well. The failure in the remaining case demonstrated the fundamental limitations of nucleation models. The developed model was used to predict particle generation under various building and airliner-cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins. Those reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
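
The backbone of such indoor-aerosol models is a number balance: a production term from nucleation against first-order losses to ventilation and deposition. A minimal sketch with illustrative coefficients (not the paper's fitted values):

```python
# Number balance dN/dt = J - k_loss * N, stepped with explicit Euler.
J = 1.0e4        # particle production rate (# cm^-3 h^-1), illustrative
k_loss = 2.0     # combined ventilation + deposition rate (h^-1), illustrative

N = 0.0          # number concentration (# cm^-3)
dt = 0.001       # time step (h)
for _ in range(10_000):            # simulate 10 hours
    N += dt * (J - k_loss * N)

steady_state = J / k_loss          # analytic steady-state concentration
```

After many loss time constants the simulated concentration settles at J/k_loss, so the steady UFP burden scales directly with the ozone-driven production rate.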

  18. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  19. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs surfaces with small structures and that the fused DSM generated by our pipeline is accurate and robust.
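
The final fusion step rests on a simple idea: a per-pixel median over several aligned DSMs suppresses gross matching blunders that would wreck a per-pixel mean. A toy 4x4 example with made-up heights (the benchmark DSMs are of course much larger):

```python
import numpy as np

# Median fusion of 5 aligned DSMs, one of which contains a gross blunder.
rng = np.random.default_rng(2)
truth = np.full((4, 4), 100.0)                       # ground height (m)
dsms = np.stack([truth + rng.normal(0, 0.3, (4, 4)) for _ in range(5)])
dsms[0, 1, 1] = 250.0                                # matching blunder in one DSM

fused = np.median(dsms, axis=0)                      # robust fused DSM
```

The 250 m outlier leaves the fused pixel essentially untouched, which is the robustness property that motivates median rather than mean fusion in the pipeline.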

  20. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of Volterra-type model with a nonlinear autoregressive (NAR) component that is a suitable framework for describing the process of action potential (AP) generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation of most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous AP is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
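
The threshold-as-second-input idea can be illustrated with a much simpler autoregressive toy (not the paper's Volterra-kernel model): a leaky accumulator that resets at threshold is exactly linear-in-parameters once the threshold events are supplied as an extra input.

```python
import numpy as np

# A threshold-resetting "neuron" becomes a solvable linear regression once
# the threshold crossings are given to the model as a second input.
rng = np.random.default_rng(3)

def toy_neuron(u):
    """Leaky accumulator that resets (fires) when it crosses threshold."""
    v, out = 0.0, []
    for uk in u:
        v = 0.9 * v + uk
        if v > 1.0:
            v = -0.5        # stereotyped reset: output decouples from input
        out.append(v)
    return np.array(out)

u = rng.uniform(0.0, 0.25, 500)
y = toy_neuron(u)

y_prev = np.concatenate([[0.0], y[:-1]])
pre = 0.9 * y_prev + u                   # pre-reset potential
s = (pre > 1.0).astype(float)            # threshold events: the second input

# Regressors: autoregressive term, exogenous input, and the event input.
X = np.column_stack([(1 - s) * y_prev, (1 - s) * u, s])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
max_err = float(np.max(np.abs(X @ coef - y)))
```

With the event input included the fit is exact (the recovered coefficients are 0.9, 1.0 and -0.5); drop the `s` column and no linear model can reproduce the resets, which mirrors why the standard single-input Volterra model fails across the threshold.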

  1. Coupling LaGrit unstructured mesh generation and model setup with TOUGH2 flow and transport: A case study

    DOE PAGES

    Sentis, Manuel Lorenzo; Gable, Carl W.

    2017-06-15

    There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), assigning material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools will provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, with TOUGH2 to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module Lagrit2Tough2 was developed, which is presented here and will be included in a future release of LaGriT. Thanks to the modular and command-based structure of LaGriT, the alternative method presented here for generating a Voronoi mesh for TOUGH2 is well suited to complex models.
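
The two-point flux approximation mentioned above writes the flux between cells i and j as T_ij(p_i - p_j); this is consistent exactly when the segment between cell generators is orthogonal to their shared face, which holds by construction for Voronoi cells. A 1D sketch of a TPFA pressure solve (illustrative, not TOUGH2's actual assembly):

```python
import numpy as np

# 1D TPFA solve of a steady pressure equation on n uniform cells,
# Dirichlet p=1 at x=0 and p=0 at x=1 via half-cell boundary connections.
n = 10
dx = 1.0 / n
T = 1.0 / dx                  # transmissibility for unit permeability/area

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    for j in (i - 1, i + 1):  # internal two-point connections
        if 0 <= j < n:
            A[i, i] += T
            A[i, j] -= T
A[0, 0] += 2 * T              # half-cell connection to the left boundary
b[0] += 2 * T * 1.0
A[-1, -1] += 2 * T            # half-cell connection to the right boundary

p = np.linalg.solve(A, b)     # recovers the exact linear pressure profile
```

For this geometry the scheme reproduces the exact linear pressure drop at the cell centers, the kind of consistency that a Voronoi grid guarantees for TPFA in higher dimensions.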

  2. TH-CD-202-07: A Methodology for Generating Numerical Phantoms for Radiation Therapy Using Geometric Attribute Distribution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Chen, H; Mutic, S

    Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of the image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
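The core of the GAD approach, learning a shape distribution by principal component analysis and then sampling statistically sound mode weights, can be illustrated on synthetic contours. The training data below are simulated circles, not patient contours, and the sampling scheme is a generic Gaussian truncated at two standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 20 "organ contours", each 50 (x, y) landmark
# points flattened into a 100-vector (real data would come from manually
# extracted radiation therapy contours).
n_cases, n_pts = 20, 50
t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
shapes = np.stack([np.concatenate([(1 + 0.1 * rng.standard_normal()) * np.cos(t),
                                   (1 + 0.1 * rng.standard_normal()) * np.sin(t)])
                   for _ in range(n_cases)])

# Principal component analysis of the shape distribution.
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
std = S / np.sqrt(n_cases - 1)          # per-mode standard deviation

# Generate a new, statistically plausible shape by sampling mode weights
# within the observed distribution (Gaussian, truncated to +/-2 SD).
w = np.clip(rng.standard_normal(len(std)), -2, 2) * std
new_shape = mean_shape + w @ Vt
```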

  3. Coupling LaGrit unstructured mesh generation and model setup with TOUGH2 flow and transport: A case study

    NASA Astrophysics Data System (ADS)

    Sentís, Manuel Lorenzo; Gable, Carl W.

    2017-11-01

    There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refine, de-refine, smooth), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools will provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 (Pruess et al., 1999) to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed, which is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented; thanks to the modular and command-based structure of LaGriT, this method is well suited to generating a mesh for complex models.
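The reason Voronoi control volumes suit a two-point flux approximation is that each cell face bisects, and is orthogonal to, the segment joining the two neighbouring cell centres. A one-dimensional sketch makes the geometry concrete and computes the per-connection centre-to-interface distances of the kind a TOUGH2-style connection list carries; the names are illustrative and this is not the Lagrit2Tough2 output format.

```python
import numpy as np

# Cell centres along a 1-D column; the Voronoi faces fall at the midpoints,
# so the centre-to-centre segment is orthogonal to (and bisected by) each
# face -- the property that makes the two-point flux approximation accurate.
centers = np.array([0.0, 1.0, 2.5, 4.5])
faces = 0.5 * (centers[:-1] + centers[1:])

# Per connection, the distances from each cell centre to the shared
# interface (analogous to the D1/D2 entries of a TOUGH2 connection).
d1 = faces - centers[:-1]
d2 = centers[1:] - faces
connections = list(zip(range(len(faces)), range(1, len(faces) + 1), d1, d2))
```

On a true Voronoi mesh d1 equals d2 for every connection, which is exactly the midpoint property checked below.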

  4. Supplantation versus Generative Models: Implications for Designers of Instructional Text.

    ERIC Educational Resources Information Center

    Smith, Patricia L.

    Two instructional design alternatives are described and discussed: (1) the supplantation model of Ausburn and Ausburn (1978), where learning strategies are built into the instructional materials; and (2) a generative design model, where strategies are "built" into the learner. These contrasting models are proposed as representing the…

  5. Generation of topographic terrain models utilizing synthetic aperture radar and surface level data

    NASA Technical Reports Server (NTRS)

    Imhoff, Marc L. (Inventor)

    1991-01-01

    Topographical terrain models are generated by digitally delineating the boundary of the region under investigation from the data obtained from an airborne synthetic aperture radar image and surface elevation data concurrently acquired either from an airborne instrument or at ground level. A set of coregistered boundary maps thus generated are then digitally combined in three dimensional space with the acquired surface elevation data by means of image processing software stored in a digital computer. The method is particularly applicable for generating terrain models of flooded regions covered entirely or in part by foliage.

  6. Theoretical and simulation analysis of piezoelectric liquid resistance captor filled with pipeline

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Zhigang, Yang; Junwu, Kan; Lisheng; Bo, Yan; Dan, Lu

    2018-03-01

    This paper presents the design of a piezoelectric liquid-resistance energy-capture device. Using the superposition theory of plate deformation, a calculation model is established for the displacement curve of the circular piezoelectric vibrator and its power generation capacity under a concentrated load. The results show that the radius ratio, thickness ratio and Young’s modulus of the circular piezoelectric vibrator have a strong influence on the power generation capacity. When the material of the piezoelectric vibrator is fixed, there exist an optimal radius ratio and thickness ratio that maximize the power generation capacity; a radius ratio or thickness ratio that is too large or too small will reduce the generating capacity, even to zero. In addition, an electromechanical equivalent model is established and analyzed by varying the circuit impedance. The results are consistent with the theoretical simulation results, indicating that the established circuit model faithfully reflects the characteristics of the theoretical model.

  7. An empirical generative framework for computational modeling of language acquisition.

    PubMed

    Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-06-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.

  8. Structural Design Optimization of Doubly-Fed Induction Generators Using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L

    2017-11-13

    A wind turbine with a larger rotor swept area can generate more electricity; however, this increases costs disproportionately for manufacturing, transportation, and installation. This poster presents analytical models for optimizing doubly-fed induction generators (DFIGs), with the objective of reducing the costs and mass of wind turbine drivetrains. The structural design for the induction machine includes models for the casing, stator, rotor, and high-speed shaft developed within the DFIG module in the National Renewable Energy Laboratory's wind turbine sizing tool, GeneratorSE. The mechanical integrity of the machine is verified by examining stresses, structural deflections, and modal properties. The optimization results are then validated using finite element analysis (FEA). The results suggest that our analytical model correlates with the FEA in some areas, such as radial deflection, differing by less than 20 percent, but the analytical model requires further development for axial deflections, torsional deflections, and stress calculations.

  9. Simulation for Grid Connected Wind Turbines with Fluctuating

    NASA Astrophysics Data System (ADS)

    Ye, Ying; Fu, Yang; Wei, Shurong

    This paper establishes a complete dynamic model of a wind turbine generator system, comprising a wind speed model and a DFIG wind turbine model. A simulation based on these mathematical models is built in MATLAB. The performance characteristics of doubly-fed induction generators (DFIGs) connected to the power grid are studied under a three-phase ground fault and under disturbances from gust and mixed wind. The wind farm has a capacity of 9 MW and consists of doubly-fed wind generators (DFIGs). Simulation results demonstrate that a three-phase ground fault on the grid side has only a small effect on the stability of the doubly-fed wind generators. However, fluctuations of the wind speed, which is the power source, have a large impact on their stability. The results also show that if the two disturbances occur at the same time, the situation becomes much more severe.

  10. Using Model-Based Systems Engineering To Provide Artifacts for NASA Project Life-Cycle and Technical Reviews

    NASA Technical Reports Server (NTRS)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    The ability of systems engineers to use model-based systems engineering (MBSE) to generate self-consistent, up-to-date systems engineering products for project life-cycle and technical reviews is an important aspect for the continued and accelerated acceptance of MBSE. Currently, many review products are generated using labor-intensive, error-prone approaches based on documents, spreadsheets, and chart sets; a promised benefit of MBSE is that users will experience reductions in inconsistencies and errors. This work examines features of SysML that can be used to generate systems engineering products. Model elements, relationships, tables, and diagrams are identified for a large number of the typical systems engineering artifacts. A SysML system model can contain and generate most systems engineering products to a significant extent and this paper provides a guide on how to use MBSE to generate products for project life-cycle and technical reviews. The use of MBSE can reduce the schedule impact usually experienced for review preparation, as in many cases the review products can be auto-generated directly from the system model. These approaches are useful to systems engineers, project managers, review board members, and other key project stakeholders.

  11. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.

    PubMed

    Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen

    2010-12-21

    There is a huge demand on bioinformaticians to provide their biologists with user-friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. Here we present MOLGENIS, a generic, open-source software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. 
Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
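The 'model-driven' pattern, a small declarative model expanded by code-generating templates, can be sketched in miniature. The XML schema and SQL type mapping below are invented for illustration; they are not the actual MOLGENIS model language or generator suite.

```python
import xml.etree.ElementTree as ET

# A toy data model: one entity with typed fields. A real MOLGENIS model
# would also describe user interfaces, relations, and exchange formats.
model_xml = """
<model>
  <entity name="sample">
    <field name="id" type="int"/>
    <field name="tissue" type="string"/>
  </entity>
</model>
"""

SQL_TYPES = {"int": "INTEGER", "string": "VARCHAR(255)"}  # illustrative mapping

def generate_sql(xml_text):
    """Expand each <entity> in the model into a CREATE TABLE statement."""
    root = ET.fromstring(xml_text)
    statements = []
    for entity in root.findall("entity"):
        cols = ", ".join(f"{f.get('name')} {SQL_TYPES[f.get('type')]}"
                         for f in entity.findall("field"))
        statements.append(f"CREATE TABLE {entity.get('name')} ({cols});")
    return "\n".join(statements)

print(generate_sql(model_xml))
# CREATE TABLE sample (id INTEGER, tissue VARCHAR(255));
```

The point of the pattern is that fixing a bug in the template, or adding a field to the model, regenerates every downstream artifact consistently.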

  12. 78 FR 58492 - Generator Verification Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ... Control Functions), MOD-027-1 (Verification of Models and Data for Turbine/Governor and Load Control or...), MOD-027-1 (Verification of Models and Data for Turbine/Governor and Load Control or Active Power... Category B and C contingencies, as required by wind generators in Order No. 661, or that those generators...

  13. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

    PubMed Central

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262

  14. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
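Koopman spectral properties can be identified directly from data, as the abstract notes, without explicit knowledge of the generative model. One standard technique for doing so is dynamic mode decomposition (DMD); the sketch below is a generic exact-DMD eigenvalue computation, not the authors' specific model forms or distance measures.

```python
import numpy as np

def dmd_eigs(X, r=None):
    """Approximate Koopman (DMD) eigenvalues from a snapshot matrix X (d x m).

    Columns of X are successive state snapshots; r optionally truncates the
    SVD rank.
    """
    X1, X2 = X[:, :-1], X[:, 1:]                    # time-shifted snapshot pairs
    U, S, Vt = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, S, Vt = U[:, :r], S[:r], Vt[:r]
    # Project the one-step propagator onto the leading POD modes.
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / S)
    return np.linalg.eigvals(A_tilde)

# Example: a pure oscillation is linear in (cos, sin) coordinates, so its
# DMD eigenvalues should lie on the unit circle.
t = np.arange(200) * 0.1
X = np.vstack([np.cos(t), np.sin(t)])
eigs = dmd_eigs(X)
```

For classification or clustering, the eigenvalues (and modes) extracted this way would feed the distance computations between model forms described in the abstract.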

  15. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    PubMed

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  16. A Structural Model Decomposition Framework for Hybrid Systems Diagnosis

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2015-01-01

    Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.

  17. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln U. Random variables from the conditional Weibull distribution are generated using the inverse transform method, via the conditional survival function exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β). Random variables from the normal distribution are generated using a standard normal transformation and the inverse transform method (Appendix 3, Distributions Supported by the Model).
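The inverse transform method amounts to inverting the target CDF at a uniform variate. A minimal sketch for the (unconditional) exponential and Weibull cases, with generic parameter names rather than the report's notation:

```python
import numpy as np

rng = np.random.default_rng(42)

def exponential_it(lam, size):
    """Exponential(rate=lam) via inverse transform."""
    u = rng.random(size)            # (1) generate U ~ U(0, 1)
    # (2) invert F(x) = 1 - exp(-lam*x); U and 1-U are equal in distribution.
    return -np.log(u) / lam

def weibull_it(shape, scale, size):
    """Weibull(shape, scale) via inverse transform of its CDF."""
    u = rng.random(size)
    return scale * (-np.log(u)) ** (1.0 / shape)

x = exponential_it(2.0, 100_000)    # sample mean should approach 1/2
```

The conditional variants in the report follow the same recipe applied to the conditional survival function instead of the unconditional CDF.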

  18. Modeling of Thermoelectric Generator Power Characteristics for Motorcycle-Type Engines

    NASA Astrophysics Data System (ADS)

    Osipkov, Alexey; Poshekhonov, Roman; Arutyunyan, Georgy; Basov, Andrey; Safonov, Roman

    2017-10-01

    Thermoelectric generation in vehicles such as motorcycles, all-terrain vehicles, and snowmobiles opens the possibility of additional electrical energy generation by means of exhaust heat utilization. This is beneficial because replacing the mechanical generator used in such vehicles with a more powerful one in cases of electrical power deficiency is impossible. This paper proposes a calculation model for the thermoelectric generator (TEG) operational characteristics of the low-capacity internal combustion engines used in these vehicles. Two TEG structures are considered: (1) TEG with air cooling and (2) TEG with water cooling. Modeling consists of two calculation stages. In the first stage, the heat exchange coefficients of the hot and cold exchangers are determined using computational fluid dynamics. In the second stage, the TEG operational characteristics are modeled based on the nonlinear equations of the heat transfer and power balance. On the basis of the modeling results, the dependence of the TEG's major operating characteristics (such as the electrical power generated by the TEG and its efficiency and mass) on operating conditions or design parameters is determined. For example, the electrical power generated by a TEG for a Yamaha WR450F motorcycle engine with a volume of 0.449 × 10⁻³ m³ was calculated to be as much as 100 W. Use of the TEG arrangements proposed is justified by the additional electrical power generation for small-capacity vehicles, without the need for internal combustion engine redesign.

  19. The impacts of renewable energy policies on renewable energy sources for electricity generating capacity

    NASA Astrophysics Data System (ADS)

    Koo, Bryan Bonsuk

    Electricity generation from non-hydro renewable sources has increased rapidly in the last decade. For example, Renewable Energy Sources for Electricity (RES-E) generating capacity in the U.S. almost doubled over the three years from 2009 to 2012. Multiple papers point out that RES-E policies implemented by state governments play a crucial role in increasing RES-E generation or capacity. This study examines the effects of state RES-E policies on state RES-E generating capacity, using a fixed effects model. The research employs panel data from the 50 states and the District of Columbia for the period 1990 to 2011, and uses a two-stage approach to control for endogeneity embedded in the policies adopted by state governments, and a Prais-Winsten estimator to correct autocorrelation in the panel data. The analysis finds that Renewable Portfolio Standards (RPS) and Net-metering are significantly and positively associated with RES-E generating capacity, but neither Public Benefit Funds nor the Mandatory Green Power Option has a statistically significant relation to RES-E generating capacity. Results of the two-stage model are quite different from models which do not employ predicted policy variables. Analysis using non-predicted variables finds that RPS and Net-metering policy are statistically insignificant and negatively associated with RES-E generating capacity. On the other hand, Green Energy Purchasing policy is insignificant in the two-stage model, but significant in the model without predicted values.
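The fixed effects model used in the study removes time-invariant state heterogeneity. A minimal sketch of the within (demeaning) estimator on simulated panel data; the data and the single policy regressor below are invented for illustration and do not reproduce the study's two-stage design.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated panel: 51 "states" x 22 "years", with state fixed effects
# and one policy variable (e.g. an RPS stringency index).
n_states, n_years, beta_true = 51, 22, 1.5
state = np.repeat(np.arange(n_states), n_years)
alpha = rng.normal(0, 3, n_states)[state]            # state fixed effects
policy = rng.random(n_states * n_years)
y = alpha + beta_true * policy + rng.normal(0, 0.5, n_states * n_years)

def demean(v):
    """Within transformation: subtract each state's mean from its observations."""
    means = np.bincount(state, v) / np.bincount(state)
    return v - means[state]

# OLS on demeaned data recovers beta while the fixed effects drop out.
beta_hat = demean(policy) @ demean(y) / (demean(policy) @ demean(policy))
```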

  20. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    NASA Astrophysics Data System (ADS)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  1. Generating Three-Dimensional Surface Models of Solid Objects from Multiple Projections.

    DTIC Science & Technology

    1982-10-01

    volume descriptions. The surface models are composed of curved, topologically rectangular, parametric patches. The data required to define these patches... geometry directly from image data. This method generates 3D surface descriptions of only those parts of the object that are illuminated by the projected... objects. Generation of such models inherently requires the acquisition and analysis of 3D surface data. In this context, acquisition refers to the

  2. The Sortie-Generation Model System. Volume 5. Maintenance Subsystem

    DTIC Science & Technology

    1981-09-01

    The Sortie-Generation Model System, Volume V: Maintenance Subsystem. September 1981. Robert S. Greenberg. Prepared pursuant to Department of... Performing Org. Report Number: LMI Task L102.

  3. Test Generator for MATLAB Simulations

    NASA Technical Reports Server (NTRS)

    Henry, Joel

    2011-01-01

    MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.

  4. Technical note: A linear model for predicting δ13Cprotein.

    PubMed

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = 0.78 × δ13Cco - 0.58 × Δ13Cap-co - 4.7), with a high correlation (r = 0.93, r² = 0.86, P < 0.01) and an experimentally generated error term of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as is routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
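The reported two-term regression is simple enough to apply directly. The function below transcribes the published coefficients (0.78, -0.58, intercept -4.7); the stated prediction error of about ±1.9‰ applies to any individual predicted value.

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Predict delta13C of dietary protein (per mil) from bone collagen
    delta13C and the apatite-collagen spacing, using the paper's two-term
    linear model. Reported error: about +/-1.9 per mil per prediction.
    """
    return 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7

# Example: collagen at -20.0 per mil with a 4.0 per mil apatite-collagen
# spacing (illustrative input values).
estimate = predict_d13c_protein(-20.0, 4.0)
```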

  5. Transient Control of Synchronous Machine Active and Reactive Power in Micro-grid Power Systems

    NASA Astrophysics Data System (ADS)

    Weber, Luke G.

    There are two main topics associated with this dissertation. The first is to investigate phase-to-neutral fault current magnitude occurring in generators with multiple zero-sequence current sources. The second is to design, model, and tune a linear control system for operating a micro-grid in the event of a separation from the electric power system. In the former case, detailed generator, AC8B excitation system, and four-wire electric power system models are constructed. Where available, manufacturers data is used to validate the generator and exciter models. A gain-delay with frequency droop control is used to model an internal combustion engine and governor. The four wire system is connected through a transformer impedance to an infinite bus. Phase-to-neutral faults are imposed on the system, and fault magnitudes analyzed against three-phase faults to gauge their severity. In the latter case, a balanced three-phase system is assumed. The model structure from the former case - but using data for a different generator - is incorporated with a model for an energy storage device and a net load model to form a micro-grid. The primary control model for the energy storage device has a high level of detail, as does the energy storage device plant model in describing the LC filter and transformer. A gain-delay battery and inverter model is used at the front end. The net load model is intended to be the difference between renewable energy sources and load within a micro-grid system that has separated from the grid. Given the variability of both renewable generation and load, frequency and voltage stability are not guaranteed. This work is an attempt to model components of a proposed micro-grid system at the University of Wisconsin Milwaukee, and design, model, and tune a linear control system for operation in the event of a separation from the electric power system. 
    The control module is responsible for management of frequency and active power, and of voltage and reactive power. The scope of this work is to:
    • develop a mathematical model for a salient-pole, two-damper-winding synchronous generator with d-axis saturation suitable for transient analysis,
    • develop a mathematical model for a voltage regulator and excitation system using the IEEE AC8B voltage regulator and excitation system template,
    • develop mathematical models for an energy storage primary control system, LC filter, and transformer suitable for transient analysis,
    • combine the generator and energy storage models in a micro-grid context,
    • develop mathematical models for electric system components in the stationary abc frame and rotating dq reference frame,
    • develop a secondary control network for dispatch of micro-grid assets,
    • establish micro-grid limits of stable operation for step changes in load and power commands based on simulations of model data assuming net load on the micro-grid, and
    • use the generator and electric system models to assess generator current magnitude during phase-to-ground faults.
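A minimal sketch of the primary frequency droop characteristic that the dissertation's gain-delay governor model uses: output power moves in proportion to the frequency deviation. The parameter values below are illustrative, not taken from the source.

```python
# Primary frequency droop, per-unit. A 5% droop means a 5% frequency
# deviation moves the power command by 1.0 pu. Values are illustrative.

def droop_power(f_hz, f_nom=60.0, p_ref=0.5, droop=0.05):
    """Active power command (pu) from a droop characteristic."""
    return p_ref + (f_nom - f_hz) / (droop * f_nom)

# At nominal frequency the unit holds its reference; under-frequency
# raises the command.
p_nominal = droop_power(60.0)   # 0.5 pu
p_low = droop_power(59.7)       # about 0.6 pu
```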

  6. Building energy analysis tool

    DOEpatents

    Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars

    2016-04-12

    A building energy analysis system includes a building component library configured to store a plurality of building components, a modeling tool configured to access the building component library and create a building model of a building under analysis using building spatial data and using selected building components of the plurality of building components stored in the building component library, a building analysis engine configured to operate the building model and generate a baseline energy model of the building under analysis and further configured to apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models, and a recommendation tool configured to assess the one or more optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.
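The workflow the patent describes (baseline model, apply energy conservation measures, recommend the best substitutions) can be sketched in a few lines. All names and numbers here are hypothetical, not from the patent.

```python
# Illustrative sketch of the patented workflow: generate a baseline energy
# figure, apply energy conservation measures (ECMs), and rank them by
# modeled savings. ECM names and saving fractions are hypothetical.

def apply_ecm(baseline_kwh, ecm):
    """Annual energy use after applying one ECM (fractional saving)."""
    return baseline_kwh * (1.0 - ecm["saving_fraction"])

def recommend(baseline_kwh, ecms):
    """Rank ECMs by modeled annual savings, best first."""
    scored = [(ecm["name"], baseline_kwh - apply_ecm(baseline_kwh, ecm))
              for ecm in ecms]
    return sorted(scored, key=lambda t: t[1], reverse=True)

ecms = [{"name": "LED lighting", "saving_fraction": 0.12},
        {"name": "roof insulation", "saving_fraction": 0.07}]
ranked = recommend(100_000, ecms)
```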

  7. Next generation of weather generators on web service framework

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.

    2016-12-01

    A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather for the future. It stochastically generates several tens to hundreds of realizations based on statistical analysis. These realizations are essential inputs to crop models for simulating crop growth and yield; moreover, they can contribute to analyzing the uncertainty that weather introduces into crop development stages and to decision support systems for, e.g., water and fertilizer management. Performing crop modeling requires multidisciplinary skills, which has limited the use of a weather generator to the research group that developed it and poses a barrier for newcomers. To improve the procedures for running weather generators and to standardize the way realizations are acquired, we implemented a framework that provides weather generators as web services supporting service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. The hierarchical data preparation processes required by the weather generator are also implemented as web services and seamlessly wired together. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications, reduce the workload of analysts performing iterative data preparation, and handle the legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
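As a sketch of what a client request might look like, the snippet below builds an SOS 2.0 GetObservation URL in the key-value-pair binding. The endpoint, offering, and property names are hypothetical; a real service advertises its own in its capabilities document.

```python
# Form an OGC SOS 2.0 GetObservation request URL (KVP binding) to fetch
# generated weather realizations. Endpoint and identifiers are hypothetical.
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property,
                            t_begin, t_end):
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "temporalFilter": f"om:phenomenonTime,{t_begin}/{t_end}",
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url("https://example.org/sos",
                              "wgen_realizations", "rainfall",
                              "2020-01-01", "2020-12-31")
```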

  8. Developing Formal Object-oriented Requirements Specifications: A Model, Tool and Technique.

    ERIC Educational Resources Information Center

    Jackson, Robert B.; And Others

    1995-01-01

    Presents a formal object-oriented specification model (OSS) for computer software system development that is supported by a tool that automatically generates a prototype from an object-oriented analysis model (OSA) instance, lets the user examine the prototype, and permits the user to refine the OSA model instance to generate a requirements…

  9. A MECHANISTIC MODEL FOR MERCURY CAPTURE WITH IN-SITU GENERATED TITANIA PARTICLES: ROLE OF WATER VAPOR

    EPA Science Inventory

    A mechanistic model to predict the capture of gas phase mercury species using in-situ generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model [1] for photochemical reactions that accounts for the rates of...

  10. Generative Computer-Assisted Instruction and Artificial Intelligence. Report No. 5.

    ERIC Educational Resources Information Center

    Sinnott, Loraine T.

    This paper reviews the state-of-the-art in generative computer-assisted instruction and artificial intelligence. It divides relevant research into three areas of instructional modeling: models of the subject matter; models of the learner's state of knowledge; and models of teaching strategies. Within these areas, work sponsored by Advanced…

  11. DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OGDEN DM; KIRCH NW

    2007-10-31

    This document develops a mechanistic model of supernatant hydroxide ion depletion by carbon dioxide absorption. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly developed mechanistic model with previously applied empirical hydroxide depletion equations is also performed.
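The depletion mechanism can be sketched as a simple mass balance: each mole of absorbed CO2 consumes two moles of hydroxide (CO2 + 2OH- → CO3^2- + H2O). The absorption flux below is a hypothetical constant, not the report's rate expression.

```python
# Forward-Euler mass balance for hydroxide depletion by CO2 absorption.
# The CO2 flux is a hypothetical constant; the report derives its own rate.

def deplete_hydroxide(oh_molar, co2_flux_mol_per_day, days, dt=1.0):
    """Each mole of absorbed CO2 consumes 2 moles of OH-."""
    history = [oh_molar]
    t = 0.0
    while t < days:
        oh_molar = max(0.0, oh_molar - 2.0 * co2_flux_mol_per_day * dt)
        history.append(oh_molar)
        t += dt
    return history

h = deplete_hydroxide(1.0, 0.001, 100)   # 1.0 M initial, 100 days
```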

  12. Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Comiskey, J. G.

    1979-01-01

    This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.

  13. Reliability models: the influence of model specification in generation expansion planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stremel, J.P.

    1982-10-01

    This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.

  14. A random spatial network model based on elementary postulates

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
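The property that all spanning trees of a grid are equally likely can be illustrated with the Aldous-Broder random walk, a standard way to sample a uniform spanning tree. This stdlib-only sketch is an illustration of that sampling idea, not the authors' own algorithm.

```python
# Sample a uniform spanning tree of a small grid with the Aldous-Broder
# walk: the first-entry edge into each unvisited node joins the tree.
import random

def uniform_spanning_tree(rows, cols, seed=0):
    rng = random.Random(seed)
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    current = rng.choice(nodes)
    visited = {current}
    edges = set()
    while len(visited) < len(nodes):
        r, c = current
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols]
        nxt = rng.choice(nbrs)
        if nxt not in visited:        # first entry: edge joins the tree
            visited.add(nxt)
            edges.add(frozenset((current, nxt)))
        current = nxt
    return edges

tree = uniform_spanning_tree(4, 4)    # 16 nodes -> 15 tree edges
```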

  15. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  16. The influence of initial and surface boundary conditions on a model-generated January climatology

    NASA Technical Reports Server (NTRS)

    Wu, K. F.; Spar, J.

    1981-01-01

    The influence on a model-generated January climate of various surface boundary conditions, as well as initial conditions, was studied using the GISS coarse-mesh climate model. Four experiments - two with water planets, one with flat continents, and one with mountains - were used to investigate the effects of initial conditions and the thermal and dynamical effects of the surface on the model-generated climate. However, a climatological mean zonally symmetric sea surface temperature was used over the model oceans in all four runs. Moreover, zero ground wetness and uniform ground albedo (except for snow) were used in the last experiments.

  17. Numerical investigation of wake-collapse internal waves generated by a submerged moving body

    NASA Astrophysics Data System (ADS)

    Liang, Jianjun; Du, Tao; Huang, Weigen; He, Mingxia

    2017-07-01

    The state-of-the-art OpenFOAM technology is used to develop a numerical model that can be devoted to numerically investigating wake-collapse internal waves generated by a submerged moving body. The model incorporates body geometry, propeller forcing, and stratification magnitude of seawater. The generation mechanism and wave properties are discussed based on model results. It was found that the generation of the wave and its properties depend greatly on the body speed. Only when that speed exceeds some critical value, between 1.5 and 4.5 m/s, can the moving body generate wake-collapse internal waves, and with increases of this speed, the time of generation advances and wave amplitude increases. The generated wake-collapse internal waves are confirmed to have characteristics of the second baroclinic mode. As the body speed increases, wave amplitude and length increase and its waveform tends to take on a regular sinusoidal shape. For three linearly temperature-stratified profiles examined, the weaker the stratification, the stronger the wake-collapse internal wave.

  18. Testing the vulnerability and scar models of self-esteem and depressive symptoms from adolescence to middle adulthood and across generations.

    PubMed

    Steiger, Andrea E; Fend, Helmut A; Allemand, Mathias

    2015-02-01

    The vulnerability model states that low self-esteem functions as a predictor for the development of depressive symptoms, whereas the scar model assumes that these symptoms leave scars in individuals, resulting in lower self-esteem. Both models have received empirical support; however, they have only been tested within individuals and not across generations (i.e., between family members). Thus, we tested the scope of these competing models by (a) investigating whether the effects hold from adolescence to middle adulthood (long-term vulnerability and scar effects), (b) whether the effects hold across generations (intergenerational vulnerability and scar effects), and (c) whether intergenerational effects are mediated by parental self-esteem and depressive symptoms and parent-child discord. We used longitudinal data from adolescence to middle adulthood (N = 1,359) and from Generation 1 adolescents (G1) to Generation 2 adolescents (G2) (N = 572 parent-child pairs). Results from latent cross-lagged regression analyses demonstrated that both adolescent self-esteem and depressive symptoms were prospectively related to adult self-esteem and depressive symptoms 3 decades later. That is, both the vulnerability and scar models are valid over decades, with stronger effects for the vulnerability model. Across generations, we found a substantial direct transmission effect from G1 to G2 adolescent depressive symptoms but no evidence for the proposed intergenerational vulnerability and scar effects or for any of the proposed mediating mechanisms. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  19. Virtual reality and consciousness inference in dreaming

    PubMed Central

    Hobson, J. Allan; Hong, Charles C.-H.; Friston, Karl J.

    2014-01-01

    This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that – through experience-dependent plasticity – becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep – and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain’s generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis – evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research. PMID:25346710

  1. Temporal Cyber Attack Detection.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, Joey Burton; Draelos, Timothy J.; Galiardi, Meghan

    Rigorous characterization of the performance and generalization ability of cyber defense systems is extremely difficult, making it hard to gauge uncertainty, and thus, confidence. This difficulty largely stems from a lack of labeled attack data that fully explores the potential adversarial space. Currently, performance of cyber defense systems is typically evaluated in a qualitative manner by manually inspecting the results of the system on live data and adjusting as needed. Additionally, machine learning has shown promise in deriving models that automatically learn indicators of compromise that are more robust than analyst-derived detectors. However, to generate these models, most algorithms require large amounts of labeled data (i.e., examples of attacks). Algorithms that do not require annotated data to derive models are similarly at a disadvantage, because labeled data is still necessary when evaluating performance. In this work, we explore the use of temporal generative models to learn cyber attack graph representations and automatically generate data for experimentation and evaluation. Training and evaluating cyber systems and machine learning models requires significant, annotated data, which is typically collected and labeled by hand for one-off experiments. Automatically generating such data helps derive/evaluate detection models and ensures reproducibility of results. Experimentally, we demonstrate the efficacy of generative sequence analysis techniques on learning the structure of attack graphs, based on a realistic example. These derived models can then be used to generate more data. Additionally, we provide a roadmap for future research efforts in this area.
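A minimal illustration of the idea of learning a temporal model from attack sequences and sampling synthetic data: here a first-order Markov chain stands in for the paper's generative sequence models, and the attack-step labels are hypothetical.

```python
# Fit a first-order Markov chain to observed attack-step sequences, then
# sample synthetic sequences. A toy stand-in for the paper's models;
# step labels are hypothetical.
import random
from collections import defaultdict

def fit_markov(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: dict(nxt) for a, nxt in counts.items()}

def sample(model, start, length, seed=0):
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        nxt = model.get(seq[-1])
        if not nxt:                      # absorbing state: stop
            break
        states, weights = zip(*nxt.items())
        seq.append(rng.choices(states, weights=weights)[0])
    return seq

train = [["recon", "exploit", "escalate", "exfiltrate"],
         ["recon", "exploit", "exfiltrate"]]
model = fit_markov(train)
synthetic = sample(model, "recon", 4)
```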

  2. Experimental investigation of powerful pulse current generators based on capacitive storage and explosive magnetic generators

    NASA Astrophysics Data System (ADS)

    Shurupov, A. V.; Zavalova, V. E.; Kozlov, A. V.; Shurupov, M. A.; Povareshkin, M. N.; Kozlov, A. A.; Shurupova, N. P.

    2018-01-01

    Experimental models of powerful microsecond-duration current pulse generators, based on explosive magnetic generators and a voltage impulse generator, have been developed to subject energy facilities to electromagnetic pulse effects and verify their stability. Sharpening of the voltage pulse is carried out with an electro-explosive current interrupter made of copper wires with diameters of 80 and 120 μm. Experimental results from the investigation of these models are presented. Voltage fronts of about 100 ns and electric field strengths of 800 kV/m were recorded.

  3. Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?

    PubMed Central

    Reichert, David P.; Seriès, Peggy; Storkey, Amos J.

    2013-01-01

    Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain. PMID:23874177

  4. Comparison of L-system applications towards plant modelling, music rendering and score generation using visual language programming

    NASA Astrophysics Data System (ADS)

    Lim, Chen Kim; Tan, Kian Lam; Yusran, Hazwanni; Suppramaniam, Vicknesh

    2017-10-01

    Visual language, or visual representation, has been used in the past few years to express knowledge graphically. One important graphical element is the fractal, and the L-System is a mathematically based grammatical model for modelling cell development and plant topology. From the plant model, L-Systems can be interpreted as music sound and score. In this paper, LSound, a Visual Language Programming (VLP) framework, has been developed to model plants as music sound and to generate music scores, and vice versa. This research has three objectives: (i) to expand the grammar dictionary of L-Systems music based on visual programming, (ii) to design and produce a user-friendly, icon-based visual language framework for L-Systems musical score generation that helps beginners in the musical field, and (iii) to generate music scores from plant models and vice versa using the L-Systems method. This research follows a four-phase methodology in which the plant is first modelled, the music is then interpreted, followed by the output of music sound through MIDI, and finally the score is generated. LSound is compared to other existing applications in terms of its capability to model plants, render music, and generate sound. LSound is a flexible framework in which the plant can be easily altered through arrow-based programming and the music score can be altered through music symbols and notes. This work encourages non-experts to understand L-Systems and music hand-in-hand.
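The string rewriting at the core of any L-System framework like LSound can be shown in a few lines: an axiom is repeatedly rewritten by production rules, and the resulting symbols could then be mapped to notes. The rules below are the classic algae system, not LSound's own grammar.

```python
# Minimal L-System rewriter. Rules here are Lindenmayer's algae system
# (A -> AB, B -> A); symbols could be mapped to musical notes downstream.

def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}
generations = [lsystem("A", rules, n) for n in range(6)]
# String lengths grow as Fibonacci numbers: 1, 2, 3, 5, 8, 13.
```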

  5. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  6. A New Model that Generates Lotka's Law.

    ERIC Educational Resources Information Center

    Huber, John C.

    2002-01-01

    Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
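A hedged sketch of the kind of process the paper studies: authors with random career durations publish at a Poisson rate, and the resulting publication counts decay steeply, qualitatively like Lotka's inverse-square law. The parameter values are illustrative, not Huber's fitted values.

```python
# Simulate publication counts: career durations are exponential, and the
# number of papers in a career is Poisson(rate * career). Parameters are
# illustrative, not fitted values from the paper.
import math
import random

def simulate_publication_counts(n_authors, papers_per_year=0.5,
                                mean_career_years=10.0, seed=42):
    rng = random.Random(seed)
    counts = []
    for _ in range(n_authors):
        career = rng.expovariate(1.0 / mean_career_years)
        lam = papers_per_year * career
        # Knuth's method for sampling a Poisson(lam) variate.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while p > threshold:
            p *= rng.random()
            k += 1
        counts.append(max(0, k - 1))
    return counts

counts = simulate_publication_counts(5000)
```

Because the Poisson mean is itself exponentially distributed, the marginal count distribution is geometric, so the frequency of authors falls off monotonically with productivity.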

  7. A simple stochastic weather generator for ecological modeling

    Treesearch

    A.G. Birt; M.R. Valdez-Vivas; R.M. Feldman; C.W. Lafon; D. Cairns; R.N. Coulson; M. Tchakerian; W. Xi; Jim Guldin

    2010-01-01

    Stochastic weather generators are useful tools for exploring the relationship between organisms and their environment. This paper describes a simple weather generator that can be used in ecological modeling projects. We provide a detailed description of methodology, and links to full C++ source code (http://weathergen.sourceforge.net) required to implement or modify...
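A toy version of such a generator, in the same spirit as the paper's C++ code but not derived from it: a two-state Markov chain decides wet/dry days, and rainfall depth and temperature are drawn conditional on the state. All parameter values are illustrative; a real generator fits them from station records.

```python
# Minimal daily stochastic weather generator: Markov-chain occurrence of
# wet days, exponential rainfall depths, state-dependent temperatures.
# All parameters are illustrative, not fitted values.
import random

def generate_daily_weather(n_days, p_wet_after_dry=0.25,
                           p_wet_after_wet=0.6, seed=1):
    rng = random.Random(seed)
    wet = False
    days = []
    for _ in range(n_days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p
        rain_mm = rng.expovariate(1.0 / 8.0) if wet else 0.0
        temp_c = rng.gauss(12.0 if wet else 16.0, 3.0)
        days.append({"rain_mm": rain_mm, "temp_c": temp_c})
    return days

series = generate_daily_weather(365)   # one synthetic year
```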

  8. Using the Kaleidoscope Career Model to Examine Generational Differences in Work Attitudes

    ERIC Educational Resources Information Center

    Sullivan, Sherry E.; Forret, Monica L.; Carraher, Shawn M.; Mainiero, Lisa A.

    2009-01-01

    Purpose: The purpose of this paper is to examine, utilising the Kaleidoscope Career Model, whether members of the Baby Boom generation and Generation X differ in their needs for authenticity, balance, and challenge. Design/methodology/approach: Survey data were obtained from 982 professionals located across the USA. Correlations, t-tests, and…

  9. Reflections on Wittrock's Generative Model of Learning: A Motivation Perspective

    ERIC Educational Resources Information Center

    Anderman, Eric M.

    2010-01-01

    In this article, I examine developments in research on achievement motivation and comment on how those developments are reflected in Wittrock's generative model of learning. Specifically, I focus on the roles of prior knowledge, the generation of knowledge, and beliefs about ability. Examples from Wittrock's theory and from current motivational…

  10. An efficient and scalable graph modeling approach for capturing information at different levels in next generation sequencing reads

    PubMed Central

    2013-01-01

    Background: Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. 
    Results: Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. 
    Conclusions: Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
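The basic object the paper coarsens is an overlap graph: reads are nodes, and a suffix-prefix overlap of sufficient length is an edge. The sketch below builds such a graph naively; the coarsening scheme itself, and the reads used, are not from the paper.

```python
# Build a toy overlap graph: nodes are reads, edges are suffix-prefix
# overlaps of at least `min_olap` bases. O(n^2 * L) brute force, for
# illustration only; the example reads are hypothetical.

def overlap_len(a, b, min_olap):
    """Longest suffix of `a` equal to a prefix of `b` (>= min_olap)."""
    for k in range(min(len(a), len(b)), min_olap - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def overlap_graph(reads, min_olap=3):
    edges = {}
    for i, a in enumerate(reads):
        for j, b in enumerate(reads):
            if i != j:
                k = overlap_len(a, b, min_olap)
                if k:
                    edges[(i, j)] = k
    return edges

reads = ["ATGGCC", "GGCCTA", "CCTAAC"]
g = overlap_graph(reads)
```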

  11. A simple topography-driven, calibration-free runoff generation model

    NASA Astrophysics Data System (ADS)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remain the focus of much hydrological research. In this study, we created a new method to estimate runoff generation - the HAND-based Storage Capacity curve (HSC) - which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. 
Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
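    The core HSC idea can be sketched in a few lines: treat each cell's root-zone storage capacity as proportional to its HAND value, saturate cells from low HAND upward as basin storage rises, and route rain falling on saturated cells to runoff. The HAND values and proportionality constant below are hypothetical; the actual model derives the storage-capacity curve from the catchment's full HAND distribution.

```python
import numpy as np

def saturated_fraction(hand, storage, c=1.0):
    """Fraction of cells whose storage capacity (c * HAND) is filled
    by the current basin storage: a minimal HSC-style sketch."""
    capacity = c * np.asarray(hand)           # per-cell storage capacity
    return float(np.mean(capacity <= storage))

def runoff(rainfall, hand, storage, c=1.0):
    """Saturation-excess runoff: rain on saturated cells runs off."""
    return rainfall * saturated_fraction(hand, storage, c)

hand = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # HAND values (m), hypothetical
print(saturated_fraction(hand, storage=2.0))  # three of five cells saturated
```

    Because the saturated-area fraction is an explicit by-product, the same function supports the kind of saturated-area mapping the study validates against.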

  12. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    The study of reservoir deterministic optimal operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecasted inflow was treated as a variable in a short-term reservoir optimal operation model for reducing operational risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and an extreme value theory-genetic algorithm (EVT-GA) was then proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, the model can derive schedules with different assurance rates, offering more flexible options for decision makers; the highest assurance rate reached 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
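    The VaR concept used here can be illustrated with a plain empirical quantile; the paper fits an extreme-value tail (the EVT part of EVT-GA) rather than using raw samples, and the exponential error distribution below is purely illustrative.

```python
import numpy as np

def value_at_risk(losses, confidence=0.95):
    """Loss level that is not exceeded with the given confidence,
    estimated here as an empirical quantile (extreme-value tail
    fitting would replace this step in the paper's method)."""
    return float(np.quantile(losses, confidence))

rng = np.random.default_rng(0)
shortfall = rng.exponential(scale=5.0, size=10_000)  # hypothetical MWh errors
var95 = value_at_risk(shortfall, 0.95)               # schedule loss bound
```

    A schedule whose reserve covers `var95` then has roughly a 95% assurance rate in this toy setting.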

  13. Simulation of Ectopic Pacemakers in the Heart: Multiple Ectopic Beats Generated by Reentry inside Fibrotic Regions

    PubMed Central

    Gouvêa de Barros, Bruno; Weber dos Santos, Rodrigo; Alonso, Sergio

    2015-01-01

    The inclusion of nonconducting media, mimicking cardiac fibrosis, in two models of cardiac tissue produces the formation of ectopic beats. The fraction of nonconducting media in comparison with the fraction of healthy myocytes, together with the topological distribution of cells, determines the probability of ectopic beat generation. First, a detailed subcellular microscopic model that accounts for the microstructure of the cardiac tissue is constructed and employed for the numerical simulation of action potential propagation. Next, an equivalent discrete model is implemented, which permits a faster integration of the equations. This discrete model is a simplified version of the microscopic model that maintains the distribution of connections between cells. Both models produce similar results when describing action potential propagation in homogeneous tissue; however, they differ slightly in the generation of ectopic beats in heterogeneous tissue. Nevertheless, both models present the generation of reentry inside fibrotic tissues. This kind of reentry restricted to microfibrosis regions can result in the formation of ectopic pacemakers, that is, regions that generate a series of ectopic stimuli at a fast pacing rate. In turn, such activity has been linked to the triggering of fibrillation in the atria and in the ventricles in clinical and animal studies. PMID:26583127

  14. Coupling of electromagnetic and structural dynamics for a wind turbine generator

    NASA Astrophysics Data System (ADS)

    Matzke, D.; Rick, S.; Hollas, S.; Schelenz, R.; Jacobs, G.; Hameyer, K.

    2016-09-01

    This contribution presents a model interface of a wind turbine generator to represent the reciprocal effects between the mechanical and the electromagnetic system. To this end, a multi-body-simulation (MBS) model in Simpack is set up and coupled with a quasi-static electromagnetic (EM) model of the generator in Matlab/Simulink via co-simulation. Due to a lack of data regarding the structural properties of the generator, the modal properties of the MBS model are fitted with respect to results of an experimental modal analysis (EMA) on the reference generator. The method used and the results of this approach are presented in this paper. The MBS model and the interface are set up in such a way that the EM forces can be applied to the structure and the response of the structure can be fed back to the EM model. The results of this co-simulation clearly show an influence of the feedback of the mechanical response, which is mainly damping in the torsional degree of freedom and effects due to eccentricity in the radial direction. The accuracy of these results will be validated via test bench measurements and presented in future work. Furthermore, it is suggested that the EM model be extended in future work so that transient effects are represented.
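    The coupling pattern described, EM forces applied to the structure and the structural response fed back to the EM model, amounts to a fixed-step co-simulation loop. The sketch below uses a one-degree-of-freedom torsional structure and an invented quasi-static torque law; all parameters are illustrative, not those of the reference generator.

```python
import numpy as np

# Minimal fixed-step co-simulation sketch (hypothetical parameters):
# the EM model supplies a torque from the mechanical state, and the
# mechanical model integrates the response that is fed back to it.
J, k, c = 100.0, 5e4, 50.0        # inertia, torsional stiffness, damping

def em_torque(theta, omega, t):
    """Quasi-static EM model: drive torque minus a speed-dependent term."""
    return 1000.0 * np.sin(2 * np.pi * 1.0 * t) - 10.0 * omega

def step_mechanical(theta, omega, torque, dt):
    """Explicit-Euler update of the one-DOF torsional structure."""
    alpha = (torque - k * theta - c * omega) / J
    return theta + dt * omega, omega + dt * alpha

theta = omega = 0.0
dt = 1e-4
for i in range(10_000):                      # 1 s of coupled simulation
    tau = em_torque(theta, omega, i * dt)    # EM step uses mechanical state
    theta, omega = step_mechanical(theta, omega, tau, dt)
```

    In a real co-simulation each side advances its own solver over the macro step; the exchange of torque and state shown here is the essential loop.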

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sentis, Manuel Lorenzo; Gable, Carl W.

    There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control-volume discretization that uses a two-point flux approximation, for example, is most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, with TOUGH2 to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed; it is presented here and will be included in a future release of LaGriT. Thanks to the modular, command-based structure of LaGriT, the method presented here is well suited to generating meshes for complex models.
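    The shape of the data exchange can be sketched without LaGriT itself: TOUGH2's integral-finite-difference MESH file lists cell volumes (ELEME) and, for each connection, the two centre-to-interface distances and the interface area (CONNE). The writer below handles only a 1-D column, whose regular cells are trivially Voronoi, and uses a simplified record layout rather than TOUGH2's exact fixed columns.

```python
# Sketch of emitting a TOUGH2-style MESH file (ELEME and CONNE blocks)
# for a 1-D column of cells. A regular grid is a trivial Voronoi mesh,
# so the two-point flux approximation between cell centres is exact.
# The record layout is simplified, not the exact TOUGH2 fixed columns
# that Lagrit2Tough2 writes.
def write_mesh(n, dz=10.0, area=1.0, path="MESH"):
    with open(path, "w") as f:
        f.write("ELEME\n")
        for i in range(n):
            f.write(f"E{i:04d}  {area * dz:10.4e}\n")          # cell volume
        f.write("CONNE\n")
        for i in range(n - 1):
            # centre-to-interface distances and the interface area
            f.write(f"E{i:04d}E{i + 1:04d}  "
                    f"{dz / 2:8.3f}{dz / 2:8.3f}{area:8.3f}\n")

write_mesh(4)
```

    On a true Voronoi mesh these centre-to-interface distances are exactly what makes the two-point flux approximation consistent.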

  16. Temperature and petroleum generation history of the Wilcox Formation, Louisiana

    USGS Publications Warehouse

    Pitman, Janet K.; Rowan, Elisabeth

    2012-01-01

    A one-dimensional petroleum system modeling study of Paleogene source rocks in Louisiana was undertaken in order to characterize their thermal history and to establish the timing and extent of petroleum generation. The focus of the modeling study was the Paleocene and Eocene Wilcox Formation, which contains the youngest source rock interval in the Gulf Coast Province. Stratigraphic input to the models included thicknesses and ages of deposition, lithologies, amounts and ages of erosion, and ages for periods of nondeposition. Oil-generation potential of the Wilcox Formation was modeled using an initial total organic carbon content of 2 weight percent and an initial hydrogen index of 261 milligrams of hydrocarbon per gram of total organic carbon. Isothermal, hydrous-pyrolysis kinetics determined experimentally were used to simulate oil generation from coal, which is the primary source of oil in Eocene rocks. Model simulations indicate that generation of oil commenced in the Wilcox Formation over a fairly wide age range, from 37 million years ago to the present day. Differences in maturity with respect to oil generation occur across the Lower Cretaceous shelf edge. Source rocks that are thermally immature and have not generated oil (depths less than about 5,000 feet) lie updip and north of the shelf edge; source rocks that have generated all of their oil and are overmature (depths greater than about 13,000 feet) are present downdip and south of the shelf edge. High rates of sediment deposition coupled with increased accommodation space at the Cretaceous shelf margin led to deep burial of Cretaceous and Tertiary source rocks and, in turn, rapid generation of petroleum and, ultimately, cracking of oil to gas.
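    The kinetic calculation behind such simulations can be sketched as a first-order Arrhenius rate integrated along the burial temperature history; the frequency factor and activation energy below are generic illustrative values, not the hydrous-pyrolysis kinetics used in the study.

```python
import math

# Sketch of first-order source-rock kinetics: the fraction of the
# generative potential converted (transformation ratio) along a burial
# temperature history. A and Ea are illustrative, not the study's values.
R = 8.314                                  # gas constant, J/(mol K)
A, Ea = 1e14, 218e3                        # 1/s and J/mol, hypothetical

def transformation_ratio(temps_C, dt_myr):
    dt_s = dt_myr * 3.15e13                # million years -> seconds
    integral = sum(A * math.exp(-Ea / (R * (T + 273.15))) * dt_s
                   for T in temps_C)
    return 1.0 - math.exp(-integral)

# burial heating from 60 to 160 degrees C over 50 Myr in 1-Myr steps
history = [60 + 2 * i for i in range(51)]
tr = transformation_ratio(history, dt_myr=1.0)
```

    With these numbers the transformation ratio stays near zero until burial temperatures pass roughly 120-140°C, mirroring the immature/overmature split across the shelf edge described above.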

  17. Models for the transient stability of conventional power generating stations connected to low inertia systems

    NASA Astrophysics Data System (ADS)

    Zarifakis, Marios; Coffey, William T.; Kalmykov, Yuri P.; Titov, Sergei V.

    2017-06-01

    An ever-increasing requirement exists to integrate greater amounts of electrical energy from renewable sources, especially wind turbines and solar photovoltaic installations, and recent experience on the island of Ireland demonstrates that this requirement influences the behaviour of conventional generating stations. One observation is the change in the electrical power output of synchronous generators following a transient disturbance, especially their oscillatory behaviour, accompanied by similar oscillatory behaviour of the grid frequency; both become more pronounced with reducing grid inertia. This behaviour cannot be reproduced with existing mathematical models, indicating that understanding the behaviour of synchronous generators subjected to various disturbances, especially in a system with low inertia, requires a new modelling technique. Thus, two models of a generating station based on a double pendulum, described by a system of coupled nonlinear differential equations and suitable for analysis of its stability, are presented, corresponding to infinite or finite grid inertia. Formal analytic solutions of the equations of motion are given and compared with numerical solutions. In particular, the new finite-grid model will allow one to identify limitations to the operational range of the synchronous generators used in conventional power generation. It will also help to identify limits such as the allowable Rate of Change of Frequency, which is currently set to ± 0.5 Hz/s and is a major factor in describing the volatility of a grid, and to identify requirements for the total inertia necessary, which is currently provided by conventional power generators only, thus allowing one to maximise the usage of grid-connected non-synchronous generators, e.g., wind turbines and solar photovoltaic installations.
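    The finite-grid idea can be caricatured as two coupled swing equations: a generator rotor swinging against a finite-inertia grid, which is the "double pendulum" structure in miniature. Parameters are illustrative per-unit values, not those of the paper's model.

```python
import numpy as np

# A generator rotor (angle d1, speed w1) coupled to a finite-inertia
# grid (d2, w2) through a synchronising torque. All values per-unit.
H1, H2 = 3.0, 10.0         # generator and grid inertia constants (s)
K = 1.5                    # synchronising (coupling) coefficient
D = 0.05                   # damping
Pm, Pl = 0.8, 0.8          # mechanical input and grid load

def derivs(y):
    d1, w1, d2, w2 = y
    p12 = K * np.sin(d1 - d2)                 # power over the coupling
    dw1 = (Pm - p12 - D * w1) / (2 * H1)
    dw2 = (p12 - Pl - D * w2) / (2 * H2)
    return np.array([w1, dw1, w2, dw2])

y = np.array([0.6, 0.0, 0.0, 0.0])            # post-disturbance state
dt = 1e-3
for _ in range(20_000):                       # 20 s, explicit Euler
    y = y + dt * derivs(y)
```

    With finite H2 the grid angle itself responds to the disturbance, which is exactly the effect an infinite-bus model cannot reproduce.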

  18. Domain modeling and grid generation for multi-block structured grids with application to aerodynamic and hydrodynamic configurations

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.

    1992-01-01

    About five years ago, a joint development was started of a flow simulation system for engine-airframe integration studies on propeller as well as jet aircraft. The initial system was based on the Euler equations and made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and grid generator can also be applied successfully for hydrodynamic configurations. An overview is given about the main aspects of both domain modelling and grid generation.

  19. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will be first simulated using the validated models then tried experimentally with hardware available at NASA. Testing and simulation of a 2KW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  20. Evaluation of gravitational gradients generated by Earth's crustal structures

    NASA Astrophysics Data System (ADS)

    Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad

    2013-02-01

    Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented in the manuscript. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of the Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments and the upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.

  1. Integrated Mode Choice, Small Aircraft Demand, and Airport Operations Model User's Guide

    NASA Technical Reports Server (NTRS)

    Yackovetsky, Robert E. (Technical Monitor); Dollyhigh, Samuel M.

    2004-01-01

    A mode choice model that generates on-demand air travel forecasts at a set of GA airports, based on changes in economic characteristics, vehicle performance characteristics such as speed and cost, and demographic trends, has been integrated with a model that generates itinerant aircraft operations by airplane category at a set of 3227 airports. Numerous intermediate outputs can be generated, such as the number of additional trips diverted from automobiles and scheduled air service by the improved performance and cost of on-demand air vehicles. The total number of diverted passenger miles is also available. From these results the number of new aircraft needed to service the increased demand can be calculated. Output from the models discussed is in the format needed to generate the origin and destination traffic flow between the 3227 airports based on solutions to a gravity model.
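    A production-constrained gravity model of the kind referred to can be sketched directly; the productions, attractions and distances below are hypothetical, not the report's calibrated values.

```python
import numpy as np

# Minimal gravity-model sketch: trips between airports i and j scale
# with productions P_i and attractions A_j and fall off with distance.
# Rows are then rescaled so each airport's outbound trips sum to P_i
# (a singly constrained model). All numbers are hypothetical.
def gravity_trips(P, A, dist, beta=2.0):
    T = np.outer(P, A) / dist ** beta
    np.fill_diagonal(T, 0.0)                  # no intra-airport trips
    return T * (P / T.sum(axis=1))[:, None]   # enforce row totals = P_i

P = np.array([100.0, 50.0, 30.0])             # daily trip productions
A = np.array([80.0, 60.0, 40.0])              # attractions
d = np.array([[1, 200, 400], [200, 1, 300], [400, 300, 1.0]])  # miles
T = gravity_trips(P, A, d)                    # origin-destination flows
```

    Calibrated gravity models typically iterate row and column scaling (Furness balancing); the single row scaling here keeps the sketch short.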

  2. Assessing CO2 Mitigation Options Utilizing Detailed Electricity Characteristics and Including Renewable Generation

    NASA Astrophysics Data System (ADS)

    Bensaida, K.; Alie, Colin; Elkamel, A.; Almansoori, A.

    2017-08-01

    This paper presents a novel techno-economic optimization model for assessing the effectiveness of CO2 mitigation options for the electricity generation sub-sector, including renewable energy generation. The optimization problem was formulated as a MINLP model using the GAMS modeling system. The model seeks to minimize power generation costs under CO2 emission constraints by dispatching power from units with low CO2 emission intensity. The model considers the detailed operation of the electricity system to effectively assess the performance of GHG mitigation strategies, and it integrates load balancing, carbon capture and carbon taxes as methods for reducing CO2 emissions. Two case studies are discussed to analyze the benefits and challenges of the CO2 reduction methods in the electricity system. The proposed mitigation options would not only benefit the environment but also improve the marginal cost of producing energy, which represents an advantage for stakeholders.
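    The dispatch-from-low-intensity-units idea reduces, in caricature, to merit-order loading with a carbon tax added to each unit's marginal cost. The unit data are hypothetical, and the greedy loop ignores the network and unit-commitment constraints that the full MINLP handles.

```python
# Sketch of CO2-aware dispatch: rank units by marginal cost plus a
# carbon tax on their emission intensity, then load them in merit order.
def dispatch(units, demand, carbon_tax=0.0):
    """units: list of (name, capacity_MW, cost_$/MWh, tCO2/MWh)."""
    order = sorted(units, key=lambda u: u[2] + carbon_tax * u[3])
    out, remaining = {}, demand
    for name, cap, cost, ef in order:
        p = min(cap, remaining)               # load cheapest units first
        out[name] = p
        remaining -= p
    return out

units = [("coal", 500, 25, 0.95), ("gas", 400, 40, 0.40),
         ("wind", 200, 5, 0.0)]
print(dispatch(units, demand=800, carbon_tax=0.0))
print(dispatch(units, demand=800, carbon_tax=50.0))   # gas displaces coal
```

    Raising the tax reorders the merit list, which is the basic mechanism by which a carbon price shifts output toward low-intensity units.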

  3. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system (BMS). In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through temperature-dependent physicochemical parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and a temperature range of -25 °C to 45 °C.
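    The iterative coupling through temperature-dependent parameters can be sketched with a deliberately simple stand-in: a resistive electrical model whose internal resistance depends on temperature, an ohmic heat-generation term, and a lumped thermal model. All parameters are invented; the paper's electrochemical sub-model is far more detailed.

```python
import math

# Sketch of the electro-thermal loop: at each step the electrical model
# yields heat generation, the thermal model updates the core temperature,
# and the temperature-dependent resistance feeds back. Values invented.
def internal_resistance(T):
    """Arrhenius-style temperature dependence (assumed form)."""
    return 2e-3 * math.exp(1500.0 * (1 / (T + 273.15) - 1 / 298.15))

def simulate(I=50.0, T_amb=25.0, steps=600, dt=1.0, m_cp=900.0, hA=1.5):
    T = T_amb
    for _ in range(steps):
        R = internal_resistance(T)            # parameter feedback
        q_gen = I ** 2 * R                    # ohmic heat (W)
        T += dt * (q_gen - hA * (T - T_amb)) / m_cp
    return T

T_core = simulate()                           # core temperature after 10 min
```

    Replacing the resistive model with an electrochemical one changes the heat-generation term but not the structure of the loop.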

  4. Trip generation characteristics of special generators

    DOT National Transportation Integrated Search

    2010-03-01

    Special generators are introduced in the sequential four-step modeling procedure to represent certain types of facilities whose trip generation characteristics are not fully captured by the standard trip generation module. They are also used in the t...

  5. Generation capacity expansion planning in deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak

    With the increasing demand for electric power in the context of deregulated electricity markets, good strategic planning for the growth of the power system is critical for the future. There is a need to build new resources in the form of generation plants and transmission lines while considering the effects of these new resources on power system operations, market economics and the long-term dynamics of the economy. In deregulation, the exercise of generation planning has undergone a paradigm shift. The first stage of generation planning is now undertaken by individual investors. These investors see investment in generation capacity as a growing business opportunity because of rising market prices. Therefore, the main objective of such a planning exercise, carried out by individual investors, is typically that of long-term profit maximization. This thesis presents modeling frameworks for generation capacity expansion planning applicable to independent investor firms in the context of power industry deregulation. These modeling frameworks address various technical and financing issues within the process of power system planning. The proposed modeling frameworks consider the long-term decision making process of investor firms and the discrete nature of generation capacity addition, and they incorporate transmission network modeling. Studies have been carried out to examine the impact of the optimal investment plans on transmission network loadings in the long run by integrating the generation capacity expansion planning framework within a modified IEEE 30-bus transmission system network. The work assesses the importance of arriving at an optimal IRR at which the firm's profit maximization objective attains an extremum value. The mathematical model is further improved to incorporate binary variables for discrete unit sizes, and subsequently to include the detailed transmission network representation.
The proposed models are novel in the sense that the planning horizon is split into plan sub-periods so as to minimize the overall risks associated with long-term plan models, particularly in the context of deregulation.

  6. Model for Increasing the Power Obtained from a Thermoelectric Generator Module

    NASA Astrophysics Data System (ADS)

    Huang, Gia-Yeh; Hsu, Cheng-Ting; Yao, Da-Jeng

    2014-06-01

    We have developed a model for finding the most efficient way of increasing the power obtained from a thermoelectric generator (TEG) module under a variety of operating conditions and limitations. The model is based on both thermoelectric principles and thermal resistance circuits, because a TEG converts heat into electricity in accordance with these two theories. It is essential to take thermal contact resistance into account when estimating power generation. Thermal contact resistance causes overestimation of the measured temperature difference between the hot and cold sides of a TEG in calculation of the theoretical power generated, i.e. the theoretical power is larger than the experimental power. The ratio of the experimental open-loop voltage to the measured temperature difference, the effective Seebeck coefficient, can be used to estimate the thermal contact resistance in the model. The ratio of the effective Seebeck coefficient to the theoretical Seebeck coefficient, the Seebeck coefficient ratio, represents the contact conditions. From this ratio, a relationship between performance and different variables can be developed. The measured power generated by a TEG module (TMH400302055; Wise Life Technology, Taiwan) is consistent with the result obtained by use of the model; the relative deviation is 10%. Use of this model to evaluate the most efficient means of increasing the generated power reveals that the TEG module generates 0.14 W when the temperature difference is 25°C and the Seebeck coefficient ratio is 0.4. Several methods can be used to triple the amount of power generated. For example, increasing the temperature difference to 43°C generates 0.41 W; improving the Seebeck coefficient ratio to 0.65 increases the power to 0.39 W; simultaneously increasing the temperature difference to 34°C and improving the Seebeck coefficient ratio to 0.5 increases the power to 0.41 W. Choice of the appropriate method depends on the limitations of the system, the cost, and the environment.
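    The abstract's numbers are roughly self-consistent with matched-load TEG power scaling as the square of the product of the Seebeck coefficient ratio s and the temperature difference, P proportional to (s·ΔT)²; that scaling is an assumption here, used only to check the reported options against the 0.14 W baseline.

```python
# Hedged check of the reported options, assuming P scales as (s * dT)^2
# relative to the stated baseline of 0.14 W at dT = 25 C and s = 0.4.
def teg_power(dT, s, p_ref=0.14, dT_ref=25.0, s_ref=0.4):
    """Scale the reported baseline under the assumed quadratic law."""
    return p_ref * (s * dT / (s_ref * dT_ref)) ** 2

print(round(teg_power(43, 0.40), 2))   # ~0.41 W: raise the temperature difference
print(round(teg_power(25, 0.65), 2))   # ~0.37 W: improve the contact ratio
print(round(teg_power(34, 0.50), 2))   # ~0.40 W: do both moderately
```

    All three reproduce the abstract's figures to within about 0.02 W, which supports the quadratic reading of the model's trade-offs.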

  7. A trajectory generation and system characterization model for cislunar low-thrust spacecraft. Volume 2: Technical manual

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David J.; Pinon, Elfego, III; Oconnor, Brendan M.; Bilby, Curt R.

    1990-01-01

    The documentation of the Trajectory Generation and System Characterization Model for the Cislunar Low-Thrust Spacecraft is presented in Technical and User's Manuals. The system characteristics and trajectories of low thrust nuclear electric propulsion spacecraft can be generated through the use of multiple system technology models coupled with a high fidelity trajectory generation routine. The Earth to Moon trajectories utilize near Earth orbital plane alignment, midcourse control dependent upon the spacecraft's Jacobian constant, and capture to target orbit utilizing velocity matching algorithms. The trajectory generation is performed in a perturbed two-body equinoctial formulation and the restricted three-body formulation. A single control is determined by the user for the interactive midcourse portion of the trajectory. The full spacecraft system characteristics and trajectory are provided as output.

  8. Simulation of dense amorphous polymers by generating representative atomistic models

    NASA Astrophysics Data System (ADS)

    Curcó, David; Alemán, Carlos

    2003-08-01

    A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures for Monte Carlo and molecular dynamics simulations, but they are also suitable for the direct evaluation of physical properties. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method, we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm3), poly(L,D-lactic) acid (ρ=1.25 g/cm3), and polyglycolic acid (ρ=1.50 g/cm3). The method successfully generated representative packings for such dense systems using minimal computational resources.

  9. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  10. Generate an Argument: An Instructional Model

    ERIC Educational Resources Information Center

    Sampson, Victor; Grooms, Jonathon

    2010-01-01

    The Generate an Argument instructional model was designed to engage students in scientific argumentation. By using this model, students develop complex reasoning and critical-thinking skills, understand the nature and development of scientific knowledge, and improve their communication skills (Duschl and Osborne 2002). This article describes the…

  11. Generation of transgenic mouse model using PTTG as an oncogene.

    PubMed

    Kakar, Sham S; Kakar, Cohin

    2015-01-01

    The close physiological similarity between the mouse and human has provided tools to understanding the biological function of particular genes in vivo by introduction or deletion of a gene of interest. Using a mouse as a model has provided a wealth of resources, knowledge, and technology, helping scientists to understand the biological functions, translocation, trafficking, and interaction of a candidate gene with other intracellular molecules, transcriptional regulation, posttranslational modification, and discovery of novel signaling pathways for a particular gene. Most importantly, the generation of the mouse model for a specific human disease has provided a powerful tool to understand the etiology of a disease and discovery of novel therapeutics. This chapter describes in detail the step-by-step generation of the transgenic mouse model, which can be helpful in guiding new investigators in developing successful models. For practical purposes, we will describe the generation of a mouse model using pituitary tumor transforming gene (PTTG) as the candidate gene of interest.

  12. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve computational efficiency in virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
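    Kung's method itself is compact enough to sketch: stack the impulse response into a Hankel matrix, truncate its SVD at the desired order, and read the state-space matrices off the observability and controllability factors. The toy impulse response below stands in for a measured HRIR.

```python
import numpy as np

# Sketch of Kung's SVD-based realisation: recover a low-order (A, B, C)
# state-space model from an impulse response. The decaying-cosine
# "impulse response" is a toy second-order signal, not measured data.
def kung_realization(h, order):
    n = len(h) // 2
    H = np.array([h[i + 1:i + 1 + n] for i in range(n)])   # Hankel of h[1:]
    U, s, Vt = np.linalg.svd(H)
    sr = np.sqrt(s[:order])
    Obs = U[:, :order] * sr                   # observability factor
    Con = (Vt[:order].T * sr).T               # controllability factor
    A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]    # shift-invariance of Obs
    B = Con[:, :1]                            # first controllability column
    C = Obs[:1, :]                            # first observability row
    return A, B, C, h[0]                      # h[0] is the feedthrough

t = np.arange(40)
h = (0.9 ** t) * np.cos(0.3 * t)              # toy 2nd-order response
A, B, C, d = kung_realization(h, order=2)
h_hat = [d] + [(C @ np.linalg.matrix_power(A, k) @ B)[0, 0]
               for k in range(len(h) - 1)]    # reconstructed response
```

    Because the toy signal is exactly second order, the order-2 realisation reproduces it to numerical precision; for measured HRIRs the truncated singular values control the approximation error.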

  13. Distributed state-space generation of discrete-state stochastic models

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David

    1995-01-01

    High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations as well as on a distributed memory multi-computer.
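    The on-the-fly partitioning problem can be illustrated with a hash-based owner function: each discovered state is sent to the processor that owns it, and only the owner stores and expands it. The toy two-counter model and the sequential simulation of message passing are stand-ins for a real Petri net and a real network.

```python
from collections import deque

# Sketch of distributed state-space generation with hash partitioning.
# successors() defines a toy model: two counters, each bounded by 3.
def successors(state):
    a, b = state
    nxt = []
    if a < 3: nxt.append((a + 1, b))
    if b < 3: nxt.append((a, b + 1))
    return nxt

def owner(state, nprocs):
    return hash(state) % nprocs               # on-the-fly partition

def generate(nprocs=4):
    seen = [set() for _ in range(nprocs)]     # per-processor state stores
    queues = [deque() for _ in range(nprocs)] # per-processor inboxes
    init = (0, 0)
    queues[owner(init, nprocs)].append(init)
    while any(queues):                        # simulate message passing
        for p in range(nprocs):
            while queues[p]:
                s = queues[p].popleft()
                if s in seen[p]:
                    continue
                seen[p].add(s)
                for nxt in successors(s):     # forward to the owner
                    queues[owner(nxt, nprocs)].append(nxt)
    return seen

parts = generate()
total = sum(len(s) for s in parts)            # 16 reachable states
```

    A good partition function balances the `seen` sets while keeping cross-processor messages (the forwarded successors) cheap, which is exactly the trade-off the paper studies.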

  14. Generative Models of Segregation: Investigating Model-Generated Patterns of Residential Segregation by Ethnicity and Socioeconomic Status

    PubMed Central

    Fossett, Mark

    2011-01-01

    This paper considers the potential for using agent models to explore theories of residential segregation in urban areas. Results of generative experiments conducted using an agent-based simulation of segregation dynamics document that varying a small number of model parameters representing constructs from urban-ecological theories of segregation can generate a wide range of qualitatively distinct and substantively interesting segregation patterns. The results suggest how complex, macro-level patterns of residential segregation can arise from a small set of simple micro-level social dynamics operating within particular urban-demographic contexts. The promise and current limitations of agent simulation studies are noted and optimism is expressed regarding the potential for such studies to engage and contribute to the broader research literature on residential segregation. PMID:21379372
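    A minimal Schelling-style experiment conveys the generative flavour: agents of two groups relocate to empty cells whenever their share of like neighbours falls below a tolerance, and macro-level clustering emerges from that micro-rule. Grid size, tolerance and vacancy rate are illustrative, not the cited model's parameters.

```python
import random

# Minimal Schelling-style sketch: two groups on a grid; agents move to
# empty cells when the share of like neighbours drops below a tolerance.
random.seed(3)
N, EMPTY, TOL = 20, 0.2, 0.5

grid = [[0 if random.random() < EMPTY else random.choice([1, 2])
         for _ in range(N)] for _ in range(N)]
n1 = sum(row.count(1) for row in grid)        # group sizes are conserved
n2 = sum(row.count(2) for row in grid)

def neighbours(i, j):
    return [grid[i + di][j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < N and 0 <= j + dj < N]

def unhappy(i, j):
    nb = [v for v in neighbours(i, j) if v]
    return bool(nb) and sum(v == grid[i][j] for v in nb) / len(nb) < TOL

for _ in range(30):                           # relocation sweeps
    movers = [(i, j) for i in range(N) for j in range(N)
              if grid[i][j] and unhappy(i, j)]
    empties = [(i, j) for i in range(N) for j in range(N) if not grid[i][j]]
    random.shuffle(empties)
    for (i, j), (x, y) in zip(movers, empties):
        grid[x][y], grid[i][j] = grid[i][j], 0

shares = [sum(v == grid[i][j] for v in nb) / len(nb)
          for i in range(N) for j in range(N)
          if grid[i][j] and (nb := [v for v in neighbours(i, j) if v])]
mean_like = sum(shares) / len(shares)         # segregation index, 0..1
```

    Varying `TOL` and `EMPTY` is the sketch-level analogue of the parameter sweeps the generative experiments describe.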

  15. Next-generation genome-scale models for metabolic engineering.

    PubMed

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Application of spatial and non-spatial data analysis in determination of the factors that impact municipal solid waste generation rates in Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keser, Saniye; Duzgun, Sebnem; Department of Geodetic and Geographic Information Technologies, Middle East Technical University, 06800 Ankara

    Highlights:
    - Spatial autocorrelation exists in municipal solid waste generation rates for different provinces in Turkey.
    - Traditional non-spatial regression models may not provide sufficient information for better solid waste management.
    - Unemployment rate is a global variable that significantly impacts the waste generation rates in Turkey.
    - Significances of global parameters may diminish at local scale for some provinces.
    - The GWR model can be used to create clusters of cities for solid waste management.

    Abstract: In studies focusing on the factors that impact solid waste generation habits and rates, the potential spatial dependency in solid waste generation data is not considered in relating the waste generation rates to their determinants. In this study, spatial dependency is taken into account in determination of the significant socio-economic and climatic factors that may be of importance for the municipal solid waste (MSW) generation rates in different provinces of Turkey. Simultaneous spatial autoregression (SAR) and geographically weighted regression (GWR) models are used for the spatial data analyses. As in ordinary least squares regression (OLSR), regression coefficients are global in the SAR model; in other words, the effect of a given independent variable on the dependent variable is valid for the whole country. Unlike OLSR or SAR, GWR reveals the local impact of a given factor (or independent variable) on the waste generation rates of different provinces. Results show that provinces within closer neighborhoods have similar MSW generation rates. On the other hand, this spatial autocorrelation is not very high for the exploratory variables considered in the study. OLSR and SAR models have similar regression coefficients. GWR is useful to indicate the local determinants of MSW generation rates. The GWR model can be utilized to plan waste management activities at local scale, including waste minimization, collection, treatment, and disposal. At global scale, the MSW generation rates in Turkey are significantly related to unemployment rate and asphalt-paved roads ratio. Yet, significances of these variables may diminish at local scale for some provinces. At local scale, different factors may be important in affecting MSW generation rates.
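
    The contrast between global (OLSR/SAR) and local (GWR) coefficients can be sketched numerically. Below is a minimal one-predictor GWR with a Gaussian distance kernel, run on synthetic data in which the true slope differs between "western" and "eastern" locations; it is an illustration of the technique only, not the study's model or data:

```python
import math

def gwr_fit(target, coords, x, y, bandwidth):
    """Weighted least squares for y = b0 + b1*x, with Gaussian kernel
    weights decaying with distance from `target` (a minimal GWR sketch)."""
    tx, ty = target
    w = [math.exp(-((px - tx) ** 2 + (py - ty) ** 2) / (2 * bandwidth ** 2))
         for px, py in coords]
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b1 = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    b0 = (sy - b1 * sx) / sw
    return b0, b1

# Synthetic data: the true slope differs between west (x < 5) and east.
coords = [(i, 0) for i in range(10)]
x = [float(i % 5) for i in range(10)]
y = [1.0 * xi if px < 5 else 3.0 * xi for (px, _), xi in zip(coords, x)]
west_b0, west_b1 = gwr_fit((0, 0), coords, x, y, bandwidth=1.0)
east_b0, east_b1 = gwr_fit((9, 0), coords, x, y, bandwidth=1.0)
# A single global OLS slope would average the two regimes; GWR recovers
# a slope near 1 in the west and near 3 in the east.
```

    This is exactly the property the abstract highlights: a globally significant coefficient (here, the pooled slope) can hide locally different relationships that GWR makes visible.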

  17. Sociality influences cultural complexity.

    PubMed

    Muthukrishna, Michael; Shulman, Ben W; Vasilescu, Vlad; Henrich, Joseph

    2014-01-07

    Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of populations can contribute to the complexity of their cultural repertoires. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models, integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution.

  18. Sociality influences cultural complexity

    PubMed Central

    Muthukrishna, Michael; Shulman, Ben W.; Vasilescu, Vlad; Henrich, Joseph

    2014-01-01

    Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of populations can contribute to the complexity of their cultural repertoires. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models, integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution. PMID:24225461

  19. Dark matter stability and one-loop neutrino mass generation based on Peccei-Quinn symmetry

    NASA Astrophysics Data System (ADS)

    Suematsu, Daijiro

    2018-01-01

    We propose a model which is a simple extension of the KSVZ invisible axion model with an inert doublet scalar. Peccei-Quinn symmetry forbids tree-level neutrino mass generation and its remnant Z_2 symmetry guarantees dark matter stability. The neutrino masses are generated by one-loop effects as a result of the breaking of Peccei-Quinn symmetry through a nonrenormalizable interaction. Although the low energy effective model coincides with an original scotogenic model which contains right-handed neutrinos with large masses, it is free from the strong CP problem.

  20. Partial automation of database processing of simulation outputs from L-systems models of plant morphogenesis.

    PubMed

    Chen, Yi-Ping Phoebe; Hanan, Jim

    2002-01-01

    Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, to simplify query development for the recursively-structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly.

  1. BOREAS TE-17 Production Efficiency Model Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G.; Papagno, Andrea (Editor); Goetz, Scott J.; Goward, Samuel N.; Prince, Stephen D.; Czajkowski, Kevin; Dubayah, Ralph O.

    2000-01-01

    A Boreal Ecosystem-Atmospheric Study (BOREAS) version of the Global Production Efficiency Model (http://www.inform.umd.edu/glopem/) was developed by TE-17 (Terrestrial Ecology) to generate maps of gross and net primary production, autotrophic respiration, and light use efficiency for the BOREAS region. This document provides basic information on the model and how the maps were generated. The data generated by the model are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  2. A cortical integrate-and-fire neural network model for blind decoding of visual prosthetic stimulation.

    PubMed

    Eiber, Calvin D; Morley, John W; Lovell, Nigel H; Suaning, Gregg J

    2014-01-01

    We present a computational model of the optic pathway which has been adapted to simulate cortical responses to visual-prosthetic stimulation. This model reproduces the statistically observed distributions of spikes for cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional visual neuroscience methods. By inverting this model to generate candidate phosphenes which could generate the responses observed to novel stimulation strategies, we hope to aid the development of said strategies in-vivo before being deployed in clinical settings.

  3. Using Model-Based Systems Engineering to Provide Artifacts for NASA Project Life-cycle and Technical Reviews

    NASA Technical Reports Server (NTRS)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    This paper is for the AIAA Space Conference. The ability of systems engineers to use model-based systems engineering (MBSE) to generate self-consistent, up-to-date systems engineering products for project life-cycle and technical reviews is an important aspect for the continued and accelerated acceptance of MBSE. Currently, many review products are generated using labor-intensive, error-prone approaches based on documents, spreadsheets, and chart sets; a promised benefit of MBSE is that users will experience reductions in inconsistencies and errors. This work examines features of SysML that can be used to generate systems engineering products. Model elements, relationships, tables, and diagrams are identified for a large number of the typical systems engineering artifacts. A SysML system model can contain and generate most systems engineering products to a significant extent and this paper provides a guide on how to use MBSE to generate products for project life-cycle and technical reviews. The use of MBSE can reduce the schedule impact usually experienced for review preparation, as in many cases the review products can be auto-generated directly from the system model. These approaches are useful to systems engineers, project managers, review board members, and other key project stakeholders.

  4. Patterns of waste generation: A gradient boosting model for short-term waste prediction in New York City.

    PubMed

    Johnson, Nicholas E; Ianiuk, Olga; Cazap, Daniel; Liu, Linglan; Starobin, Daniel; Dobler, Gregory; Ghandehari, Masoud

    2017-04-01

    Historical municipal solid waste (MSW) collection data supplied by the New York City Department of Sanitation (DSNY) was used in conjunction with other datasets related to New York City to forecast municipal solid waste generation across the city. Spatiotemporal tonnage data from the DSNY was combined with external data sets, including the Longitudinal Employer Household Dynamics data, the American Community Survey, the New York City Department of Finance's Primary Land Use and Tax Lot Output data, and historical weather data to build a Gradient Boosting Regression Model. The model was trained on historical data from 2005 to 2011 and validation was performed both temporally and spatially. With this model, we are able to accurately (R² > 0.88) forecast weekly MSW generation tonnages for each of the 232 geographic sections in NYC across the three waste streams of refuse, paper, and metal/glass/plastic. Importantly, the model identifies regularity of urban waste generation and is also able to capture very short timescale fluctuations associated with holidays, special events, seasonal variations, and weather-related events. This research shows New York City's waste generation trends and the importance of comprehensive data collection (especially weather patterns) in order to accurately predict waste generation. Copyright © 2017. Published by Elsevier Ltd.
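
    As a rough illustration of the gradient boosting idea the paper applies, the sketch below builds a regressor from depth-1 trees (stumps), each fitted to the residuals of the ensemble so far under squared loss. The data and every name here are synthetic stand-ins, not the DSNY data or the paper's model:

```python
def best_stump(x, y):
    """Depth-1 regression tree: threshold split minimising squared error."""
    best = None
    for t in sorted(set(x))[:-1]:   # a split must leave both sides non-empty
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((yi - lm) ** 2 for yi in left)
               + sum((yi - rm) ** 2 for yi in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

def boost(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current
    residuals, and predictions accumulate with a small learning rate."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = best_stump(x, resid)
        stumps.append((t, lm, rm))
        pred = [pi + lr * (lm if xi <= t else rm) for pi, xi in zip(pred, x)]
    return lambda xq: base + sum(lr * (lm if xq <= t else rm)
                                 for t, lm, rm in stumps)

# Synthetic "weekly tonnage" with three seasonal regimes.
x = list(range(12))
y = [10, 10, 10, 10, 20, 20, 20, 20, 35, 35, 35, 35]
predict = boost(x, y)
```

    Production systems use libraries with many features per split, regularisation, and cross-validation, but the residual-fitting loop above is the mechanism that lets boosted trees capture both the regular pattern and sharp step changes.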

  5. NOAA's weather forecasts go hyper-local with next-generation weather

    Science.gov Websites

    model. New model will help forecasters predict a storm's path, timing and intensity better than ever (September 30, 2014). This is a comparison of two weather forecast models looking…

  6. Implementing subgrid-scale cloudiness into the Model for Prediction Across Scales-Atmosphere (MPAS-A) for next generation global air quality modeling

    EPA Science Inventory

    A next generation air quality modeling system is being developed at the U.S. EPA to enable seamless modeling of air quality from global to regional to (eventually) local scales. State of the science chemistry and aerosol modules from the Community Multiscale Air Quality (CMAQ) mo...

  7. Influence of thermodynamic properties of a thermo-acoustic emitter on the efficiency of thermal airborne ultrasound generation.

    PubMed

    Daschewski, M; Kreutzbruck, M; Prager, J

    2015-12-01

    In this work we experimentally verify the theoretical prediction of the recently published Energy Density Fluctuation Model (EDF-model) of thermo-acoustic sound generation. Particularly, we investigate experimentally the influence of thermal inertia of an electrically conductive film on the efficiency of thermal airborne ultrasound generation predicted by the EDF-model. Unlike widely used theories, the EDF-model predicts that the thermal inertia of the electrically conductive film is a frequency-dependent parameter. Its influence grows non-linearly with the increase of excitation frequency and reduces the efficiency of the ultrasound generation. Thus, this parameter is the major limiting factor for the efficient thermal airborne ultrasound generation in the MHz-range. To verify this theoretical prediction experimentally, five thermo-acoustic emitter samples consisting of Indium-Tin-Oxide (ITO) coatings of different thicknesses (from 65 nm to 1.44 μm) on quartz glass substrates were tested for airborne ultrasound generation in a frequency range from 10 kHz to 800 kHz. For the measurement of thermally generated sound pressures a laser Doppler vibrometer combined with a 12 μm thin polyethylene foil was used as the sound pressure detector. All tested thermo-acoustic emitter samples showed a resonance-free frequency response in the entire tested frequency range. The thermal inertia of the heat producing film acts as a low-pass filter and reduces the generated sound pressure with the increasing excitation frequency and the ITO film thickness. The difference of generated sound pressure levels for samples with 65 nm and 1.44 μm thickness is in the order of about 6 dB at 50 kHz and of about 12 dB at 500 kHz. A comparison of sound pressure levels measured experimentally and those predicted by the EDF-model shows for all tested emitter samples a relative error of less than ±6%. 
Thus, experimental results confirm the prediction of the EDF-model and show that the model can be applied for design and optimization of thermo-acoustic airborne ultrasound emitters. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Capacity withholding in wholesale electricity markets: The experience in England and Wales

    NASA Astrophysics Data System (ADS)

    Quinn, James Arnold

    This thesis examines the incentives wholesale electricity generators face to withhold generating capacity from centralized electricity spot markets. The first chapter includes a brief history of electricity industry regulation in England and Wales and in the United States, including a description of key institutional features of England and Wales' restructured electricity market. The first chapter also includes a review of the literature on both bid price manipulation and capacity bid manipulation in centralized electricity markets. The second chapter details a theoretical model of wholesale generator behavior in a single price electricity market. A duopoly model is specified under the assumption that demand is non-stochastic. This model assumes that duopoly generators offer to sell electricity at their marginal cost, but can withhold a continuous segment of their capacity from the market. The Nash equilibrium withholding strategy of this model involves each duopoly generator withholding so that it produces the Cournot equilibrium output. A monopoly model along the lines of the duopoly model is specified and simulated under the assumption that demand is stochastic. The optimal strategy depends on the degree of demand uncertainty. When there is a moderate degree of demand uncertainty, the optimal withholding strategy involves production inefficiencies. When there is a high degree of demand uncertainty, the optimal monopoly quantity is greater than the optimal output level when demand is non-stochastic. The third chapter contains an empirical examination of the behavior of generators in the wholesale electricity market in England and Wales in the early 1990s. The wholesale market in England and Wales is analyzed because the industry structure in the early 1990s created a natural experiment, which is described in this chapter, whereby one of the two dominant generators had no incentive to behave non-competitively. 
This chapter develops a classification methodology consistent with the equilibrium identified in the second chapter. The availability of generating units owned by the two dominant generators is analyzed based on this classification system. This analysis includes the use of sample statistics as well as estimates from a dynamic random effects probit model. The analysis suggests a minimal degree of capacity withholding.
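
    The Cournot-equilibrium withholding result can be illustrated numerically. Assuming linear inverse demand P = a - bQ and constant marginal cost c (illustrative values, not from the thesis), each symmetric duopolist's Cournot output is (a - c)/(3b), and withholding is the gap between installed capacity and that output:

```python
def cournot_quantity(a, b, c, iters=100):
    """Symmetric duopoly with inverse demand P = a - b*Q and marginal
    cost c. Iterating the best response q = (a - c - b*q_other) / (2b)
    converges to the Cournot output (a - c) / (3b)."""
    q = 0.0
    for _ in range(iters):
        q = (a - c - b * q) / (2 * b)
    return q

a, b, c, capacity = 100.0, 1.0, 10.0, 45.0
q_star = cournot_quantity(a, b, c)   # (100 - 10) / 3 = 30 per generator
withheld = capacity - q_star         # capacity kept off the market: 15
price = a - b * 2 * q_star           # market clears at 40, above cost 10
```

    Offering at marginal cost while withholding 15 units each lets both generators sustain a price well above cost, which is the behavior the classification methodology in the chapter is designed to detect.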

  9. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  10. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
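
    The analysis sketched above reduces to checking the sign of det(J - λD) at each Laplacian eigenvalue of the network. For a regular ring lattice the eigenvalues are known in closed form, so the unstable modes can be listed directly; the Jacobian values below are illustrative activator-inhibitor numbers, not taken from the paper:

```python
import math

def ring_laplacian_eigs(n):
    """Graph-Laplacian eigenvalues of an n-cycle: 2 - 2*cos(2*pi*k/n)."""
    return [2 - 2 * math.cos(2 * math.pi * k / n) for k in range(n)]

def turing_unstable_modes(J, du, dv, lams):
    """Modes lambda with det(J - lambda*D) < 0 grow, giving a Turing
    pattern, provided the homogeneous state is stable without diffusion."""
    (fu, fv), (gu, gv) = J
    assert fu + gv < 0 and fu * gv - fv * gu > 0   # stable when du = dv = 0
    return [lam for lam in lams
            if (fu - du * lam) * (gv - dv * lam) - fv * gu < 0]

# Illustrative activator-inhibitor Jacobian (not from the paper):
J = ((1.0, -1.0), (2.0, -1.5))
modes = turing_unstable_modes(J, du=0.05, dv=1.0, lams=ring_laplacian_eigs(50))
# With a fast-diffusing inhibitor (dv >> du), a band of ring-lattice
# eigenvalues falls inside the unstable interval.
```

    For networks generated by the Erdős-Rényi, Watts-Strogatz, or threshold models, the same test applies with the numerically computed Laplacian spectrum, which is why the stable/unstable regions depend so strongly on network architecture.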

  11. Identifying Androgen Receptor-Independent Mechanisms of Prostate Cancer Resistance to Second-Generation Antiandrogen Therapy

    DTIC Science & Technology

    2016-08-01

    expanded upon the relationship between GR and SGK1 in the context of enzalutamide-driven prostate cancer. We have generated CRISPR/Cas9 cell lines... Complete. Generate SGK1-overexpressing cell models: 50%, ongoing. Clone SGK1 CRISPR: 100%, complete. Generate SGK1-deficient cell models: 75%, ongoing. Test... driven enzalutamide resistance, GR-expressing enzalutamide-resistant prostate cancer cells expressing CRISPR/Cas9 and a guide targeting SGK1 (sgSGK1...

  12. Event reweighting with the NuWro neutrino interaction generator

    NASA Astrophysics Data System (ADS)

    Pickering, Luke; Stowell, Patrick; Sobczyk, Jan

    2017-09-01

    Event reweighting has been implemented in the NuWro neutrino event generator for a number of free theory parameters in the interaction model. Event reweighting is a key analysis technique, used to efficiently study the effect of neutrino interaction model uncertainties. This opens up the possibility for NuWro to be used as a primary event generator by experimental analysis groups. A preliminary model tuning to ANL and BNL data of quasi-elastic and single pion production events was performed to validate the reweighting engine.

  13. Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    available SfM application known as VisualSFM.6,7 VisualSFM is an end-user, "off-the-shelf" implementation of SfM that is easy to configure and used for... most 3D model generation applications from imagery. While the usual interface with VisualSFM is through their graphical user interface (GUI), we will be... of our system.5 There are two types of 3D model generation available within VisualSFM: sparse and dense reconstruction. Sparse reconstruction begins...

  14. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. This approach is likely applicable to generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
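
    The nudging construction can be sketched in one dimension. A fine-grid diffusion "model" is relaxed toward the piecewise-constant interpolant of coarse observations, so the coarse data constrain the large scales while the model dynamics fill in the rest. HYDRUS and the paper's setup are far more elaborate; everything below is an illustrative stand-in:

```python
import math

def interpolate(coarse, stride, n):
    # Piecewise-constant interpolant of coarse observations onto the fine grid.
    return [coarse[min(i // stride, len(coarse) - 1)] for i in range(n)]

def cda_run(truth, x0, stride, mu=5.0, dt=0.01, steps=500):
    """Fine-grid diffusion 'model' with a CDA nudging term mu*(Ih(obs) - x):
    the misfit to interpolated coarse observations is added to the model
    equation, pulling the solution toward the observed large scales."""
    n = len(truth)
    target = interpolate(truth[::stride], stride, n)   # coarse-footprint obs
    x = list(x0)
    for _ in range(steps):
        lap = [x[(i - 1) % n] - 2 * x[i] + x[(i + 1) % n] for i in range(n)]
        x = [x[i] + dt * (lap[i] + mu * (target[i] - x[i])) for i in range(n)]
    return x

n = 64
truth = [math.sin(2 * math.pi * i / n) for i in range(n)]
downscaled = cda_run(truth, [0.0] * n, stride=8)
err = max(abs(d - t) for d, t in zip(downscaled, truth))
# err ends well below the initial misfit of 1.0: the nudged fine field
# honors the coarse observations while the diffusion smooths the steps.
```

    The nudging strength mu and the interpolation operator are the design choices CDA theory constrains; here they are picked only to make the toy run converge.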

  15. A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation--With Application to Tumor and Stroke.

    PubMed

    Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-04-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM), to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the extended discriminative model to be one of the top-ranking methods in the BRATS evaluation.

  16. The Generation of Three-Dimensional Body-Fitted Coordinate Systems for Viscous Flow Problems.

    DTIC Science & Technology

    1982-07-01

    "Geometries," NASA TM X-3206, 1975. Papers written under the contract: 1. "Basic Differential Models for Coordinate Generation", Z. U. A. Warsi... Figure 1: Coordinate Surfaces. BASIC DIFFERENTIAL MODELS FOR COORDINATE GENERATION, Z. U. A. Warsi, Department of Aerospace...

  17. Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)

    PubMed Central

    2010-01-01

    Abstract Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rats and mice have been generated in which a mutation similar to that occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these mice do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887

  18. Deformation analysis of rotary combustion engine housings

    NASA Technical Reports Server (NTRS)

    Vilmann, Carl

    1991-01-01

    This analysis of the deformation of rotary combustion engine housings targeted the following objectives: (1) the development and verification of a finite element model of the trochoid housing, (2) the prediction of the stress and deformation fields present within the trochoid housing during operating conditions, and (3) the development of a specialized preprocessor which would shorten the time necessary for mesh generation of a trochoid housing's FEM model from roughly one month to approximately two man hours. Executable finite element models were developed for both the Mazda and the Outboard Marine Corporation trochoid housings. It was also demonstrated that a preprocessor which would hasten the generation of finite element models of a rotary engine was possible to develop. The above objectives are treated in detail in the attached appendices. The first deals with finite element modeling of a Wankel engine center housing, and the second with the development of a preprocessor that generates finite element models of rotary combustion engine center housings. A computer program, designed to generate finite element models of user defined rotary combustion engine center housing geometries, is also included.

  19. Impact of metal ionic characteristics on adsorption potential of Ficus carica leaves using QSPR modeling.

    PubMed

    Batool, Fozia; Iqbal, Shahid; Akbar, Jamshed

    2018-04-03

    The present study describes Quantitative Structure Property Relationship (QSPR) modeling to relate metal ion characteristics with the adsorption potential of Ficus carica leaves for 13 selected metal ions (Ca2+, Cr3+, Co2+, Cu2+, Cd2+, K+, Mg2+, Mn2+, Na+, Ni2+, Pb2+, Zn2+, and Fe2+) to generate a QSPR model. A set of 21 characteristic descriptors was selected, and the relationship of these metal characteristics with the adsorptive behavior of the metal ions was investigated. Stepwise Multiple Linear Regression (SMLR) analysis and Artificial Neural Networks (ANN) were applied for descriptor selection and model generation. Langmuir and Freundlich isotherms were also applied to the adsorption data to generate proper correlation for the experimental findings. The generated model indicated covalent index as the most significant descriptor, which is responsible for more than 90% of predictive adsorption (α = 0.05). Internal validation of the model was performed by measuring [Formula: see text] (0.98). The results indicate that the present model is a useful tool for prediction of the adsorptive behavior of different metal ions based on their ionic characteristics.
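
    The first step of stepwise MLR, keeping the single descriptor whose one-variable model explains the most variance, can be sketched as below. The descriptor table and uptake values are synthetic stand-ins (the real study used 21 descriptors and measured data), constructed so that a covalent-index-like column dominates:

```python
def fit_ols(xs, y):
    """Simple least squares y = b0 + b1*x, returning (b0, b1, r_squared)."""
    n = len(y)
    mx, my = sum(xs) / n, sum(y) / n
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(xs, y))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((yi - (b0 + b1 * x)) ** 2 for x, yi in zip(xs, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b0, b1, 1 - ss_res / ss_tot

def forward_select(descriptors, y):
    """First step of stepwise MLR: keep the descriptor whose one-variable
    model has the highest R^2 on the response."""
    return max(descriptors, key=lambda name: fit_ols(descriptors[name], y)[2])

# Hypothetical descriptor table (NOT the paper's data): the covalent-index
# column tracks adsorption closely, the ionic-radius column is mostly noise.
descriptors = {
    "covalent_index": [1.2, 2.9, 2.1, 2.6, 3.0, 0.5, 1.6],
    "ionic_radius":   [1.0, 0.6, 0.7, 0.7, 1.0, 1.4, 0.7],
}
uptake = [12.1, 29.5, 20.8, 26.0, 30.2, 5.2, 16.3]   # mg/g, synthetic
best = forward_select(descriptors, uptake)
```

    Subsequent stepwise rounds would add further descriptors only while they significantly improve the fit, which is how a 21-descriptor pool collapses to the few retained in the final QSPR model.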

  20. QSPR models for predicting generator-column-derived octanol/water and octanol/air partition coefficients of polychlorinated biphenyls.

    PubMed

    Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-06-01

    Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative method of replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Learning Generation: Fostering Innovation with Tomorrow's Teachers and Technology

    ERIC Educational Resources Information Center

    Aust, Ronald; Newberry, Brian; O'Brien, Joseph; Thomas, Jennifer

    2005-01-01

    We discuss the context, conception, implementation, and research used to refine and evaluate a systemic model for fostering technology integration in teacher education. The Learning Generation model identifies conditions where innovations for using technology emerge in small group dialogues. The model uses a multifaceted implementation with…

  2. 10 CFR 490.307 - Option for Electric Utilities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... affiliate, division, or business unit, whose principal business is generating, transmitting, importing, or... business unit, whose principal business is generating, transmitting, importing, or selling at wholesale or.... (2) 50 percent for model year 1999. (3) 70 percent for model year 2000. (4) 90 percent for model year...

  3. Software Tools for Weed Seed Germination Modeling

    USDA-ARS?s Scientific Manuscript database

    The next generation of weed seed germination models will need to account for variable soil microclimate conditions. In order to predict this microclimate environment we have developed a suite of individual tools (models) that can be used in conjunction with the next generation of weed seed germinati...

  4. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program that produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE. The motivation for RMG is given, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  5. Development of the Next Generation Air Quality Modeling System

    EPA Science Inventory

    A next generation air quality modeling system is being developed at the U.S. EPA to enable modeling of air quality from global to regional to (eventually) local scales. We envision that the system will have three configurations: 1. Global meteorology with seamless mesh refinemen...

  6. Vulnerability of US thermoelectric power generation to climate change when incorporating state-level environmental regulations

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Hejazi, Mohamad; Li, Hongyi; Forman, Barton; Zhang, Xiao

    2017-08-01

    Previous modelling studies suggest that thermoelectric power generation is vulnerable to climate change, whereas studies based on historical data suggest the impact will be less severe. Here we explore the vulnerability of thermoelectric power generation in the United States to climate change by coupling an Earth system model with a thermoelectric power generation model, including state-level representation of environmental regulations on thermal effluents. We find that the impact of climate change is lower than in previous modelling estimates due to an inclusion of a spatially disaggregated representation of environmental regulations and provisional variances that temporarily relieve power plants from permit requirements. More specifically, our results indicate that climate change alone may reduce average generating capacity by 2-3% by the 2060s, while reductions of up to 12% are expected if environmental requirements are enforced without waivers for thermal variation. Our work highlights the significance of accounting for legal constructs and underscores the effects of provisional variances in addition to environmental requirements.

  7. A simple model for the generation of the vestibular evoked myogenic potential (VEMP).

    PubMed

    Wit, Hero P; Kingma, Charlotte M

    2006-06-01

    To describe the mechanism by which the vestibular evoked myogenic potential (VEMP) is generated. VEMP generation is modeled by summing a large number of muscle motor unit action potentials that occur randomly in time along a 100-ms time axis. Because (almost) no action potentials are generated between approximately 15 and 20 ms after a loud, short sound stimulus during VEMP measurements in human subjects, no action potentials are present in the model during this interval. The evoked potential is the result of the lack of amplitude cancellation in the averaged surface electromyogram at the edges of this 5-ms interval. The relatively simple model describes the generation and several properties of the vestibular evoked myogenic potential very well. It is shown that, in contrast with other evoked potentials (BAEPs, VERs), the vestibular evoked myogenic potential is the result of an interruption of activity rather than of summed synchronized neural action potentials.
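    The mechanism described above is easy to reproduce numerically. The sketch below (illustrative waveform and parameters, not those of the paper) sums biphasic motor unit action potentials at random times over a 100-ms sweep, leaves the 15-20 ms silent period empty, and averages many sweeps; net deflections survive only at the edges of the gap:

```python
import numpy as np

fs = 10_000                       # sampling rate (Hz); illustrative
t = np.arange(0, 0.100, 1 / fs)   # 100 ms sweep

# Biphasic MUAP shape: derivative of a Gaussian, a few ms wide
tau = np.arange(-0.005, 0.005, 1 / fs)
muap = -tau * np.exp(-(tau / 0.001) ** 2)
muap /= np.abs(muap).max()

rng = np.random.default_rng(0)
n_sweeps, n_units = 500, 200      # sweeps to average; MUAPs per sweep

avg = np.zeros_like(t)
for _ in range(n_sweeps):
    sweep = np.zeros_like(t)
    times = rng.uniform(0, 0.100, n_units)
    # silent period: no MUAPs between 15 and 20 ms after the stimulus
    times = times[(times < 0.015) | (times > 0.020)]
    for t0 in times:
        i = int(t0 * fs) - len(muap) // 2     # center the MUAP at t0
        lo, hi = max(i, 0), min(i + len(muap), len(t))
        sweep[lo:hi] += muap[lo - i:hi - i]
    avg += sweep / n_sweeps

# Random MUAPs cancel in the average except near the gap edges,
# where the truncated density leaves uncancelled deflections.
edge = np.abs(avg[int(0.012 * fs):int(0.023 * fs)]).max()
baseline = np.abs(avg[int(0.040 * fs):int(0.080 * fs)]).max()
print(f"edge deflection {edge:.2f} vs baseline {baseline:.2f}")
```

    The averaged trace is flat wherever the MUAP density is uniform and shows opposite-signed waves at 15 and 20 ms, which is exactly the "interruption of activity" account given in the abstract.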

  8. Regional stochastic generation of streamflows using an ARIMA (1,0,1) process and disaggregation

    USGS Publications Warehouse

    Armbruster, Jeffrey T.

    1979-01-01

    An ARIMA (1,0,1) model was calibrated and used to generate long annual flow sequences at three sites in the Juniata River basin, Pennsylvania. The model preserves the mean, variance, and cross correlations of the observed station data. In addition, it has a desirable blend of both high- and low-frequency characteristics and is therefore capable of preserving the Hurst coefficient, h. The generated annual flows are disaggregated into monthly sequences using a modification of the Valencia-Schaake model. The low-flow frequency and flow-duration characteristics of the generated monthly flows, with length equal to the historical data, compare favorably with the historical data. Once the models were verified, 100-year sequences were generated and analyzed for their low-flow characteristics. One-, three-, and six-month low-flow frequencies at recurrence intervals greater than 10 years are generally found to be lower than those computed from the historical flows. A method is proposed for synthesizing flows at ungaged sites. (Kosco-USGS)
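    The generation step can be sketched with the standard ARMA(1,1) recursion (the stationary ARIMA (1,0,1) case). The parameter values below are illustrative, not the calibrated Juniata River values:

```python
import numpy as np

def arma11_flows(n_years, mean, std_e, phi, theta, seed=0):
    """ARIMA (1,0,1): x_t = mu + phi*(x_[t-1] - mu) + e_t - theta*e_[t-1]."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, std_e, n_years + 1)    # white-noise innovations
    x = np.empty(n_years)
    prev = mean                                # start the recursion at the mean
    for t in range(n_years):
        x[t] = mean + phi * (prev - mean) + e[t + 1] - theta * e[t]
        prev = x[t]
    return x

# Hypothetical parameters: mean annual flow 1200 cfs, innovation std 300 cfs;
# AR and MA coefficients chosen only for illustration.
flows = arma11_flows(100, mean=1200.0, std_e=300.0, phi=0.9, theta=0.6)
print(f"mean={flows.mean():.0f}, std={flows.std():.0f}")
```

    A phi close to 1 combined with a moderate theta gives the long-memory-like persistence the abstract mentions in connection with the Hurst coefficient; the disaggregation to monthly flows would be a separate step.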

  9. On generation and evolution of seaward propagating internal solitary waves in the northwestern South China Sea

    NASA Astrophysics Data System (ADS)

    Xu, Jiexin; Chen, Zhiwu; Xie, Jieshuo; Cai, Shuqun

    2016-03-01

    In this paper, the generation and evolution of seaward propagating internal solitary waves (ISWs) detected by satellite image in the northwestern South China Sea (SCS) are investigated by a fully nonlinear, non-hydrostatic, three-dimensional Massachusetts Institute of Technology general circulation model (MITgcm). The three-dimensional (3D) modeled ISWs agree favorably with those by satellite image, indicating that the observed seaward propagating ISWs may be generated by the interaction of barotropic tidal flow with the arc-like continental slope south of Hainan Island. Though the tidal current is basically in east-west direction, different types of internal waves are generated by tidal currents flowing over the slopes with different shaped shorelines. Over the slope where the shoreline is straight, only weak internal tides are generated; over the slope where the shoreline is seaward concave, large-amplitude internal bores are generated, and since the concave isobaths of the arc-like continental slope tend to focus the baroclinic tidal energy which is conveyed to the internal bores, the internal bores can efficiently disintegrate into a train of rank-ordered ISWs during their propagation away from the slope; while over the slope where the shoreline is seaward convex, no distinct internal tides are generated. It is also implied that the internal waves over the slope are generated due to mixed lee wave mechanism. Furthermore, the effects of 3D model, continental slope curvature, stratification, rotation and tidal forcing on the generation of ISWs are discussed, respectively. 
It is shown that the amplitude and phase speed of ISWs derived from a two-dimensional (2D) model are smaller than those from the 3D one, and that the 3D model has an advantage over the 2D one in simulating ISWs generated by the interaction between tidal currents and the 3D curved continental slope; that reduced continental slope curvature hinders the extension of the ISW crestline; that both weaker stratification and weaker rotation suppress the generation of ISWs; and that the width of the ISW crestline generated by the K1 tidal harmonic is longer than that generated by the M2 harmonic.

  10. Complex networks generated by the Penna bit-string model: Emergence of small-world and assortative mixing

    NASA Astrophysics Data System (ADS)

    Li, Chunguang; Maini, Philip K.

    2005-10-01

    The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.

  11. Modeling of hydrogen/deuterium dynamics and heat generation on palladium nanoparticles for hydrogen storage and solid-state nuclear fusion.

    PubMed

    Tanabe, Katsuaki

    2016-01-01

    We modeled the dynamics of hydrogen and deuterium adsorbed on palladium nanoparticles including the heat generation induced by the chemical adsorption and desorption, as well as palladium-catalyzed reactions. Our calculations based on the proposed model reproduce the experimental time-evolution of pressure and temperature with a single set of fitting parameters for hydrogen and deuterium injection. The model we generated with a highly generalized set of formulations can be applied for any combination of a gas species and a catalytic adsorbent/absorbent. Our model can be used as a basis for future research into hydrogen storage and solid-state nuclear fusion technologies.
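    As a hedged illustration of this kind of coupled adsorption/heat model (not the paper's actual equations or parameters), one can integrate Langmuir-type adsorption kinetics together with a lumped heat balance for the particle:

```python
# Illustrative parameters only -- not taken from the paper.
k_a, k_d = 2.0, 0.5        # adsorption / desorption rate constants (1/s);
                           # gas pressure is folded into k_a here
dH = 40.0                  # heat released per unit coverage change (J)
h_c, C, T_amb = 0.8, 10.0, 300.0   # cooling coeff (W/K), heat capacity (J/K), ambient (K)

dt, steps = 1e-3, 20_000   # explicit Euler over 20 s
theta, T = 0.0, T_amb      # surface coverage and particle temperature
for _ in range(steps):
    dtheta = k_a * (1.0 - theta) - k_d * theta   # Langmuir-type kinetics
    theta += dtheta * dt
    T += (dH * dtheta - h_c * (T - T_amb)) / C * dt

# Coverage relaxes to k_a/(k_a + k_d) = 0.8; the adsorption heat pulse
# raises T transiently before it decays back toward ambient.
print(f"theta={theta:.3f}, T={T:.2f} K")
```

    The paper's model is more general (it also covers catalyzed reactions and fitting to measured pressure/temperature traces), but the same structure applies: one rate equation per species plus an energy balance driven by the adsorption/desorption terms.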

  12. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.

  13. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R(2)=0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    PubMed

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. The refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R(2)=0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills and to estimates from LandGEM and IPCC. For 4 of the 6 cases, the CLEEN model estimates were the closest to the actual data. Copyright © 2015 Elsevier Ltd. All rights reserved.
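    The first-order decay core that CLEEN shares with LandGEM can be sketched as follows; in CLEEN the rate constant k would come from the regression equations, whereas here it is simply an assumed constant, and all quantities are illustrative:

```python
import numpy as np

def methane_generation(masses, k, L0, years):
    """First-order decay: waste mass M placed in year i generates
    k * L0 * M * exp(-k * age) cubic meters of CH4 per year at a given age."""
    q = np.zeros(years)
    for i, m in enumerate(masses):             # i = placement year
        for t in range(i, years):
            q[t] += k * L0 * m * np.exp(-k * (t - i))
    return q

# Hypothetical landfill: 50,000 Mg/yr accepted for 10 years;
# k = 0.05 1/yr and L0 = 100 m^3 CH4/Mg are assumed, not CLEEN outputs.
masses = [50_000] * 10
q = methane_generation(masses, k=0.05, L0=100.0, years=30)
print(f"peak generation {q.max():,.0f} m^3/yr in year {int(q.argmax())}")
```

    Generation ramps up while waste is being placed, peaks in the final placement year, and decays exponentially afterwards; CLEEN's contribution is in making k sensitive to composition, rainfall, and temperature rather than treating it as a fixed default.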

  15. Numerical Simulations of Vortex Generator Vanes and Jets on a Flat Plate

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Yao, Chung-Sheng; Lin, John C.

    2002-01-01

    Numerical simulations of a single low-profile vortex generator vane, which is only a small fraction of the boundary-layer thickness, and a vortex generating jet have been performed for flows over a flat plate. The numerical simulations were computed by solving the steady-state solution to the Reynolds-averaged Navier-Stokes equations. The vortex generating vane results were evaluated by comparing the strength and trajectory of the streamwise vortex to experimental particle image velocimetry measurements. From the numerical simulations of the vane case, it was observed that the Shear-Stress Transport (SST) turbulence model resulted in a better prediction of the streamwise peak vorticity and trajectory than the Spalart-Allmaras (SA) turbulence model. It is shown in this investigation that the estimated turbulent eddy viscosity near the vortex core, for both the vane and jet simulations, was higher for the SA model than for the SST model. Even though the numerical simulations of the vortex generating vane were able to predict the trajectory of the streamwise vortex, the initial magnitude and decay of the peak streamwise vorticity were significantly underpredicted. A comparison of the positive circulation associated with the streamwise vortex showed that while the numerical simulations produced a more diffused vortex, the vortex strength compared very well to the experimental observations. A grid resolution study for the vortex generating vane was also performed, showing that the diffusion of the vortex was not a result of insufficient grid resolution. Comparisons were also made between a fully modeled trapezoidal vane with finite thickness and a simply modeled thin rectangular vane. The comparisons showed that the simply modeled rectangular vane produced a streamwise vortex whose strength and trajectory were very similar to those of the fully modeled trapezoidal vane.

  16. The TimeGeo modeling framework for urban mobility without travel surveys

    PubMed Central

    Jiang, Shan; Yang, Yingxiang; Gupta, Siddharth; Veneziano, Daniele; Athavale, Shounak; González, Marta C.

    2016-01-01

    Well-established fine-scale urban mobility models today depend on detailed but cumbersome and expensive travel surveys for their calibration. Not much is known, however, about the set of mechanisms needed to generate complete mobility profiles if only using passive datasets with mostly sparse traces of individuals. In this study, we present a mechanistic modeling framework (TimeGeo) that effectively generates urban mobility patterns with resolution of 10 min and hundreds of meters. It ties together the inference of home and work activity locations from data, with the modeling of flexible activities (e.g., other) in space and time. The temporal choices are captured by only three features: the weekly home-based tour number, the dwell rate, and the burst rate. These combined generate for each individual: (i) stay duration of activities, (ii) number of visited locations per day, and (iii) daily mobility networks. These parameters capture how an individual deviates from the circadian rhythm of the population, and generate the wide spectrum of empirically observed mobility behaviors. The spatial choices of visited locations are modeled by a rank-based exploration and preferential return (r-EPR) mechanism that incorporates space in the EPR model. Finally, we show that a hierarchical multiplicative cascade method can measure the interaction between land use and generation of trips. In this way, urban structure is directly related to the observed distance of travels. This framework allows us to fully embrace the massive amount of individual data generated by information and communication technologies (ICTs) worldwide to comprehensively model urban mobility without travel surveys. PMID:27573826
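    The exploration versus preferential-return choice at the heart of the r-EPR mechanism can be sketched as follows. The exponents and candidate-location setup are assumed for illustration (the rank-based spatial term is reduced to a one-dimensional rank distribution), not taken from the TimeGeo calibration:

```python
import numpy as np

def repr_trajectory(n_steps, rho=0.6, gamma=0.21, n_candidates=1000, seed=0):
    """With probability rho * S**-gamma (S = distinct locations seen so far)
    the agent explores a new location drawn from a rank-based distribution
    P(rank r) ~ r**-alpha; otherwise it returns to a known location with
    probability proportional to its past visit frequency."""
    rng = np.random.default_rng(seed)
    alpha = 0.86                               # rank exponent (assumed)
    rank_p = np.arange(1, n_candidates + 1, dtype=float) ** -alpha
    rank_p /= rank_p.sum()
    visits = {0: 1}                            # start at "home", location 0
    traj = [0]
    for _ in range(n_steps):
        S = len(visits)
        if rng.random() < rho * S ** -gamma:   # exploration
            loc = int(rng.choice(n_candidates, p=rank_p))
            while loc in visits:               # resample until unseen
                loc = int(rng.choice(n_candidates, p=rank_p))
        else:                                  # preferential return
            locs = list(visits)
            freq = np.array([visits[l] for l in locs], dtype=float)
            loc = locs[rng.choice(len(locs), p=freq / freq.sum())]
        visits[loc] = visits.get(loc, 0) + 1
        traj.append(loc)
    return traj, visits

traj, visits = repr_trajectory(2000)
print(f"distinct locations visited: {len(visits)}")
```

    Because the exploration probability decays as S grows, the number of distinct locations grows sublinearly in the number of moves, which is the qualitative behavior EPR-class models are built to reproduce.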

  17. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. To assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the same simulated HD-sEMG signals.

  18. The TimeGeo modeling framework for urban mobility without travel surveys.

    PubMed

    Jiang, Shan; Yang, Yingxiang; Gupta, Siddharth; Veneziano, Daniele; Athavale, Shounak; González, Marta C

    2016-09-13

    Well-established fine-scale urban mobility models today depend on detailed but cumbersome and expensive travel surveys for their calibration. Not much is known, however, about the set of mechanisms needed to generate complete mobility profiles if only using passive datasets with mostly sparse traces of individuals. In this study, we present a mechanistic modeling framework (TimeGeo) that effectively generates urban mobility patterns with resolution of 10 min and hundreds of meters. It ties together the inference of home and work activity locations from data, with the modeling of flexible activities (e.g., other) in space and time. The temporal choices are captured by only three features: the weekly home-based tour number, the dwell rate, and the burst rate. These combined generate for each individual: (i) stay duration of activities, (ii) number of visited locations per day, and (iii) daily mobility networks. These parameters capture how an individual deviates from the circadian rhythm of the population, and generate the wide spectrum of empirically observed mobility behaviors. The spatial choices of visited locations are modeled by a rank-based exploration and preferential return (r-EPR) mechanism that incorporates space in the EPR model. Finally, we show that a hierarchical multiplicative cascade method can measure the interaction between land use and generation of trips. In this way, urban structure is directly related to the observed distance of travels. This framework allows us to fully embrace the massive amount of individual data generated by information and communication technologies (ICTs) worldwide to comprehensively model urban mobility without travel surveys.

  19. Progress on Implementing Additional Physics Schemes into MPAS-A v5.1 for Next Generation Air Quality Modeling

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) has a team of scientists developing a next generation air quality modeling system employing the Model for Prediction Across Scales – Atmosphere (MPAS-A) as its meteorological foundation. Several preferred physics schemes and ...

  20. Approaches to the automatic generation and control of finite element meshes

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that, because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators, as well as their integration with geometric modeling and adaptive analysis procedures.

  1. Field-circuit analysis and measurements of a single-phase self-excited induction generator

    NASA Astrophysics Data System (ADS)

    Makowski, Krzysztof; Leicht, Aleksander

    2017-12-01

    The paper deals with a single-phase induction machine operating as a stand-alone self-excited single-phase induction generator for the generation of electrical energy from renewable energy sources. By changing the number of turns and the size of the wires in the auxiliary stator winding, an improvement in the performance characteristics of the generator was obtained with respect to the no-load and load voltages of the stator windings, as well as the stator winding currents. Field-circuit simulation models of the generator, with a shunt capacitor in the main stator winding, were developed using the Flux2D software package. The obtained results have been validated experimentally on a laboratory setup using a single-phase capacitor induction motor of 1.1 kW rated power and 230 V voltage as the base model of the generator.

  2. Antiresonance induced spin-polarized current generation

    NASA Astrophysics Data System (ADS)

    Yin, Sun; Min, Wen-Jing; Gao, Kun; Xie, Shi-Jie; Liu, De-Sheng

    2011-12-01

    According to the one-dimensional antiresonance effect (Wang X R, Wang Y and Sun Z Z 2003 Phys. Rev. B 65 193402), we propose a possible spin-polarized current generation device. The proposed model consists of a chain and an impurity coupled to the chain. The energy level of the impurity can be occupied by an electron with a specific spin, and electrons with that spin are blocked because of the antiresonance effect. Based on this phenomenon, our model can generate a spin-polarized current flowing through the chain due to the different polarization rates. On the other hand, the device can also be used to measure the generated spin accumulation. Our model is feasible with today's technology.

  3. Ground Reaction Forces Generated During Rhythmical Squats as a Dynamic Loads of the Structure

    NASA Astrophysics Data System (ADS)

    Pantak, Marek

    2017-10-01

    Dynamic forces generated by moving persons can lead to excessive vibration of long-span, slender, and lightweight structures such as floors, stairs, stadium stands, and footbridges. These dynamic forces are generated during walking, running, jumping, and rhythmical body swaying in the vertical or horizontal direction. In this paper, mathematical models of the Ground Reaction Forces (GRFs) generated during squats are presented. The elaborated models were compared to the GRFs measured during laboratory tests carried out by the author over a wide range of frequencies using a force platform. Moreover, the GRF models were evaluated through dynamic numerical analyses and dynamic field tests of an exemplary structure (a steel footbridge).

  4. Multi-agent simulation of generation expansion in electricity markets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botterud, A; Mahalik, M. R.; Veselka, T. D.

    2007-06-01

    We present a new multi-agent model of generation expansion in electricity markets. The model simulates the generation investment decisions of decentralized generating companies (GenCos) interacting in a complex, multidimensional environment. A probabilistic dispatch algorithm calculates prices and profits for new candidate units in different future states of the system. Uncertainties in future load, hydropower conditions, and competitors' actions are represented in a scenario tree, and decision analysis is used to identify the optimal expansion decision for each individual GenCo. We test the model using real data for the Korean power system under different assumptions about market design, market concentration, and GenCos' assumed expectations about their competitors' investment decisions.

  5. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has led to their wide use in various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs. In previous works, this task usually involves the integration of several sensors; in this research, however, the point cloud from TLS is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  6. [An ADAA model and its analysis method for agronomic traits based on the double-cross mating design].

    PubMed

    Xu, Z C; Zhu, J

    2000-01-01

    According to the double-cross mating design and using the principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulation was conducted for analyzing the ADAA model and its reduced AD model by using different generations. It was indicated that genetic variance components could be estimated without bias by the MINQUE(1) method and that genetic effects could be predicted effectively by the AUP method; at least three generations (including the parents, the F1 of single crosses and the F1 of double-crosses) were necessary for analyzing the ADAA model, and only two generations (including the parents and the F1 of double-crosses) were enough for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double-crosses was given on the basis of unbiased prediction of the genotypic merits of the parents and their crosses. In addition, genotype × environment interaction effects and interaction heterosis due to G × E interaction were discussed briefly.

  7. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
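    The LHS sampling step can be sketched in a few lines: each parameter's range is divided into N equal-probability strata, one draw is taken per stratum, and the strata are permuted independently across parameters. The parameter ranges below are hypothetical, not those of the paper:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: every parameter range is split into
    n_samples equal strata; one point is drawn per stratum and the strata
    are independently permuted across parameters."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples   # stratified in [0,1)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

# Hypothetical SCTEG inputs: solar flux (W/m^2), ambient temp (K), load (ohm)
bounds = [(600, 1000), (280, 320), (0.5, 5.0)]
samples = latin_hypercube(50, bounds)
print(samples.shape)
```

    Compared to plain Monte Carlo, this guarantees that each input variable is sampled across its whole range even with few samples, which is why LHS is a common choice for generating the training points of a response-surface model like the one described above.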

  8. Lattice Boltzmann approach for hydro-acoustic waves generated by tsunamigenic sea bottom displacement

    NASA Astrophysics Data System (ADS)

    Prestininzi, P.; Abdolali, A.; Montessori, A.; Kirby, J. T.; La Rocca, Michele

    2016-11-01

    Tsunami waves are generated by sea bottom failures, landslides and faults. The concurrent generation of hydro-acoustic waves (HAW), which travel much faster than the tsunami, has received much attention, motivated by their possible exploitation as precursors of tsunamis. This feature makes the detection of HAW particularly well-suited for building an early-warning system. Accuracy and efficiency of the modeling approaches for HAW thus play a pivotal role in the design of such systems. Here, we present a Lattice Boltzmann Method (LBM) for the generation and propagation of HAW resulting from tsunamigenic ground motions and verify it against commonly employed modeling solutions. LBM is well known for providing fast and accurate solutions to both hydrodynamics and acoustics problems, thus it naturally becomes a candidate as a comprehensive computational tool for modeling generation and propagation of HAW.
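
As a minimal illustration of the LBM idea (not the paper's hydro-acoustic scheme), the following D1Q3 BGK sketch propagates a small density pulse on a periodic 1-D lattice:

```python
import numpy as np

# A 1-D D1Q3 lattice Boltzmann sketch of an acoustic pulse (BGK collision,
# periodic domain). This illustrates the LBM mechanics only; it is not the
# hydro-acoustic model of the paper.
N, tau, steps = 200, 0.6, 100
w = np.array([2 / 3, 1 / 6, 1 / 6])   # lattice weights
c = np.array([0, 1, -1])              # lattice velocities

x = np.arange(N)
rho_init = 1.0 + 0.01 * np.exp(-0.5 * ((x - N / 2) / 5.0) ** 2)  # density pulse
f = w[:, None] * rho_init[None, :]    # equilibrium distribution at rest

def equilibrium(rho, u):
    cu = c[:, None] * u[None, :]
    return w[:, None] * rho[None, :] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * u[None, :] ** 2)

for _ in range(steps):
    rho = f.sum(axis=0)
    u = (c[:, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, u) - f) / tau   # BGK collision
    f[1] = np.roll(f[1], 1)                # streaming, periodic boundaries
    f[2] = np.roll(f[2], -1)

rho = f.sum(axis=0)
```

The pulse splits into two fronts travelling at the lattice sound speed cs = 1/√3 ≈ 0.577 cells per step, and total mass is conserved to machine precision.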

  9. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditional aerospace software testing relies on engineers' individual experience and on communication with software developers to describe the software under test and to write test cases by hand, a process that is time-consuming, inefficient and error-prone. With the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate test case documents efficiently and accurately. Accurately expressing a process with a UML model depends on the paths that can be reached through it, yet existing path generation algorithms are either too simple, unable to combine branch and loop structures into paths, or so elaborate that they generate meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our experience with aerospace software, we have developed a tailored path generation algorithm for UML graphic models of aerospace test software.
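
One common compromise between the two extremes described above is bounded path enumeration: a depth-first search that traverses each edge at most k times, so every branch is covered and each loop is expanded a limited number of times. A minimal sketch on a hypothetical activity graph:

```python
# Bounded path enumeration for a control-flow graph: depth-first search that
# traverses each edge at most `max_edge_visits` times, covering branches and
# expanding each loop a limited number of times. The graph is hypothetical.
def bounded_paths(graph, start, end, max_edge_visits=1):
    paths = []

    def dfs(node, path, counts):
        if node == end:
            paths.append(path[:])
            return
        for nxt in graph.get(node, []):
            edge = (node, nxt)
            if counts.get(edge, 0) < max_edge_visits:
                counts[edge] = counts.get(edge, 0) + 1
                path.append(nxt)
                dfs(nxt, path, counts)
                path.pop()
                counts[edge] -= 1

    dfs(start, [start], {})
    return paths

# branch at B, loop C -> B: two paths cover both the loop body and the exit
graph = {'A': ['B'], 'B': ['C', 'D'], 'C': ['B']}
paths = bounded_paths(graph, 'A', 'D')
```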

  10. Flapping wing applied to wind generators

    NASA Astrophysics Data System (ADS)

    Colidiuc, Alexandra; Galetuse, Stelian; Suatean, Bogdan

    2012-11-01

    The new conditions at the international level for energy source distribution and the continuous increase in energy consumption call for alternative resources that keep the environment clean. This paper offers a new approach for a wind generator based on a theoretical aerodynamic model. The new model was used to test the influence of replacing a conventional wind turbine airfoil with a bird-like airfoil. The aim is to calculate the efficiency of the new wind generator model. A representative direction for using renewable energy is the transformation of wind energy into electrical energy with the help of wind turbines; the development of such systems leads to new solutions based on high efficiency, reduced costs and suitability for the implementation conditions.

  11. Phenomenological model of photoluminescence degradation and photoinduced defect formation in silicon nanocrystal ensembles under singlet oxygen generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongalsky, Maxim B., E-mail: mgongalsky@gmail.com; Timoshenko, Victor Yu.

    2014-12-28

    We propose a phenomenological model to explain photoluminescence degradation of silicon nanocrystals under singlet oxygen generation in gaseous and liquid systems. The model considers coupled rate equations, which take into account the exciton radiative recombination in silicon nanocrystals, photosensitization of singlet oxygen generation, defect formation on the surface of silicon nanocrystals as well as quenching processes for both excitons and singlet oxygen molecules. The model describes well the experimentally observed power-law dependences of the photoluminescence intensity, singlet oxygen concentration, and lifetime versus photoexcitation time. The defect concentration in silicon nanocrystals increases by a power law with a fractional exponent, which depends on the singlet oxygen concentration and ambient conditions. The obtained results are discussed in view of optimization of the photosensitized singlet oxygen generation for biomedical applications.
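
A coupled-rate-equation system of this kind can be sketched as follows; the rate constants and the specific coupling terms are hypothetical placeholders rather than the paper's fitted model:

```python
# Minimal coupled-rate-equation sketch in the spirit of the model: excitons n,
# singlet oxygen s and surface defects d, integrated with forward Euler.
# All rate constants are hypothetical placeholders.
def simulate(steps=10000, dt=1e-3, G=1.0, kr=1.0, ks=0.5, kd=0.05, kq=0.2):
    n = s = d = 0.0
    n_hist, d_hist = [], []
    for _ in range(steps):
        dn = G - kr * n - ks * n - kq * d * n   # generation, recombination, O2 transfer, defect quenching
        ds = ks * n - kq * s                    # sensitization and quenching of singlet oxygen
        dd = kd * s                             # defect formation driven by singlet oxygen
        n += dn * dt
        s += ds * dt
        d += dd * dt
        n_hist.append(n)
        d_hist.append(d)
    return n_hist, d_hist

n_hist, d_hist = simulate()
```

The photoluminescence proxy kr·n first rises and then degrades as defects accumulate, while the defect concentration grows monotonically, qualitatively mirroring the behavior the model is built to explain.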

  12. Identification of influencing municipal characteristics regarding household waste generation and their forecasting ability in Biscay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oribe-Garcia, Iraia, E-mail: iraia.oribe@deusto.es; Kamara-Esteban, Oihane; Martin, Cristina

    Highlights: • We have modelled household waste generation in Biscay municipalities. • We have identified relevant characteristics regarding household waste generation. • Factor models are used in order to identify the best subset of explicative variables. • Biscay's municipalities are grouped by means of hierarchical clustering. - Abstract: The planning of waste management strategies needs tools to support decisions at all stages of the process. Accurate quantification of the waste to be generated is essential for both daily management (short-term) and proper design of facilities (long-term). Designing without rigorous knowledge may have serious economic and environmental consequences. The present work aims at identifying relevant socio-economic features of municipalities regarding Household Waste (HW) generation by means of factor models. Factor models face two main drawbacks: data collection and identifying relevant explanatory variables within a heterogeneous group. Grouping observations with similar characteristics may favour the deduction of more robust models. The methodology was tested on the Biscay Province because it stands out for having very different municipalities, ranging from very rural to urban ones. Two main models are developed, one for the overall province and a second one after clustering the municipalities. The results prove that relating municipalities with specific characteristics improves the results in a very heterogeneous situation. The methodology has identified urban morphology, tourism activity, level of education and economic situation as the most influential characteristics in HW generation.
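
The gain from grouping similar municipalities before fitting can be sketched with synthetic data; a simple 1-D threshold split stands in for hierarchical clustering, and all feature and waste values below are invented for illustration:

```python
import numpy as np

# Cluster-then-regress sketch: two invented "municipality" groups whose waste
# responds to an urbanization feature under different regimes. A threshold
# split stands in for hierarchical clustering.
rng = np.random.default_rng(1)
x_rural = rng.uniform(0.0, 1.0, 50)
y_rural = 100 + 30 * x_rural + rng.normal(0, 2, 50)        # one regime
x_urban = rng.uniform(2.0, 3.0, 50)
y_urban = 200 - 30 * (x_urban - 2) + rng.normal(0, 2, 50)  # a different regime
x = np.concatenate([x_rural, x_urban])
y = np.concatenate([y_rural, y_urban])

def fit_r2(x, y):
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = y - A @ coef
    return 1 - (res ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_global = fit_r2(x, y)
mask = x < 1.5                                             # the "clustering" step
r2_clustered = min(fit_r2(x[mask], y[mask]), fit_r2(x[~mask], y[~mask]))
```

A single global line cannot follow both regimes, so even the worse of the two per-cluster fits beats the global fit, which is the effect the paper exploits.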

  13. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained staff are needed to use them for point cloud acquisition. A potentially effective alternative is 3D modelling based on a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings may be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.
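
A basic cloud-to-cloud comparison of the kind used here can be sketched as nearest-neighbour distances between the two point sets; both clouds below are synthetic stand-ins for the photogrammetric and scanner data:

```python
import numpy as np

# Cloud-to-cloud comparison sketch: nearest-neighbour distances from a noisy
# "smartphone" cloud to a reference "laser scan" cloud. Both clouds are
# synthetic stand-ins; real data would come from photogrammetry and a C10 scan.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 10.0, (500, 3))           # reference (scanner) points, metres
test = ref + rng.normal(0.0, 0.05, ref.shape)    # noisy "smartphone" reconstruction

# brute-force pairwise distances are fine at this size (500 x 500)
d2 = ((test[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)
nn = np.sqrt(d2.min(axis=1))                     # distance to closest reference point
rmse = float(np.sqrt((nn ** 2).mean()))
```

For large clouds a k-d tree would replace the brute-force distance matrix, but the metric, per-point nearest-neighbour distance summarized as an RMSE, stays the same.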

  14. Stochastic Modeling based on Dictionary Approach for the Generation of Daily Precipitation Occurrences

    NASA Astrophysics Data System (ADS)

    Panu, U. S.; Ng, W.; Rasmussen, P. F.

    2009-12-01

    The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov Chain and Alternating Renewal Process (ARP)) of the precipitation occurrence processes generally assume the existence of short-term temporal dependency between neighboring states while implying the existence of long-term independency (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by the confinement of states within segments (e.g., the persistency of homogeneous states within the dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate the biologically meaningful "words" (i.e., segments with a specific pattern in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrences comprising wet or dry states. Such flexibility provides a unique advantage over the traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely, the alternating renewal process using the Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences.
Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal dependency in the data through the concepts of autocorrelation, average mutual information, and the Hurst exponent, (ii) the ability of the models to preserve the persistency within homogeneous dry/wet weather states through analysis of dry/wet-spell lengths between the observed and generated data, and (iii) the ability to assess the goodness-of-fit of the models through likelihood-based estimates (i.e., AIC and BIC). The past 30 years of observed daily precipitation records from 10 Canadian meteorological stations were utilized for comparative analyses of the three models. In general, the Markov chain model performed well. The remaining models were competitive with one another depending upon the scope and purpose of the comparison. Although the Markov chain model has a certain advantage in the generation of daily precipitation occurrences, the structural flexibility offered by the Dictionary approach in modeling the varied segment lengths of heterogeneous weather states provides a distinct and powerful advantage in the generation of precipitation sequences.
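
For reference, a second-order Markov chain generator of wet/dry occurrences (one of the three compared model classes) can be sketched as follows, with hypothetical transition probabilities rather than values fitted to the Canadian records:

```python
import random

# Second-order Markov chain generator of daily wet ('w') / dry ('d') occurrences.
# Transition probabilities are hypothetical, not fitted to the Canadian records.
random.seed(42)
p_wet = {('d', 'd'): 0.15, ('d', 'w'): 0.45, ('w', 'd'): 0.35, ('w', 'w'): 0.65}

def generate(n, init=('d', 'd')):
    states = list(init)
    for _ in range(n):
        prev = (states[-2], states[-1])
        states.append('w' if random.random() < p_wet[prev] else 'd')
    return states

seq = generate(20000)
# sanity check: the empirical P(wet | wet, wet) should match the input probability
triples = [(seq[i], seq[i + 1], seq[i + 2]) for i in range(len(seq) - 2)]
ww = [t for t in triples if t[:2] == ('w', 'w')]
p_hat = sum(1 for t in ww if t[2] == 'w') / len(ww)
```

The fixed two-day memory is exactly the limitation the Dictionary approach relaxes: a "word" may condition on segments of any length rather than on the last two states.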

  15. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
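
The proportional placement of parametric library objects can be sketched in miniature; the function below is a hypothetical stand-in for a GDL rule that centres one window per bay and floor, not actual GDL:

```python
# Miniature parametric-facade rule: centre one window in each bay/floor cell of
# a rectangular facade. A hypothetical stand-in for a GDL library object driven
# by user parameters; not actual GDL code.
def facade_windows(fw, fh, bays, floors, win_w, win_h):
    """Return (x, y) lower-left corners of windows, one per bay and floor."""
    cell_w, cell_h = fw / bays, fh / floors
    assert win_w < cell_w and win_h < cell_h, "window must fit inside its cell"
    return [((b + 0.5) * cell_w - win_w / 2, (f + 0.5) * cell_h - win_h / 2)
            for f in range(floors) for b in range(bays)]

wins = facade_windows(12.0, 9.0, bays=4, floors=3, win_w=1.2, win_h=1.8)
```

Changing a single parameter (say bays=5) regenerates the whole layout, which is the essence of the procedural approach described above.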

  16. An advanced stochastic weather generator for simulating 2-D high-resolution climate variables

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2017-07-01

    A new stochastic weather generator, Advanced WEather GENerator for a two-dimensional grid (AWE-GEN-2d) is presented. The model combines physical and stochastic approaches to simulate key meteorological variables at high spatial and temporal resolution: 2 km × 2 km and 5 min for precipitation and cloud cover and 100 m × 100 m and 1 h for near-surface air temperature, solar radiation, vapor pressure, atmospheric pressure, and near-surface wind. The model requires spatially distributed data for the calibration process, which can nowadays be obtained by remote sensing devices (weather radar and satellites), reanalysis data sets and ground stations. AWE-GEN-2d is parsimonious in terms of computational demand and therefore is particularly suitable for studies where exploring internal climatic variability at multiple spatial and temporal scales is fundamental. Applications of the model include models of environmental systems, such as hydrological and geomorphological models, where high-resolution spatial and temporal meteorological forcing is crucial. The weather generator was calibrated and validated for the Engelberg region, an area with complex topography in the Swiss Alps. Model test shows that the climate variables are generated by AWE-GEN-2d with a level of accuracy that is sufficient for many practical applications.
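
A single-site, single-variable flavour of such a generator can be sketched as a diurnal cycle plus an AR(1) anomaly; the parameters are illustrative, and AWE-GEN-2d's actual multivariate, gridded formulation is far richer:

```python
import math
import random

# Minimal single-site weather-generator sketch: hourly temperature as a
# deterministic diurnal cycle plus a persistent AR(1) stochastic anomaly.
# Parameters are illustrative placeholders.
random.seed(7)
mean_t, amp, phi, sigma = 12.0, 5.0, 0.9, 0.8

series, anom = [], 0.0
for h in range(24 * 365):
    anom = phi * anom + random.gauss(0.0, sigma)                  # AR(1) anomaly
    diurnal = amp * math.sin(2 * math.pi * ((h % 24) - 9) / 24)   # peak at 15:00
    series.append(mean_t + diurnal + anom)
```

The AR(1) coefficient controls the temporal persistence of the anomaly, the same role that the calibrated stochastic components play at each grid cell of the full model.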

  17. Modeling and simulation of pressure waves generated by nano-thermite reactions

    NASA Astrophysics Data System (ADS)

    Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; (Yuki) Horie, Yasuyuki

    2012-11-01

    This paper reports the modeling of pressure waves from the explosive reaction of nano-thermites consisting of mixtures of nanosized aluminum and oxidizer granules. Such nanostructured thermites have higher energy density (up to 26 kJ/cm3) and can generate a transient pressure pulse four times larger than that from trinitrotoluene (TNT) based on volume equivalence. A plausible explanation for the high pressure generation is that the reaction times are much shorter than the time for a shock wave to propagate away from the reagents region so that all the reaction energy is dumped into the gaseous products almost instantaneously and thereby a strong shock wave is generated. The goal of the modeling is to characterize the gas dynamic behavior for thermite reactions in a cylindrical reaction chamber and to model the experimentally measured pressure histories. To simplify the details of the initial stage of the explosive reaction, it is assumed that the reaction generates a one dimensional shock wave into an air-filled cylinder and propagates down the tube in a self-similar mode. Experimental data for Al/Bi2O3 mixtures were used to validate the model with attention focused on the ratio of specific heats and the drag coefficient. Model predictions are in good agreement with the measured pressure histories.

  18. Dynamic Models Applied to Landslides: Case Study of Angangueo, Michoacán, México.

    NASA Astrophysics Data System (ADS)

    Torres Fernandez, L.; Hernández Madrigal, V. M.; Capra, L.; Domínguez Mota, F. J.

    2017-12-01

    Most existing models for landslide zonation are static and do not consider the dynamic behavior of the triggering factor. This results in a limited representation of the actual zonation of slope instability: such maps have short-term validity and cannot be used to design early warning systems. In Mexico in particular, these models are static because they do not consider triggering factors such as precipitation. In this work, we present a numerical evaluation of landslide susceptibility based on probabilistic methods. These rely on time series generated from meteorological stations; since the available information is limited, precipitation over the zone is simulated by interpolation. The resulting information is integrated in PCRaster and, together with the conditioning factors, makes it possible to build a dynamic model. The model will be applied to landslide zoning in the municipality of Angangueo, which is characterized by frequent debris and mud flows as well as translational and rotational landslides triggered by atypical precipitation events such as those recorded in 2010, which caused economic and human losses. With these models, it would be possible to generate probable scenarios that help the population of Angangueo reduce risk and carry out ongoing resilience activities.

  19. A review of mechanisms and modelling procedures for landslide tsunamis

    NASA Astrophysics Data System (ADS)

    Løvholt, Finn; Harbitz, Carl B.; Glimsdal, Sylfest

    2017-04-01

    Landslides, including volcano flank collapses or volcanically induced flows, constitute the second-most important cause of tsunamis after earthquakes. Compared to earthquakes, landslides are more diverse with respect to how they generate tsunamis. Here, we give an overview of the main generation mechanisms for landslide tsunamis. In the presentation, a mix of results from analytical models, numerical models, laboratory experiments, and case studies is used to illustrate the diversity, but also to point out some common characteristics. Different numerical modelling techniques for the landslide evolution and for the tsunami generation and propagation, as well as the effect of frequency dispersion, are also briefly discussed. Basic tsunami generation mechanisms for different types of landslides, ranging from large submarine translational landslides to impulsive submarine slumps and violent subaerial landslides and volcano flank collapses, are reviewed. The importance of the landslide kinematics is given attention, including the interplay between landslide acceleration, the landslide velocity-to-depth ratio (Froude number) and the landslide dimensions. Using numerical simulations, we demonstrate how landslide deformation and retrogressive failure development influence tsunamigenesis. Generation mechanisms for subaerial landslides are reviewed by means of scaling relations from laboratory experiments and numerical modelling. Finally, it is demonstrated how the different degrees of complexity in landslide tsunamigenesis need to be reflected by increased sophistication in numerical models.
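
The landslide Froude number mentioned above compares the slide speed with the long-wave speed √(gh); tsunami generation is most efficient as Fr approaches 1 (near-critical motion). A minimal sketch with illustrative numbers:

```python
import math

# Landslide Froude number: slide speed over the shallow-water long-wave speed.
# Generation is most efficient near Fr = 1; the example values are illustrative.
def froude(slide_speed, water_depth, g=9.81):
    return slide_speed / math.sqrt(g * water_depth)

fr_submarine = froude(20.0, 500.0)   # deep submarine slide: strongly subcritical
fr_shallow = froude(20.0, 40.0)      # same slide in shallow water: near-critical
```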

  20. Large-scale building scenes reconstruction from close-range images based on line and plane feature

    NASA Astrophysics Data System (ADS)

    Ding, Yi; Zhang, Jianqing

    2007-11-01

    Automatically generating 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry and urban planning. In this paper we address the main problems and available solutions for the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building models. There are several points of novelty: first, we reconstruct the coarse wireframe model using line-segment matching with an epipolar geometry constraint; secondly, we detect the positions of all windows in the image and reconstruct the windows by establishing corner-point correspondences between images, then add the windows to the coarse model to refine the building model. The strategy is illustrated on an image triple of a college building.

  1. A Protocol for Generating and Exchanging (Genome-Scale) Metabolic Resource Allocation Models.

    PubMed

    Reimers, Alexandra-M; Lindhorst, Henning; Waldherr, Steffen

    2017-09-06

    In this article, we present a protocol for generating a complete (genome-scale) metabolic resource allocation model, as well as a proposal for how to represent such models in the systems biology markup language (SBML). Such models are used to investigate enzyme levels and achievable growth rates in large-scale metabolic networks. Although the idea of metabolic resource allocation studies has been present in the field of systems biology for some years, no guidelines for generating such a model have been published up to now. This paper presents step-by-step instructions for building a (dynamic) resource allocation model, starting with prerequisites such as a genome-scale metabolic reconstruction, through building protein and noncatalytic biomass synthesis reactions and assigning turnover rates for each reaction. In addition, we explain how one can use SBML level 3 in combination with the flux balance constraints and our resource allocation modeling annotation to represent such models.
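
The core trade-off in resource allocation models, splitting a finite enzyme budget across reactions, can be sketched with a toy two-step pathway; this illustrates the concept only, not the SBML protocol itself:

```python
# Toy resource-allocation problem (an illustration of the concept, not the SBML
# protocol): split a fixed enzyme budget E between two sequential reactions with
# catalytic rate constants k1 and k2; steady-state flux is set by the slower step.
k1, k2, E = 2.0, 6.0, 1.0

def flux(e1):
    """Pathway flux when e1 of the budget goes to step 1 and E - e1 to step 2."""
    return min(k1 * e1, k2 * (E - e1))

best_e1 = max((i / 1000 for i in range(1001)), key=flux)
analytic = E * k2 / (k1 + k2)   # balance point: k1 * e1 = k2 * (E - e1)
```

The grid search recovers the analytic optimum, at which the two steps carry equal flux; genome-scale versions of this balancing act are what the protocol's models solve over thousands of reactions.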

  2. Multilingual Generalization of the ModelCreator Software for Math Item Generation. Research Report. ETS RR-05-02

    ERIC Educational Resources Information Center

    Higgins, Derrick; Futagi, Yoko; Deane, Paul

    2005-01-01

    This paper reports on the process of modifying the ModelCreator item generation system to produce output in multiple languages. In particular, Japanese and Spanish are now supported in addition to English. The addition of multilingual functionality was considerably facilitated by the general formulation of our natural language generation system,…

  3. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Treesearch

    Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...

  4. An Initial Model for Generative Design Research: Bringing Together Generative Focus Group (GFG) and Experience Reflection Modelling (ERM)

    ERIC Educational Resources Information Center

    Bakirlioglu, Yekta; Ogur, Dilruba; Dogan, Cagla; Turhan, Senem

    2016-01-01

    Understanding people's experiences and the context of use of a product at the earliest stages of the design process has in the last decade become an important aspect of both the design profession and design education. Generative design research helps designers understand user experiences, while also throwing light on their current needs,…

  5. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualini, Donatella

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.

  6. An ontology model for nursing narratives with natural language generation technology.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung

    2013-01-01

    The purpose of this study was to develop an ontology model to generate nursing narratives as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
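
Template-based generation from entity-attribute-value triplets can be sketched as below; the templates and triplets are invented examples, whereas in the paper an ontology model drives this mapping along the nursing process:

```python
# Minimal template-based narrative generation from entity-attribute-value (EAV)
# triplets. Templates and triplets are invented examples; in the paper the
# ontology model drives this mapping along the nursing process.
templates = {
    ("pain", "severity"): "The patient reports {value} pain.",
    ("wound", "state"): "The surgical wound is {value}.",
    ("analgesic", "administered"): "An analgesic was administered: {value}.",
}

def narrate(triplets):
    """Turn each (entity, attribute, value) triplet into a sentence and join them."""
    return " ".join(templates[(e, a)].format(value=v) for e, a, v in triplets)

note = narrate([("pain", "severity", "moderate"),
                ("wound", "state", "clean and dry"),
                ("analgesic", "administered", "ibuprofen 400 mg")])
```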

  7. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.

    2013-03-01

    An increasing interest in wind energy and the advance of related technologies have increased the integration of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  8. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  9. Analysis of response to 20 generations of selection for body composition in mice: fit to infinitesimal model assumptions

    PubMed Central

    Martinez, Victor; Bünger, Lutz; Hill, William G

    2000-01-01

    Data were analysed from a divergent selection experiment for an indicator of body composition in the mouse, the ratio of gonadal fat pad to body weight (GFPR). Lines were selected for 20 generations for fat (F), lean (L) or were unselected (C), with three replicates of each. Selection was within full-sib families, 16 families per replicate for the first seven generations, eight subsequently. At generation 20, GFPR in the F lines was twice and in the L lines half that of C. A log transformation removed both asymmetry of response and heterogeneity of variance among lines, and so was used throughout. Estimates of genetic variance and heritability (approximately 50%) obtained using REML with an animal model were very similar, whether estimated from the first few generations of selection, or from all 20 generations, or from late generations having fitted pedigree. The estimates were also similar when estimated from selected or control lines. Estimates from REML also agreed with estimates of realised heritability. The results all accord with expectations under the infinitesimal model, despite the four-fold changes in mean. Relaxed selection lines, derived from generation 20, showed little regression in fatness after 40 generations without selection. PMID:14736404
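
For reference, realized heritability is the ratio of the selection response to the selection differential; a one-line sketch with illustrative numbers (not the experiment's estimates):

```python
# Realized heritability from a selection experiment: h2 = R / S, the cumulative
# response to selection divided by the cumulative selection differential.
# The numbers are illustrative, not estimates from the mouse lines.
def realized_h2(response, selection_differential):
    return response / selection_differential

h2 = realized_h2(1.2, 2.4)   # 0.5, i.e. roughly the ~50% heritability scale reported above
```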

  10. Chronic contamination decreases disease spread: a Daphnia–fungus–copper case study

    PubMed Central

    Civitello, David J.; Forys, Philip; Johnson, Adam P.; Hall, Spencer R.

    2012-01-01

    Chemical contamination and disease outbreaks have increased in many ecosystems. However, connecting pollution to disease spread remains difficult, in part, because contaminants can simultaneously exert direct and multi-generational effects on several host and parasite traits. To address these challenges, we parametrized a model using a zooplankton–fungus–copper system. In individual-level assays, we considered three sublethal contamination scenarios: no contamination, single-generation contamination (hosts and parasites exposed only during the assays) and multi-generational contamination (hosts and parasites exposed for several generations prior to and during the assays). Contamination boosted transmission by increasing contact of hosts with parasites. However, it diminished parasite reproduction by reducing the size and lifespan of infected hosts. Multi-generational contamination further reduced parasite reproduction. The parametrized model predicted that a single generation of contamination would enhance disease spread (via enhanced transmission), whereas multi-generational contamination would inhibit epidemics relative to unpolluted conditions (through greatly depressed parasite reproduction). In a population-level experiment, multi-generational contamination reduced the size of experimental epidemics but did not affect Daphnia populations without disease. This result highlights the importance of multi-generational effects for disease dynamics. Such integration of models with experiments can provide predictive power for disease problems in contaminated environments. PMID:22593104

  11. Two novel ALK mutations mediate acquired resistance to the next generation ALK inhibitor alectinib

    PubMed Central

    Katayama, Ryohei; Friboulet, Luc; Koike, Sumie; Lockerman, Elizabeth L.; Khan, Tahsin M.; Gainor, Justin F.; Iafrate, A. John; Takeuchi, Kengo; Taiji, Makoto; Okuno, Yasushi; Fujita, Naoya; Engelman, Jeffrey A.; Shaw, Alice T.

    2014-01-01

Purpose The first-generation ALK tyrosine kinase inhibitor (TKI) crizotinib is a standard therapy for patients with ALK-rearranged NSCLC. Several next-generation ALK-TKIs have entered the clinic and have shown promising activity in crizotinib-resistant patients. As patients still relapse even on these next-generation ALK-TKIs, we examined mechanisms of resistance to the next-generation ALK-TKI alectinib and potential strategies to overcome this resistance. Experimental Design We established a cell line model of alectinib resistance, and analyzed a resistant tumor specimen from a patient who had relapsed on alectinib. We developed Ba/F3 models harboring alectinib-resistant ALK mutations and evaluated the potency of other next-generation ALK-TKIs in these models. We tested the antitumor activity of the next-generation ALK-TKI ceritinib in the patient with acquired resistance to alectinib. To elucidate structure-activity relationships of ALK mutations, we performed computational thermodynamic simulation with MP-CAFEE. Results We identified a novel V1180L gatekeeper mutation from the cell line model and a second novel I1171T mutation from the patient who developed resistance to alectinib. Both ALK mutations conferred resistance to alectinib as well as to crizotinib, but were sensitive to ceritinib and other next-generation ALK-TKIs. Treatment of the patient with ceritinib led to a marked response. Thermodynamics simulation suggests that both mutations lead to distinct structural alterations that decrease the binding affinity with alectinib. Conclusions We have identified two novel ALK mutations arising after alectinib exposure that are sensitive to other next-generation ALK-TKIs. The ability of ceritinib to overcome alectinib-resistance mutations suggests a potential role for sequential therapy with multiple next-generation ALK-TKIs. PMID:25228534

  12. Two novel ALK mutations mediate acquired resistance to the next-generation ALK inhibitor alectinib.

    PubMed

    Katayama, Ryohei; Friboulet, Luc; Koike, Sumie; Lockerman, Elizabeth L; Khan, Tahsin M; Gainor, Justin F; Iafrate, A John; Takeuchi, Kengo; Taiji, Makoto; Okuno, Yasushi; Fujita, Naoya; Engelman, Jeffrey A; Shaw, Alice T

    2014-11-15

    The first-generation ALK tyrosine kinase inhibitor (TKI) crizotinib is a standard therapy for patients with ALK-rearranged non-small cell lung cancer (NSCLC). Several next-generation ALK-TKIs have entered the clinic and have shown promising activity in crizotinib-resistant patients. As patients still relapse even on these next-generation ALK-TKIs, we examined mechanisms of resistance to the next-generation ALK-TKI alectinib and potential strategies to overcome this resistance. We established a cell line model of alectinib resistance, and analyzed a resistant tumor specimen from a patient who had relapsed on alectinib. We developed Ba/F3 models harboring alectinib-resistant ALK mutations and evaluated the potency of other next-generation ALK-TKIs in these models. We tested the antitumor activity of the next-generation ALK-TKI ceritinib in the patient with acquired resistance to alectinib. To elucidate structure-activity relationships of ALK mutations, we performed computational thermodynamic simulation with MP-CAFEE. We identified a novel V1180L gatekeeper mutation from the cell line model and a second novel I1171T mutation from the patient who developed resistance to alectinib. Both ALK mutations conferred resistance to alectinib as well as to crizotinib, but were sensitive to ceritinib and other next-generation ALK-TKIs. Treatment of the patient with ceritinib led to a marked response. Thermodynamics simulation suggests that both mutations lead to distinct structural alterations that decrease the binding affinity with alectinib. We have identified two novel ALK mutations arising after alectinib exposure that are sensitive to other next-generation ALK-TKIs. The ability of ceritinib to overcome alectinib-resistance mutations suggests a potential role for sequential therapy with multiple next-generation ALK-TKIs. ©2014 American Association for Cancer Research.

  13. Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model.

    PubMed

    Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J

    2006-11-01

    The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.

  14. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task, so an efficient way to perform verification with less effort in a shorter time is needed. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in the IP-XACT format. For easier creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
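The spreadsheet-to-IP-XACT translation step could be sketched as below. This is a minimal illustration, not the authors' tool: the row layout, register names, and the simplified element names are all assumptions (real IP-XACT uses namespaced `spirit:`/`ipxact:` elements and a richer schema).

```python
import xml.etree.ElementTree as ET

# Hypothetical spreadsheet rows: (name, address offset, size in bits, access).
ROWS = [
    ("CTRL",   "0x00", 32, "read-write"),
    ("STATUS", "0x04", 32, "read-only"),
]

def rows_to_ipxact(rows):
    """Translate spreadsheet-style register rows into a minimal
    IP-XACT-like <addressBlock> description (element names simplified)."""
    block = ET.Element("addressBlock")
    for name, offset, size, access in rows:
        reg = ET.SubElement(block, "register")
        ET.SubElement(reg, "name").text = name
        ET.SubElement(reg, "addressOffset").text = offset
        ET.SubElement(reg, "size").text = str(size)
        ET.SubElement(reg, "access").text = access
    return ET.tostring(block, encoding="unicode")
```

A commercial register-model generator would then consume the resulting XML, so designers only maintain the spreadsheet.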

  15. The viscous lee wave problem and its implications for ocean modelling

    NASA Astrophysics Data System (ADS)

    Shakespeare, Callum J.; Hogg, Andrew McC.

    2017-05-01

Ocean circulation models employ 'turbulent' viscosity and diffusivity to represent unresolved sub-gridscale processes such as breaking internal waves. Computational power has now advanced sufficiently to permit regional ocean circulation models to be run at high (100 m-1 km) horizontal resolution, resolving a significant part of the internal wave spectrum. Here we develop theory for boundary-generated internal waves in such models, and in particular for where the waves dissipate their energy. We focus specifically on the steady lee wave problem, where stationary waves are generated by a large-scale flow acting across ocean bottom topography. We generalise the energy flux expressions of [Bell, T., 1975. Topographically generated internal waves in the open ocean. J. Geophys. Res. 80, 320-327] to include the effect of arbitrary viscosity and diffusivity. Applying these results for realistic parameter choices, we show that in the present generation of models, with O(1) m²s⁻¹ horizontal viscosity/diffusivity, boundary-generated waves will inevitably dissipate the majority of their energy within a few hundred metres of the boundary. This dissipation is a direct consequence of the artificially high viscosity/diffusivity, which is not always physically justified in numerical models. Hence, caution is necessary in comparing model results to ocean observations. Our theory further predicts that O(10⁻²) m²s⁻¹ horizontal and O(10⁻⁴) m²s⁻¹ vertical viscosity/diffusivity is required to achieve a qualitatively inviscid representation of internal wave dynamics in ocean models.

  16. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
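As background, a binary Markov process of the kind studied here is fully specified by two transition probabilities. A minimal sampler (merely a simulator, not the paper's minimal generative machine, which concerns the finite-state topology needed to generate the process) might look like:

```python
import random

def generate_binary_markov(p01, p10, n, seed=0):
    """Sample n symbols from a binary Markov chain.

    p01 = P(next=1 | current=0), p10 = P(next=0 | current=1).
    """
    rng = random.Random(seed)
    # Start from the stationary distribution: pi(1) = p01 / (p01 + p10).
    state = 1 if rng.random() < p01 / (p01 + p10) else 0
    out = []
    for _ in range(n):
        out.append(state)
        if state == 0:
            state = 1 if rng.random() < p01 else 0
        else:
            state = 0 if rng.random() < p10 else 1
    return out
```

Setting p01 = 1 - p10 makes successive symbols independent, the approximately i.i.d. regime where the abstract reports the prediction-generation gap is largest.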

  17. Generating human-like movements on an anthropomorphic robot using an interior point method

    NASA Astrophysics Data System (ADS)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model to address the generation of more complex movement sequences in scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  18. Burial history, thermal history and hydrocarbon generation modelling of the Jurassic source rocks in the basement of the Polish Carpathian Foredeep and Outer Carpathians (SE Poland)

    NASA Astrophysics Data System (ADS)

    Kosakowski, Paweł; Wróbel, Magdalena

    2012-08-01

Burial history, thermal maturity, and timing of hydrocarbon generation were modelled for the Jurassic source rocks in the basement of the Carpathian Foredeep and the marginal part of the Outer Carpathians. The area of investigation was bounded to the west by Kraków and to the east by Rzeszów. The modelling was carried out in profiles of the wells: Będzienica 2, Dębica 10K, Góra Ropczycka 1K, Goleszów 5, Nawsie 1, Pławowice E1 and Pilzno 40. The organic matter, containing gas-prone Type III kerogen with an admixture of Type II kerogen, is immature or, at most, early mature (up to 0.7% vitrinite reflectance). The highest thermal maturity is recorded in the south-eastern part of the study area, where the Jurassic strata are buried more deeply. The thermal modelling showed that the organic matter reached the initial phase of the "oil window" during the stage of Carpathian overthrusting. The numerical modelling indicated that the onset of hydrocarbon generation from the Middle Jurassic source rocks was also connected with the Carpathian thrust belt, with the peak of hydrocarbon generation taking place during the orogenic stage of the overthrusting. The amount of generated hydrocarbons is generally small, a consequence of the low maturity and low transformation degree of the kerogen, and the generated hydrocarbons were not expelled from their source rock. An analysis of the maturity distribution and transformation degree of the Jurassic organic matter shows that the best conditions for hydrocarbon generation most probably occurred in areas deeply buried under the Outer Carpathians, where the "generation kitchen" should therefore be sought.

  19. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    PubMed

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
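The grey-system half of such a hybrid could be sketched with the standard GM(1,1) model. This is a generic textbook formulation (assuming numpy), not the paper's calibrated hybrid, which couples the grey model with SARIMA and Xiamen-specific data:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])          # consecutive-neighbour means
    # Least-squares estimate of a, b in the grey equation x0[k] = -a*z1[k] + b.
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    # Time-response function of the whitened equation dx1/dt + a*x1 = b.
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[n:]                      # forecasts beyond the sample
```

GM(1,1) needs only a handful of observations and no exogenous covariates, which is exactly the property the abstract highlights (no demographic or socioeconomic inputs).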

  20. Need to Knowledge (NtK) Model: an evidence-based framework for generating technological innovations with socio-economic impacts.

    PubMed

    Flagg, Jennifer L; Lane, Joseph P; Lockett, Michelle M

    2013-02-15

Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. 
The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts.

  1. Need to Knowledge (NtK) Model: an evidence-based framework for generating technological innovations with socio-economic impacts

    PubMed Central

    2013-01-01

    Background Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. Discussion The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. Summary The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. 
The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts. PMID:23414369

  2. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.

Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.

  3. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE PAGES

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.; ...

    2017-04-07

Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.
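The risk measure itself is easy to illustrate for a discrete scenario set. The sketch below is generic (the confidence level `alpha` and equal weighting are arbitrary choices), not the paper's full expansion-planning formulation:

```python
def cvar(costs, probs, alpha):
    """Conditional value at risk: expected cost in the worst (1 - alpha) tail."""
    # Walk scenarios from most to least costly, accumulating the tail mass.
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    tail = 1.0 - alpha
    acc, total = 0.0, 0.0
    for i in order:
        w = min(probs[i], tail - acc)       # clip the last scenario's weight
        total += w * costs[i]
        acc += w
        if acc >= tail - 1e-12:
            break
    return total / tail

def risk_averse_objective(costs, probs, alpha=0.95, weight=0.5):
    """Weighted average of expected cost and CVaR, as in the abstract."""
    expected = sum(c * p for c, p in zip(costs, probs))
    return (1 - weight) * expected + weight * cvar(costs, probs, alpha)
```

Setting `weight=0` recovers the risk-neutral planner; increasing it shifts investment toward hedging the costly scenarios.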

  4. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers to maximize the social welfare in large-scale power grids.

  5. The flavor-locked flavorful two Higgs doublet model

    NASA Astrophysics Data System (ADS)

    Altmannshofer, Wolfgang; Gori, Stefania; Robinson, Dean J.; Tuckler, Douglas

    2018-03-01

We propose a new framework to generate the Standard Model (SM) quark flavor hierarchies in the context of two Higgs doublet models (2HDM). The 'flavorful' 2HDM couples the SM-like Higgs doublet exclusively to the third quark generation, while the first two generations couple exclusively to an additional source of electroweak symmetry breaking, potentially generating striking collider signatures. We synthesize the flavorful 2HDM with the 'flavor-locking' mechanism, which dynamically generates large quark mass hierarchies through a flavor-blind portal to distinct flavon and hierarchon sectors: dynamical alignment of the flavons allows a unique hierarchon to control the respective quark masses. We further develop the theoretical construction of this mechanism, and show that in the context of a flavorful 2HDM-type setup it can automatically achieve realistic flavor structures: the CKM matrix is automatically hierarchical, with |V_cb| and |V_ub| generically of the observed size. Exotic contributions to meson oscillation observables may also be generated, which may accommodate current data mildly better than the SM itself.

  6. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
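Stripped of the chemistry, the route-identification idea is a path search over a reaction network. A toy version is sketched below; the intermediate names are hypothetical placeholders (the real network is mined from Reaxys and in-house data, and route scoring would go far beyond hop count):

```python
from collections import deque

# Toy reaction network: nodes are compounds, directed edges are known
# transformations. All intermediates here are invented for illustration.
NETWORK = {
    "limonene": ["intermediate_A", "intermediate_B"],
    "intermediate_A": ["intermediate_C"],
    "intermediate_B": ["paracetamol"],
    "intermediate_C": ["paracetamol"],
}

def shortest_route(network, start, target):
    """Breadth-first search for the reaction route with the fewest steps."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                      # no route found in the network
```

In practice each edge would carry yield, cost, and environmental indicators, turning the search into the weighted optimisation the abstract describes.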

  7. Search for Contact Interactions in Dilepton Final State in the CMS Experiment: Generator-Level Studies

    NASA Astrophysics Data System (ADS)

    Zaleski, Shawn

    2017-01-01

A set of contact interaction (CI) Monte Carlo events, for which Standard Model Drell-Yan events are background, are generated using a leading-order parton-shower generator, Pythia8. We consider three isoscalar models with three different helicity structures, left-left (LL), left-right/right-left (LR), and right-right (RR), each with destructive and constructive interference. For each of these models, 150,000 events are generated for analysis of CI interactions in the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) with a centre-of-mass energy of 13 TeV. This is a generator-level study, and detector effects are accounted for by application of kinematic cuts on the generator-level quantities rather than application of a detailed detector simulation package (e.g. GEANT). Distributions of dilepton invariant mass, Collins-Soper angle, and the forward-backward asymmetry are compared with those arising from pure Drell-Yan events.
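The forward-backward asymmetry compared above has a simple counting estimator based on the sign of the Collins-Soper angle (a definition-level sketch, not the CMS analysis code):

```python
def forward_backward_asymmetry(cos_theta_cs):
    """A_FB = (N_F - N_B) / (N_F + N_B) from Collins-Soper cos(theta) values.

    Forward events have cos(theta) > 0, backward events cos(theta) < 0.
    """
    n_f = sum(1 for c in cos_theta_cs if c > 0)
    n_b = sum(1 for c in cos_theta_cs if c < 0)
    return (n_f - n_b) / (n_f + n_b)
```

Differences in A_FB between the helicity structures (LL, LR, RR) are one of the handles for separating contact-interaction events from the Drell-Yan background.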

  8. Complexes formed between DNA and poly(amido amine) dendrimers of different generations--modelling DNA wrapping and penetration.

    PubMed

    Qamhieh, Khawla; Nylander, Tommy; Black, Camilla F; Attard, George S; Dias, Rita S; Ainalem, Marie-Louise

    2014-07-14

    This study deals with the build-up of biomaterials consisting of biopolymers, namely DNA, and soft particles, poly(amido amine) (PAMAM) dendrimers, and how to model their interactions. We adopted and applied an analytical model to provide further insight into the complexation between DNA (4331 bp) and positively charged PAMAM dendrimers of generations 1, 2, 4, 6 and 8, previously studied experimentally. The theoretical models applied describe the DNA as a semiflexible polyelectrolyte that interacts with dendrimers considered as either hard (impenetrable) spheres or as penetrable and soft spheres. We found that the number of DNA turns around one dendrimer, thus forming a complex, increases with the dendrimer size or generation. The DNA penetration required for the complex to become charge neutral depends on dendrimer generation, where lower generation dendrimers require little penetration to give charge neutral complexes. High generation dendrimers display charge inversion for all considered dendrimer sizes and degrees of penetration. Consistent with the morphologies observed experimentally for dendrimer/DNA aggregates, where highly ordered rods and toroids are found for low generation dendrimers, the DNA wraps less than one turn around the dendrimer. Disordered globular structures appear for high generation dendrimers, where the DNA wraps several turns around the dendrimer. Particularly noteworthy is that the dendrimer generation 4 complexes, where the DNA wraps about one turn around the dendrimers, are borderline cases and can form all types of morphologies. The net-charges of the aggregate have been estimated using zeta potential measurements and are discussed within the theoretical framework.

  9. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
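The key property these grids inherit from conformal mapping is orthogonality. The Schwarz-Christoffel map itself is involved, but the principle can be illustrated with any analytic function, e.g. w = exp(z) (a purely illustrative choice; the paper's mappings target user-prescribed irregular boundaries):

```python
import cmath

def conformal_grid(nx, ny):
    """Map a rectangular grid through the conformal function w = exp(z).

    Any conformal (analytic, nonzero-derivative) map preserves angles, so the
    image grid remains orthogonal curvilinear; this is the property the
    Schwarz-Christoffel-based grids rely on.
    """
    grid = []
    for j in range(ny):
        row = []
        for i in range(nx):
            z = complex(i / (nx - 1), j / (ny - 1) * cmath.pi / 2)
            row.append(cmath.exp(z))
        grid.append(row)
    return grid
```

The image here is an annular sector: grid lines of constant i become rays and lines of constant j become arcs, meeting at right angles everywhere.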

10. Radiometric Block Adjustment and Digital Radiometric Model Generation

    NASA Astrophysics Data System (ADS)

    Pros, A.; Colomina, I.; Navarro, J. A.; Antequera, R.; Andrinal, P.

    2013-05-01

In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In this paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmosphere radiative transfer parameters, and ground reflectances and BRDFs at the radiometric tie points, are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated, showing no radiometric differences at the seam lines.

  11. SU-F-T-447: The Impact of Treatment Planning Methods On RapidPlan Modeling for Rectum Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S; Peng, J; Li, K

    2016-06-15

    Purpose: To investigate differences in dose volume histogram (DVH) prediction between RapidPlan models trained on intensity modulated radiotherapy (IMRT) plans and on volumetric modulated arc therapy (VMAT) plans. Methods: Two DVH prediction models were generated in this study: an IMRT model trained on 83 IMRT rectum plans and a VMAT model trained on 60 VMAT rectum plans. For internal validation, 20 plans from each training database were selected to verify the clinical feasibility of the model. Then, 10 IMRT plans generated from the IMRT model (PIMRT-by-IMRT-model) and 10 IMRT plans generated from the VMAT model (PIMRT-by-VMAT-model) were compared on the dose to organs at risk (OAR), including the bladder and the left and right femoral heads. A similar comparison was performed between VMAT plans generated from the IMRT model (PVMAT-by-IMRT-model) and VMAT plans generated from the VMAT model (PVMAT-by-VMAT-model). Results: In the internal validation, all plans from the IMRT or VMAT model showed significant improvement in OAR sparing compared with the corresponding clinical plans. Compared to the PIMRT-by-VMAT-model plans, the PIMRT-by-IMRT-model plans had reductions of 6.90±3.87% (p<0.001) on V40, 6.63±3.62% (p<0.001) on V45, and 4.74±2.26% (p<0.001) on V50 in the bladder, and mean dose reductions of 2.12±1.75 Gy (p=0.004) and 2.84±1.53 Gy (p<0.001) in the right and left femoral heads, respectively. There was no significant difference in OAR sparing between PVMAT-by-IMRT-model and PVMAT-by-VMAT-model plans. Conclusion: The IMRT model for rectal cancer in RapidPlan can be applied to VMAT planning. However, the VMAT model is not recommended for IMRT planning. Caution should be taken that a planning model based on one technique may not be feasible for other planning techniques.

  12. Computer-Generated Feedback on Student Writing

    ERIC Educational Resources Information Center

    Ware, Paige

    2011-01-01

    A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…

  13. The wandering self: Tracking distracting self-generated thought in a cognitively demanding context.

    PubMed

    Huijser, Stefan; van Vugt, Marieke K; Taatgen, Niels A

    2018-02-01

    We investigated how self-referential processing (SRP) affected self-generated thought in a complex working memory (CWM) task, to test the predictions of a computational cognitive model. This model described self-generated thought as resulting from competition between task-related and distracting processes, and predicted that self-generated thought interferes with rehearsal, reducing memory performance. SRP was hypothesized to influence this goal competition by encouraging distracting self-generated thinking. We used a spatial CWM task to examine whether SRP instigated such thoughts, and employed eye-tracking to look for rehearsal interference in eye movements and for self-generated thinking in pupil size. The results showed that SRP was associated with lower performance and higher rates of self-generated thought. Self-generated thought was associated with less rehearsal, and we observed a smaller pupil size during mind wandering. We conclude that SRP can instigate self-generated thought and that goal competition provides a likely explanation for how self-generated thought arises in a demanding task.

  14. Integrated surface-subsurface model to investigate the role of groundwater in headwater catchment runoff generation: A minimalist approach to parameterisation

    NASA Astrophysics Data System (ADS)

    Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe

    2017-04-01

    Understanding the role of groundwater for runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully integrated surface-subsurface modelling has shown potential for increasing process understanding of runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude its use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background in empirical data. To set up the model we used minimal data that could be reasonably expected to exist for any experimental catchment. A novel aspect of our approach was in using simplified model parameterisation and including parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in the calibration objective functions. We identified sensitive parameters in all model domains, demonstrating that model calibration should be inclusive of parameters from these different domains. Incorporating groundwater data in the calibration objectives improved the model fit for groundwater levels, but simulations did not reproduce the remotely sensed evapotranspiration time series well, even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation, primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley bottom riparian peatlands, leading to overland flow even during dry periods. Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling alongside rigorous model calibration to better understand and visualise the role of groundwater in runoff generation even with limited datasets.

  15. Semantic attributes based texture generation

    NASA Astrophysics Data System (ADS)

    Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa

    2018-04-01

    Semantic attributes are commonly used for texture description. They can be used to describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features. Therefore, it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, much work has been done on texture synthesis and generation, most of it focused on example-based texture synthesis and procedural texture generation; semantic-attribute-based texture generation has received comparatively little attention. Gan et al. proposed a joint model for perception-driven texture generation. However, perceptual features are nonobjective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To convey more information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross-entropy loss in an auxiliary model to provide enough information to the generator. Consequently, the discriminator is released from the relatively intractable task of modeling the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare it with that of Gan et al. in texture generation experiments on the PTD and DTD datasets. All experimental results show that our model can generate textures from semantic attributes.
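
The auxiliary-classifier idea described here hinges on a sigmoid cross-entropy loss over binary semantic attributes. A minimal sketch, assuming a multi-label setup; the three attribute names and the logit values are illustrative, not from the paper:

```python
import math

def sigmoid_cross_entropy(logits, labels):
    """Numerically stable mean sigmoid cross-entropy, the per-attribute loss
    an auxiliary classifier can use to supervise multi-label semantic
    attributes. Equivalent to -y*log(sigmoid(x)) - (1-y)*log(1-sigmoid(x))."""
    total = 0.0
    for x, y in zip(logits, labels):
        # stable form: max(x, 0) - x*y + log(1 + exp(-|x|))
        total += max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))
    return total / len(logits)

# a hypothetical texture with three binary attributes (e.g. "banded",
# "dotted", "glossy"); labels are the ground-truth attribute annotations
loss = sigmoid_cross_entropy([2.0, -1.5, 0.0], [1.0, 0.0, 1.0])
print(loss)  # ≈ 0.3405
```

In a GAN training loop this loss would be added to the adversarial loss for both the discriminator (on real samples) and the generator (on generated samples), so the generator is pushed to respect the conditioning attributes.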

  16. WE-D-303-02: Applications of Volumetric Images Generated with a Respiratory Motion Model Based On An External Surrogate Signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurwitz, M; Williams, C; Dhou, S

    Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients’ chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206.
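
The correlation step described here (external marker trajectories to internal anatomy motion) can be sketched, under strong simplifying assumptions, as a one-dimensional linear regression; the actual model maps surrogate signals to full volumetric deformations. All data below are hypothetical:

```python
def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b correlating an external surrogate signal
    (chest-marker position) with an internal anatomy position, as a 1-D
    stand-in for a patient-specific respiratory motion model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# hypothetical training data: 10 breathing phases from the initial 4DCT window
marker = [0.0, 0.3, 0.8, 1.2, 1.5, 1.4, 1.0, 0.6, 0.2, 0.1]     # cm, external
tumor = [0.0, 0.45, 1.2, 1.8, 2.25, 2.1, 1.5, 0.9, 0.3, 0.15]   # cm, internal
a, b = fit_linear(marker, tumor)
# predict the internal position from a later, unseen surrogate sample
print(a * 1.1 + b)  # ≈ 1.65 cm
```

The value of such a model is exactly the scenario in the abstract: once fitted on the short 4DCT window, it extrapolates internal motion from the continuously measurable surrogate during the rest of the session.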

  17. Limits and Economic Effects of Distributed PV Generation in North and South Carolina

    NASA Astrophysics Data System (ADS)

    Holt, Kyra Moore

    The variability of renewable sources such as wind and solar, when integrated into the electrical system, must be compensated by traditional generation sources in order to maintain the constant balance of supply and demand required for grid stability. The goal of this study is to analyze the effects of increasingly large levels of solar photovoltaic (PV) penetration (in terms of a percentage of annual energy production) on a test grid with characteristics similar to the Duke Energy Carolinas (DEC) and Progress Energy Carolinas (PEC) regions of North and South Carolina. PV production is modeled as entering the system at the distribution level, and regional PV capacity is based on household density. A gridded hourly global horizontal irradiance (GHI) dataset is used to capture the variable nature of PV generation. A unit commitment model (UCM) is then used to determine the hourly dispatch of generators, based on generator parameters and costs, to meet demand. Annual modeled results for six different scenarios are evaluated to determine the technical, environmental, and economic effects of varying levels of distributed PV penetration on the system. This study finds that the main limiting factor for PV integration in the DEC and PEC balancing authority regions is the large generating capacity of base-load nuclear plants within the system; this threshold starts to affect system stability at integration levels of 5.7%. System errors, defined as imbalances caused by over- or under-generation with respect to demand, are identified in the model; however, the validity of these errors in a real-world context needs further examination because of the lack of high-frequency irradiance data and modeling limitations. Operational system costs decreased as expected with PV integration, although further research is needed to explore the impacts of the capital costs required to achieve the penetration levels found in this study. PV generation was found mainly to displace coal generation, creating a loss of revenue for generator owners. In all scenarios, CO2 emissions were reduced with PV integration. This reduction could be used to meet impending EPA state-specific CO2 emissions targets.
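
A unit commitment model is far richer than this, but the base-load limit the study identifies can be illustrated with a toy merit-order dispatch in which a must-run nuclear minimum collides with PV-suppressed midday net demand. The fleet numbers below are hypothetical:

```python
def merit_order_dispatch(demand, units):
    """Greedy merit-order dispatch: run each unit at its must-run minimum,
    then fill the remaining demand from cheapest to most expensive. A toy
    stand-in for a unit commitment model, which would also handle ramp
    rates, start-up costs, and reserve requirements."""
    dispatch = {u["name"]: u["min"] for u in units}
    remaining = demand - sum(dispatch.values())
    if remaining < 0:
        # over-generation: inflexible base load already exceeds net demand
        return dispatch, -remaining
    for u in sorted(units, key=lambda u: u["cost"]):
        take = min(remaining, u["max"] - dispatch[u["name"]])
        dispatch[u["name"]] += take
        remaining -= take
    return dispatch, 0

units = [  # hypothetical fleet (MW, $/MWh)
    {"name": "nuclear", "min": 5000, "max": 5000, "cost": 10},
    {"name": "coal",    "min": 0,    "max": 4000, "cost": 30},
    {"name": "gas",     "min": 0,    "max": 3000, "cost": 50},
]
# high PV output drives midday net demand below the nuclear must-run level
dispatch, spill = merit_order_dispatch(4500, units)
print(spill)  # 500 MW of over-generation: the base-load limit in miniature
```

Raising PV penetration lowers midday net demand further, so the over-generation term grows; this is the mechanism behind the 5.7% threshold reported above.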

  18. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning.

    PubMed

    Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru

    2018-04-17

    We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups for a total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy than the AI and SI groups. Intraoral scans may be more accurate than scans of conventional impression/plaster models.
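
The accuracy metric used here, pairwise linear distances between sphere centers compared against coordinate-measuring-machine references, is straightforward to compute. A sketch with three hypothetical sphere centers instead of the study's eight:

```python
import math
from itertools import combinations

def linear_distance_errors(reference, scanned):
    """Absolute error in each pairwise center-to-center distance of a scanned
    model versus coordinate-measuring-machine reference coordinates, the
    accuracy metric used in the study (simplified to hypothetical data)."""
    errors = []
    for i, j in combinations(range(len(reference)), 2):
        ref_d = math.dist(reference[i], reference[j])
        scan_d = math.dist(scanned[i], scanned[j])
        errors.append(abs(scan_d - ref_d))
    return errors

# hypothetical sphere centers in mm; the scanned set carries small distortions
reference = [(0.0, 0.0, 0.0), (30.0, 0.0, 0.0), (30.0, 40.0, 0.0)]
scanned = [(0.0, 0.0, 0.0), (30.02, 0.0, 0.0), (30.0, 40.03, 0.0)]
errs = linear_distance_errors(reference, scanned)
print(max(errs))  # worst-case linear-distance error, ≈ 0.03 mm
```

Using inter-center distances rather than raw coordinates makes the comparison independent of how the scanned model is positioned and oriented, so no registration step is needed.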

  19. Regional air quality impacts of increased natural gas production and use in Texas.

    PubMed

    Pacsi, Adam P; Alhajeri, Nawaf S; Zavala-Araiza, Daniel; Webster, Mort D; Allen, David T

    2013-04-02

    Natural gas use in electricity generation in Texas was estimated, for gas prices ranging from $1.89 to $7.74 per MMBTU, using an optimal power flow model. Hourly estimates of electricity generation for individual electricity generating units from the model were used to estimate spatially resolved hourly emissions from electricity generation. Emissions from natural gas production activities in the Barnett Shale region were also estimated, with emissions scaled up or down to match natural gas demand in electricity generation as prices changed. As natural gas use increased, emissions decreased from electricity generation and increased from natural gas production. Overall, NOx and SO2 emissions decreased, while VOC emissions increased as natural gas use increased. To assess the effects of these changes on ozone and particulate matter concentrations, the spatially and temporally resolved emissions were used in a month-long photochemical modeling episode. Over this episode, decreases in natural gas prices typical of those experienced from 2006 to 2012 led to net regional decreases in ozone (0.2-0.7 ppb) and fine particulate matter (PM) (0.1-0.7 μg/m³). Changes in PM were predominantly due to changes in regional PM sulfate formation, and changes in both regional PM and ozone formation were primarily due to decreases in emissions from electricity generation. Increases in emissions from increased natural gas production were offset by decreasing emissions from electricity generation in all the scenarios considered.

  20. Feedbacks Between Shallow Groundwater Dynamics and Surface Topography on Runoff Generation in Flat Fields

    NASA Astrophysics Data System (ADS)

    Appels, Willemijn M.; Bogaart, Patrick W.; van der Zee, Sjoerd E. A. T. M.

    2017-12-01

    In winter, saturation excess (SE) ponding is observed regularly in temperate lowland regions. Surface runoff dynamics are controlled by small topographical features that are unaccounted for in hydrological models. To better understand storage and routing effects of small-scale topography and their interaction with shallow groundwater under SE conditions, we developed a model of reduced complexity to investigate SE runoff generation, emphasizing feedbacks between shallow groundwater dynamics and mesotopography. The dynamic specific yield affected unsaturated zone water storage, causing rapid switches between negative and positive head and a flatter groundwater mound than predicted by analytical agrohydrological models. Accordingly, saturated areas were larger and local groundwater fluxes smaller than predicted, leading to surface runoff generation. Mesotopographic features routed water over larger distances, providing a feedback mechanism that amplified changes to the shape of the groundwater mound. This in turn enhanced runoff generation, but whether it also resulted in runoff events depended on the geometry and location of the depressions. Whereas conditions favorable to runoff generation may abound during winter, these feedbacks profoundly reduce the predictability of SE runoff: statistically identical rainfall series may result in completely different runoff generation. The model results indicate that waterlogged areas in any given rainfall event are larger than those predicted by current analytical groundwater models used for drainage design. This change in the groundwater mound extent has implications for crop growth and damage assessments.

  1. Expanding Stress Generation Theory: Test of a Transdiagnostic Model

    PubMed Central

    Conway, Christopher C.; Hammen, Constance; Brennan, Patricia A.

    2016-01-01

    Originally formulated to understand the recurrence of depressive disorders, the stress generation hypothesis has recently been applied in research on anxiety and externalizing disorders. Results from these investigations, in combination with findings of extensive comorbidity between depression and other mental disorders, suggest the need for an expansion of stress generation models to include the stress generating effects of transdiagnostic pathology as well as those of specific syndromes. Employing latent variable modeling techniques to parse the general and specific elements of commonly co-occurring Axis I syndromes, the current study examined the associations of transdiagnostic internalizing and externalizing dimensions with stressful life events over time. Analyses revealed that, after adjusting for the covariation between the dimensions, internalizing was a significant predictor of interpersonal dependent stress, whereas externalizing was a significant predictor of noninterpersonal dependent stress. Neither latent dimension was associated with the occurrence of independent, or fateful, stressful life events. At the syndrome level, once variance due to the internalizing factor was partialled out, unipolar depression contributed incrementally to the generation of interpersonal dependent stress. In contrast, the presence of panic disorder produced a “stress inhibition” effect, predicting reduced exposure to interpersonal dependent stress. Additionally, dysthymia was associated with an excess of noninterpersonal dependent stress. The latent variable modeling framework used here is discussed in terms of its potential as an integrative model for stress generation research. PMID:22428789

  2. Transient analysis of a superconducting AC generator using the compensated 2-D model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, Y.D.; Lee, H.W.; Lee, J.

    1999-09-01

    A superconducting AC generator (SCG) has many advantages over conventional generators, such as reduced width and size, improved efficiency, and better steady-state stability. This paper presents a 2-D transient analysis of an SCG using the finite element method (FEM). A compensated 2-D model, obtained by lengthening the airgap of the original 2-D model, is proposed for accurate and efficient transient analysis. The accuracy of the compensated 2-D model is verified by a small error of 6.4% compared to experimental data. The transient characteristics of the 30 kVA SCG model are investigated in detail, and the damper performance for various design parameters is examined.

  3. Deep Learning Fluid Mechanics

    NASA Astrophysics Data System (ADS)

    Barati Farimani, Amir; Gomes, Joseph; Pande, Vijay

    2017-11-01

    We have developed a new data-driven modeling paradigm for the rapid inference and solution of the constitutive equations of fluid mechanics using deep learning models. Using generative adversarial networks (GANs), we train models for the direct generation of solutions to steady state heat conduction and incompressible fluid flow without knowledge of the underlying governing equations. Rather than using artificial neural networks to approximate the solution of the constitutive equations, GANs can directly generate the solutions to these equations conditioned on an arbitrary set of boundary conditions. Both models predict temperature, velocity, and pressure fields with high test accuracy (>99.5%). This framework for inferring and generating the solutions of partial differential equations can be applied to any physical phenomenon and can be used to learn directly from experiments where the underlying physical model is complex or unknown. We have also shown that the framework can couple multiple physics simultaneously, making it amenable to multi-physics problems.

  4. Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.

    2005-01-01

    The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.

  5. Several examples where turbulence models fail in inlet flow field analysis

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    1993-01-01

    Computational uncertainties in turbulence modeling for three dimensional inlet flow fields include flows approaching separation, the strength of the secondary flow field, three dimensional predictions of vortex liftoff, and the influence of vortex-boundary layer interactions; computational uncertainties in vortex generator modeling include the representation of the generator vorticity field and the relationship between the generator and its vorticity field. The objectives of the inlet flow field studies presented in this document are to advance the understanding, prediction, and control of intake distortion and to study the basic interactions that influence this design problem.

  6. Continuum modeling of large lattice structures: Status and projections

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Mikulas, Martin M., Jr.

    1988-01-01

    The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.

  7. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we extended an existing tool infrastructure for formal modeling to export Java code so that we could use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We completed a translator from our source language RSML-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.
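
At its core, test-case generation with an explicit-state model checker amounts to exploring a state space and emitting the input sequence that reaches each state of interest. A toy sketch of that idea, using a hypothetical 3-bit counter rather than the RSML-e models from the project:

```python
from collections import deque

def generate_tests(initial, transitions, target):
    """Breadth-first exploration of an explicit state space; the input
    sequence reaching each target state is emitted as a test case. A toy
    sketch of the trap-property approach used with model checkers like JPF."""
    queue = deque([(initial, [])])
    seen = {initial}
    tests = []
    while queue:
        state, path = queue.popleft()
        if target(state):
            tests.append(path)  # shortest input sequence reaching this state
            continue
        for inp, nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return tests

def trans(s):
    # hypothetical system under test: a 3-bit counter with two inputs
    return [("inc", (s + 1) % 8), ("dbl", (2 * s) % 8)]

tests = generate_tests(0, trans, lambda s: s == 5)
print(tests)  # [['inc', 'inc', 'dbl', 'inc']]
```

In practice the "target" is a trap property encoding a coverage obligation (e.g. a transition or branch to exercise), and the model checker's counterexample trace plays the role of the path recorded here.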

  8. Generating Testable Questions in the Science Classroom: The BDC Model

    ERIC Educational Resources Information Center

    Tseng, ChingMei; Chen, Shu-Bi; Chang, Wen-Hua

    2015-01-01

    Guiding students to generate testable scientific questions is essential in the inquiry classroom, but it is not easy. The purpose of the BDC ("Big Idea, Divergent Thinking, and Convergent Thinking") instructional model is to scaffold students' inquiry learning. We illustrate the use of this model with an example lesson, designed…

  9. How to Create a 3D Model from Scanned Data in 5 Easy Steps

    NASA Technical Reports Server (NTRS)

    Hagen, Richard

    2017-01-01

    Additive manufacturing is a cost effective way to generate copies of damaged parts for demonstrations. Integrating scanned data of a damaged area into an existing model may be challenging. However, using the relatively inexpensive netfabb software, one can generate a "watertight" model that is easy to print.

  10. A Model for the Creation of Human-Generated Metadata within Communities

    ERIC Educational Resources Information Center

    Brasher, Andrew; McAndrew, Patrick

    2005-01-01

    This paper considers situations for which detailed metadata descriptions of learning resources are necessary, and focuses on human generation of such metadata. It describes a model which facilitates human production of good quality metadata by the development and use of structured vocabularies. Using examples, this model is applied to single and…

  11. Introductory Biology Students' Conceptual Models and Explanations of the Origin of Variation

    ERIC Educational Resources Information Center

    Bray Speth, Elena; Shaw, Neil; Momsen, Jennifer; Reinagel, Adam; Le, Paul; Taqieddin, Ranya; Long, Tammy

    2014-01-01

    Mutation is the key molecular mechanism generating phenotypic variation, which is the basis for evolution. In an introductory biology course, we used a model-based pedagogy that enabled students to integrate their understanding of genetics and evolution within multiple case studies. We used student-generated conceptual models to assess…

  12. Language Implications for Advertising in International Markets: A Model for Message Content and Message Execution.

    ERIC Educational Resources Information Center

    Beard, John; Yaprak, Attila

    A content analysis model for assessing advertising themes and messages generated primarily for United States markets to overcome barriers in the cultural environment of international markets was developed and tested. The model is based on three primary categories for generating, evaluating, and executing advertisements: rational, emotional, and…

  13. Monte Carlo simulation models of breeding-population advancement.

    Treesearch

    J.N. King; G.R. Johnson

    1993-01-01

    Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...

  14. An epidemiological modeling and data integration framework.

    PubMed

    Pfeifer, B; Wurz, M; Hanser, F; Seger, M; Netzer, M; Osl, M; Modre-Osprian, R; Schreier, G; Baumgartner, C

    2010-01-01

    In this work, a cellular automaton software package for simulating different infectious diseases, storing the simulation results in a data warehouse system, and analyzing the obtained results to generate prediction models as well as contingency plans is proposed. The Brisbane H3N2 flu virus, which spread during the 2009 winter season, was used for simulation in the federal state of Tyrol, Austria. The simulation-modeling framework consists of an underlying cellular automaton, parameterized by known disease parameters; geographical and demographical conditions are included when simulating the spreading. The data generated by simulation are stored in the back room of the data warehouse using the Talend Open Studio software package, and subsequent statistical and data mining tasks are performed using a tool termed Knowledge Discovery in Database Designer (KD3). The obtained simulation results were used to generate prediction models for all nine federal states of Austria. The proposed framework provides a powerful and easy to handle interface for parameterizing and simulating different infectious diseases in order to generate prediction models and improve contingency plans for future events.
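
The underlying cellular automaton can be sketched in a few lines: a grid of susceptible/infected/recovered cells with a per-neighbour infection probability. The grid size, infection probability, and one-step recovery rule below are illustrative, not the parameterisation used in the framework:

```python
import random

def step(grid, p=0.3):
    """One synchronous update of a toy SIR cellular automaton: an S cell
    becomes I with probability p per infected 4-neighbour; an I cell
    recovers to R after one step. A minimal sketch of the kind of automaton
    such a framework parameterises with real disease and population data."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == "S":
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == "I":
                        if random.random() < p:
                            new[i][j] = "I"
                            break
            elif grid[i][j] == "I":
                new[i][j] = "R"
    return new

random.seed(1)
grid = [["S"] * 20 for _ in range(20)]
grid[10][10] = "I"  # index case at the centre of the grid
for _ in range(15):
    grid = step(grid)
recovered = sum(row.count("R") for row in grid)
print(recovered)
```

Running many seeded realisations of such an automaton yields the simulated outbreak curves that the framework stores in the warehouse for downstream prediction modeling.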

  15. Some unexamined aspects of analysis of covariance in pretest-posttest studies.

    PubMed

    Ganju, Jitendra

    2004-09-01

    The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then it implies that the change from baseline model as a data-generating mechanism is inappropriate. An alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
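
Point (a) can be checked by simulation: if posttest = pretest + arm effect + error (the change-from-baseline generating mechanism), the within-arm regression slope of posttest on pretest is 1 in both arms, so unequal slopes cannot arise. A sketch with arbitrary simulation parameters:

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(42)
n = 2000  # subjects per arm (arbitrary)
# change-from-baseline generating mechanism: post = pre + arm effect + noise
pre_c = [random.gauss(0, 1) for _ in range(n)]
post_c = [x + random.gauss(0, 0.5) for x in pre_c]         # control arm
pre_t = [random.gauss(0, 1) for _ in range(n)]
post_t = [x - 1.0 + random.gauss(0, 0.5) for x in pre_t]   # treated, effect -1

b_c, b_t = slope(pre_c, post_c), slope(pre_t, post_t)
print(b_c, b_t)  # both near 1: under this mechanism the slopes cannot differ
```

Observing clearly unequal fitted slopes in real randomized data therefore falsifies the change-from-baseline mechanism, which is exactly the article's point.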

  16. Modeling discrete and rhythmic movements through motor primitives: a review.

    PubMed

    Degallier, Sarah; Ijspeert, Auke

    2010-10-01

    Rhythmic and discrete movements are frequently considered separately in motor control, probably because different techniques are commonly used to study and model them. Yet the increasing interest in finding a comprehensive model for movement generation requires bridging the different perspectives arising from the study of those two types of movements. In this article, we consider discrete and rhythmic movements within the framework of motor primitives, i.e., of modular generation of movements. In this way we hope to gain an insight into the functional relationships between discrete and rhythmic movements and thus into a suitable representation for both of them. Within this framework we can define four possible categories of modeling for discrete and rhythmic movements depending on the required command signals and on the spinal processes involved in the generation of the movements. These categories are first discussed in terms of biological concepts such as force fields and central pattern generators and then illustrated by several mathematical models based on dynamical system theory. A discussion on the plausibility of these models concludes the work.
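
The two movement classes map naturally onto two canonical dynamical systems: a point attractor for discrete movements and a limit cycle for rhythmic ones. The sketch below uses a critically damped spring and a Hopf oscillator with Euler integration; the gains and step sizes are arbitrary choices, not taken from the models reviewed:

```python
import math

def discrete_primitive(goal, x=0.0, v=0.0, k=25.0, dt=0.01, steps=400):
    """Point-attractor primitive: a critically damped spring converging to
    `goal`, a common model for discrete reaching movements."""
    d = 2.0 * math.sqrt(k)  # critical damping: no overshoot
    for _ in range(steps):
        a = k * (goal - x) - d * v
        v += a * dt
        x += v * dt
    return x

def rhythmic_primitive(mu=1.0, omega=2 * math.pi, dt=0.0005, steps=10000):
    """Limit-cycle (Hopf) primitive: trajectories converge to a circle of
    radius sqrt(mu), a common model for rhythmic movement generation.
    Plain Euler integration slightly inflates the radius."""
    x, y = 0.1, 0.0
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
    return math.hypot(x, y)

x_end = discrete_primitive(1.0)   # converges to the goal position
r_end = rhythmic_primitive()      # radius settles near sqrt(mu) = 1
print(x_end, r_end)
```

The four modeling categories discussed in the article can then be seen as different ways of combining, gating, or superimposing these two kinds of primitives.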

  17. Two general models that generate long range correlation

    NASA Astrophysics Data System (ADS)

    Gan, Xiaocong; Han, Zhangang

    2012-06-01

    In this paper we study two models that generate sequences with LRC (long range correlation). For the IFT (inverse Fourier transform) model, our conclusion is that the low frequency part leads to LRC, while the high frequency part tends to eliminate it. Therefore, a typical method to generate a sequence with LRC is multiplying the spectrum of a white noise sequence by a decaying function. A special case is analyzed: the linear combination of a smooth curve and a white noise sequence, in which the DFA plot consists of two line segments. For the patch model, our conclusion is that long subsequences lead to LRC, while short subsequences tend to eliminate it. Therefore, we can generate a sequence with LRC by using a fat-tailed PDF (probability distribution function) of the length of the subsequences. A special case is also analyzed: if a patch model with long subsequences is mixed with a white noise sequence, the DFA plot will consist of two line segments. We have checked known models and actual data, and found they are all consistent with this study.
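
    The IFT recipe described here, multiplying a white-noise spectrum by a decaying function, can be sketched in a few lines. This is illustrative only: the spectral exponent beta and the sequence length are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 2**14, 0.8                   # S(f) ~ f**(-beta); 0 < beta < 1 gives LRC
spec = np.fft.rfft(rng.normal(size=n))  # spectrum of a white noise sequence
f = np.fft.rfftfreq(n)
f[0] = f[1]                            # avoid dividing by zero at the DC bin
shaped = spec * f ** (-beta / 2)       # multiply the spectrum by a decaying function
x = np.fft.irfft(shaped, n)            # sequence with long range correlation

power = np.abs(shaped) ** 2
k = len(power) // 4
print(power[1:k].mean() > power[-k:].mean())   # low-frequency power dominates: True
```

    The same construction with beta = 0 reproduces the original white noise, making explicit the paper's point that it is the amplified low-frequency part that carries the correlation.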

  18. Modelling and simulation of wood chip combustion in a hot air generator system.

    PubMed

    Rajika, J K A T; Narayana, Mahinsasa

    2016-01-01

    This study focuses on modelling and simulation of a horizontal moving bed/grate wood chip combustor. A standalone finite volume based 2-D steady state Euler-Euler Computational Fluid Dynamics (CFD) model was developed for packed bed combustion. Packed bed combustion of a medium scale biomass combustor, which was retrofitted from wood log to wood chip feeding for tea drying in Sri Lanka, was evaluated by a CFD simulation study. The model was validated against the experimental results of an industrial biomass combustor for a hot air generation system in the tea industry. The open-source CFD tool OpenFOAM was used to generate the CFD model source code for the packed bed combustion, which was simulated along with an available solver for freeboard region modelling in the CFD tool. The height of the packed bed is about 20 cm, and biomass particles are assumed to be spherical in shape with a constant surface area to volume ratio. Temperature measurements of the combustor agree well with simulation results, while gas phase compositions show discrepancies. Combustion efficiency of the validated hot air generator is around 52.2%.

  19. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.

  20. A coupled stochastic rainfall-evapotranspiration model for hydrological impact analysis

    NASA Astrophysics Data System (ADS)

    Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.

    2018-02-01

    A hydrological impact analysis concerns the study of the consequences of certain scenarios on one or more variables or fluxes in the hydrological cycle. In such an exercise, discharge is often considered, as floods originating from extremely high discharges often cause damage. Investigating the impact of extreme discharges generally requires long time series of precipitation and evapotranspiration to be used to force a rainfall-runoff model. However, such kinds of data may not be available and one should resort to stochastically generated time series, even though the impact of using such data on the overall discharge, and especially on the extreme discharge events, is not well studied. In this paper, stochastically generated rainfall and corresponding evapotranspiration time series, generated by means of vine copulas, are used to force a simple conceptual hydrological model. The results obtained are comparable to the modelled discharge using observed forcing data. Yet, uncertainties in the modelled discharge increase with an increasing number of stochastically generated time series used. Notwithstanding this finding, it can be concluded that using a coupled stochastic rainfall-evapotranspiration model has great potential for hydrological impact analysis.

  1. Development of Automated Procedures to Generate Reference Building Models for ASHRAE Standard 90.1 and India’s Building Energy Code and Implementation in OpenStudio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Andrew; Haves, Philip; Jegi, Subhash

    This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.

  2. Creating historical range of variation (HRV) time series using landscape modeling: Overview and issues [Chapter 8

    Treesearch

    Robert E. Keane

    2012-01-01

    Simulation modeling can be a powerful tool for generating information about historical range of variation (HRV) in landscape conditions. In this chapter, I will discuss several aspects of the use of simulation modeling to generate landscape HRV data, including (1) the advantages and disadvantages of using simulation, (2) a brief review of possible landscape models, and...

  3. A Generational Approach to Understanding Students

    ERIC Educational Resources Information Center

    Coomes, Michael D.; DeBard, Robert

    2004-01-01

    This chapter establishes the conceptual framework for understanding the Millennial generation by presenting a theoretical model of generational succession that demonstrates the value of studying how the values of one generation interact with and are influenced by others.

  4. Workshop on Grid Generation and Related Areas

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A collection of papers given at the Workshop on Grid Generation and Related Areas is presented. The purpose of this workshop was to assemble engineers and scientists who are currently working on grid generation for computational fluid dynamics (CFD), surface modeling, and related areas. The objectives were to provide an informal forum on grid generation and related topics, to assess user experience, to identify needs, and to help promote synergy among engineers and scientists working in this area. The workshop consisted of four sessions representative of grid generation and surface modeling research and application within NASA LeRC. Each session contained presentations and an open discussion period.

  5. An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Gramatová, Elena

    2015-07-01

    The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. The quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.

  6. Template-free modeling by LEE and LEER in CASP11.

    PubMed

    Joung, InSuk; Lee, Sun Young; Cheng, Qianyi; Kim, Jong Yun; Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

    For the template-free modeling of human targets of CASP11, we utilized two of our modeling protocols, LEE and LEER. The LEE protocol took CASP11-released server models as the input and used some of them as templates for 3D (three-dimensional) modeling. The template selection procedure was based on the clustering of the server models aided by a community detection method on a server-model network. Restraining energy terms generated from the selected templates together with physical and statistical energy terms were used to build 3D models. Side-chains of the 3D models were rebuilt using a target-specific consensus side-chain library along with the SCWRL4 rotamer library, which completed the LEE protocol. The first success factor of the LEE protocol was efficient server model screening. The average backbone accuracy of selected server models was similar to that of the top 30% of server models. The second factor was that a proper energy function along with our optimization method guided us, so that we successfully generated better quality models than the input template models. In 10 out of 24 cases, better backbone structures than the best of the input template structures were generated. LEE models were further refined by performing restrained molecular dynamics simulations to generate LEER models. CASP11 results indicate that LEE models were better than the average template models in terms of both backbone structures and side-chain orientations. LEER models were of improved physical realism and stereo-chemistry compared to LEE models, and they were comparable to LEE models in backbone accuracy. Proteins 2016; 84(Suppl 1):118-130. © 2015 Wiley Periodicals, Inc.

  7. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.

    PubMed

    Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M

    2016-02-01

    To generate a global reference for caesarean section (CS) rates at health facilities. Cross-sectional study. Health facilities from 43 countries. Thirty-eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10,045,875 women giving birth from 43 countries for model testing. We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Areas under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customised benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
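
    The core comparison, an observed CS rate against the case-mix-adjusted rate the model expects, reduces to averaging per-woman predicted probabilities over a facility's population. A minimal sketch with hypothetical numbers (the real C-Model's logistic regressions are not reproduced here):

```python
def expected_cs_rate(predicted_probs):
    """Case-mix-adjusted expected caesarean rate for a facility:
    the mean of the model's per-woman CS probabilities."""
    return sum(predicted_probs) / len(predicted_probs)

# hypothetical per-woman probabilities from a fitted C-Model-style logistic model
probs = [0.10, 0.35, 0.22, 0.05, 0.48]
observed = 2 / 5                        # 2 caesareans among these 5 births
print(expected_cs_rate(probs))          # 0.24: the observed 0.40 exceeds expectation
```

    A facility whose observed rate sits well above its expected rate is flagged as a candidate for overuse, which is the benchmarking use the abstract describes.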

  8. Multi-Agent simulation of generation capacity expansion decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botterud, A.; Mahalik, M.; Conzelmann, G.

    2008-01-01

    In this paper, we use a multi-agent simulation model, EMCAS, to analyze generation expansion in the Iberian electricity market. The expansion model simulates generation investment decisions of decentralized generating companies (GenCos) interacting in a complex, multidimensional environment. A probabilistic dispatch algorithm calculates prices and profits for new candidate units in different future states of the system. Uncertainties in future load, hydropower conditions, and competitors' actions are represented in a scenario tree, and decision analysis is used to identify the optimal expansion decision for each individual GenCo. We run the model using detailed data for the Iberian market. In a scenario analysis, we look at the impact of market design variables, such as the energy price cap and carbon emission prices. We also analyze how market concentration and GenCos' risk preferences influence the timing and choice of new generating capacity.

  9. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  10. Virulo

    EPA Science Inventory

    Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree o...

  11. Natural language generation of surgical procedures.

    PubMed

    Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R

    1999-01-01

    A number of compositional Medical Concept Representation systems are being developed. Although these provide for a detailed conceptual representation of the underlying information, they have to be translated back to natural language for use by end-users and applications. The GALEN programme has been developing one such representation, and we report here on a tool developed to generate natural language phrases from the GALEN conceptual representations. This tool can be adapted to different source modelling schemes and to different destination languages or sublanguages of a domain. It is based on a multilingual approach to natural language generation, realised through a clean separation of the domain model from the linguistic model and their link by well-defined structures. Specific knowledge structures and operations have been developed for bridging between the modelling 'style' of the conceptual representation and natural language. Using the example of the scheme developed for modelling surgical operative procedures within the GALEN-IN-USE project, we show how the generator is adapted to such a scheme. The basic characteristics of the surgical procedures scheme are presented together with the basic principles of the generation tool. Using worked examples, we discuss the transformation operations which change the initial source representation into a form which can more directly be translated to a given natural language. In particular, the linguistic knowledge which has to be introduced, such as definitions of concepts and relationships, is described. We explain the overall generator strategy and how particular transformation operations are triggered by language-dependent and conceptual parameters. Results are shown for generated French phrases corresponding to surgical procedures from the urology domain.

  12. Suction forces generated by passive bile bag drainage on a model of post-subdural hematoma evacuation.

    PubMed

    Tenny, Steven O; Thorell, William E

    2018-05-05

    Passive drainage systems are commonly used after subdural hematoma evacuation but there is a dearth of published data regarding the suction forces created. We set out to quantify the suction forces generated by a passive drainage system. We created a model of passive drainage after subdural hematoma evacuation. We measured the maximum suction force generated with a bile bag drain for both empty drain tubing and fluid-filled drain tube causing a siphoning effect. We took measurements at varying heights of the bile bag to analyze if bile bag height changed suction forces generated. An empty bile bag with no fluid in the drainage tube connected to a rigid, fluid-filled model creates minimal suction force of 0.9 mmHg (95% CI 0.64-1.16 mmHg). When fluid fills the drain tubing, a siphoning effect is created and can generate suction forces ranging from 18.7 to 30.6 mmHg depending on the relative position of the bile bag and filled amount of the bile bag. The suction forces generated are statistically different if the bile bag is 50 cm below, level with or 50 cm above the experimental model. Passive bile bag drainage does not generate significant suction on a fluid-filled rigid model if the drain tubing is empty. If fluid fills the drain tubing then siphoning occurs and can increase the suction force of a passive bile bag drainage system to levels comparable to partially filled Jackson-Pratt bulb drainage.
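
    The siphoning effect reported here is hydrostatic: once the tubing is fluid-filled, the suction scales with the height of the continuous fluid column, which is why bag position changes the measured force. A rough sketch, assuming a water-like drainage fluid (real drains fall short of the ideal because the column is rarely continuous and the bag partly fills):

```python
RHO = 1000.0                 # kg/m^3, assumed water-like drainage fluid
G = 9.81                     # m/s^2, gravitational acceleration
PA_PER_MMHG = 133.322

def siphon_suction_mmhg(column_height_m):
    """Ideal hydrostatic suction (mmHg) from a continuous fluid column
    dropping column_height_m below the drainage site."""
    return RHO * G * column_height_m / PA_PER_MMHG

print(round(siphon_suction_mmhg(0.5), 1))   # 0.5 m drop -> about 36.8 mmHg
```

    The ideal value for a 50 cm drop sits above the measured 18.7 to 30.6 mmHg range, consistent with partial filling of the column and bag in the experimental model.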

  13. GENSURF: A mesh generator for 3D finite element analysis of surface and corner cracks in finite thickness plates subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.

    1992-01-01

    A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.

  14. A generative tool for building health applications driven by ISO 13606 archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás

    2012-10-01

    The use of Electronic Healthcare Records (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of the information and knowledge. However, the impact of such standards is biased by the limited availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been generically designed, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. Such good properties are based on the combination of standards for the representation of generic user interfaces and model-driven engineering techniques.

  15. U.S. Patent Pending, Cyberspace Security System for Complex Systems, U.S. Patent Application No.: 14/134,949

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    A computer implemented method monetizes the security of a cyber-system in terms of the losses each stakeholder may expect if a security breakdown occurs. A non-transitory medium stores instructions for generating a stakes structure that includes the costs that each stakeholder of a system would incur if the system failed to meet security requirements, and for generating a requirement structure that includes the probabilities of requirements failing when computer components fail. The system generates a vulnerability model that includes probabilities of a component failing given threats materializing and generates a perpetrator model that includes probabilities of threats materializing. The system generates a dot product of the stakes structure, the requirement structure, the vulnerability model and the perpetrator model. The system can further be used to compare, contrast and evaluate alternative courses of action best suited for the stakeholders and their requirements.
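
    The chained "dot product" of the four structures can be illustrated with toy matrices; every number below is hypothetical, and the real structures are far larger. Each stakeholder's expected loss chains stakes through requirement-failure, component-vulnerability, and threat probabilities:

```python
import numpy as np

# rows: stakeholders; cols: security requirements ($ lost if requirement fails)
stakes = np.array([[100.0, 20.0],
                   [ 50.0, 80.0]])
# P(requirement j fails | component k fails)
req_fail = np.array([[0.20, 0.10],
                     [0.05, 0.30]])
# P(component k fails | threat m materializes)
vuln = np.array([[0.50, 0.10, 0.00],
                 [0.10, 0.40, 0.20]])
# P(threat m materializes) over the period of interest
threat = np.array([0.01, 0.05, 0.02])

expected_loss = stakes @ req_fail @ vuln @ threat
print(expected_loss)     # expected $ loss per stakeholder: [0.61, 0.865]
```

    Comparing such vectors across alternative mitigations is one way to "compare, contrast and evaluate alternative courses of action" as the record describes.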

  16. Heating of solid targets with laser pulses

    NASA Technical Reports Server (NTRS)

    Bechtel, J. H.

    1975-01-01

    Analytical and numerical solutions to the heat-conduction equation are obtained for the heating of absorbing media with pulsed lasers. The spatial and temporal form of the temperature is determined using several different models of the laser irradiance. Both surface and volume generation of heat are discussed. It is found that if the depth of thermal diffusion for the laser-pulse duration is large compared to the optical-attenuation depth, the surface- and volume-generation models give nearly identical results. However, if the thermal-diffusion depth for the laser-pulse duration is comparable to or less than the optical-attenuation depth, the surface-generation model can give significantly different results compared to the volume-generation model. Specific numerical results are given for a tungsten target irradiated by pulses of different temporal durations and the implications of the results are discussed with respect to the heating of metals by picosecond laser pulses.
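
    The regime criterion in this abstract, thermal diffusion depth versus optical attenuation depth, is easy to evaluate for a given pulse. A sketch with tungsten-like, order-of-magnitude values (both the diffusivity and the skin depth are assumptions, not figures from the paper):

```python
import math

def diffusion_depth(kappa_m2_s, pulse_s):
    """Thermal diffusion depth sqrt(kappa * t) for a pulse of duration t."""
    return math.sqrt(kappa_m2_s * pulse_s)

kappa = 6.9e-5     # m^2/s, tungsten-like thermal diffusivity (assumed)
skin = 2.0e-8      # m, optical attenuation depth (assumed)

for pulse in (1e-3, 1e-8, 1e-12):   # millisecond, 10 ns, and picosecond pulses
    d = diffusion_depth(kappa, pulse)
    model = "surface generation adequate" if d > 10 * skin else "volume generation needed"
    print(f"{pulse:.0e} s pulse: diffusion depth {d:.1e} m -> {model}")
```

    For picosecond pulses the diffusion depth drops below the attenuation depth, which is the case where the abstract says the surface- and volume-generation models diverge.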

  17. NASA Workshop on future directions in surface modeling and grid generation

    NASA Technical Reports Server (NTRS)

    Vandalsem, W. R.; Smith, R. E.; Choo, Y. K.; Birckelbaw, L. D.; Vogel, A. A.

    1992-01-01

    Given here is a summary of the paper sessions and panel discussions of the NASA Workshop on Future Directions in Surface Modeling and Grid Generation held at NASA Ames Research Center, Moffett Field, California, December 5-7, 1989. The purpose was to assess U.S. capabilities in surface modeling and grid generation and take steps to improve the focus and pace of these disciplines within NASA. The organization of the workshop centered around overviews from NASA centers and expert presentations from U.S. corporations and universities. Small discussion groups were held and summarized by group leaders. Brief overviews and a panel discussion by representatives from the DoD were held, and a NASA-only session concluded the meeting. In the NASA Program Planning Session summary there are five recommended steps for NASA to take to improve the development and application of surface modeling and grid generation.

  18. Sensitivity of Regional Hydropower Generation to the Projected Changes in Future Watershed Hydrology

    NASA Astrophysics Data System (ADS)

    Kao, S. C.; Naz, B. S.; Gangrade, S.

    2015-12-01

    Hydropower is a key contributor to the renewable energy portfolio due to its established development history and the diverse benefits it provides to the electric power systems. With the projected change in the future watershed hydrology, including shift of snowmelt timing, increasing occurrence of extreme precipitation, and change in drought frequencies, there is a need to investigate how the regional hydropower generation may change correspondingly. To evaluate the sensitivity of watershed storage and hydropower generation to future climate change, a lumped Watershed Runoff-Energy Storage (WRES) model is developed to simulate the annual and seasonal hydropower generation at various hydropower areas in the United States. For each hydropower study area, the WRES model uses the monthly precipitation and naturalized (unregulated) runoff as inputs to perform a runoff mass balance calculation for the total monthly runoff storage in all reservoirs and retention facilities in the watershed, and simulates the monthly regulated runoff release and hydropower generation through the system. The WRES model is developed and calibrated using the historic (1980-2009) monthly precipitation, runoff, and generation data, and then driven by a large set of dynamically- and statistically-downscaled Coupled Model Intercomparison Project Phase 5 climate projections to simulate the change of watershed storage and hydropower generation under different future climate scenarios. The results among different hydropower regions, storage capacities, emission scenarios, and timescales are compared and discussed in this study.

  19. Forecasting municipal solid waste generation using artificial intelligence modelling approaches.

    PubMed

    Abbasi, Maryam; El Hanandeh, Ali

    2016-10-01

    Municipal solid waste (MSW) management is a major concern to local governments to protect human health, the environment and to preserve natural resources. The design and operation of an effective MSW management system requires accurate estimation of future waste generation quantities. The main objective of this study was to develop a model for accurate forecasting of MSW generation that helps waste related organizations to better design and operate effective MSW management systems. Four intelligent system algorithms including support vector machine (SVM), adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN) and k-nearest neighbours (kNN) were tested for their ability to predict monthly waste generation in the Logan City Council region in Queensland, Australia. Results showed artificial intelligence models have good prediction performance and could be successfully applied to establish municipal solid waste forecasting models. Using machine learning algorithms can reliably predict monthly MSW generation by training with waste generation time series. In addition, results suggest that ANFIS system produced the most accurate forecasts of the peaks while kNN was successful in predicting the monthly averages of waste quantities. Based on the results, the total annual MSW generated in Logan City will reach 9.4×10^7 kg by 2020 while the peak monthly waste will reach 9.37×10^6 kg. Copyright © 2016 Elsevier Ltd. All rights reserved.
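
    Of the four algorithms, kNN is the simplest to sketch: predict next month's waste by averaging what followed the k historical windows most similar to the most recent one. This is illustrative only; the window length, k, and the toy series are hypothetical, not the study's configuration.

```python
import numpy as np

def knn_forecast(series, k=3, window=12):
    """Forecast the next value of a monthly series: find the k historical
    windows closest to the most recent one and average their successors."""
    s = np.asarray(series, dtype=float)
    latest = s[-window:]
    candidates = np.array([s[i:i + window] for i in range(len(s) - window)])
    successors = s[window:]                         # value following each window
    dists = np.linalg.norm(candidates - latest, axis=1)
    return successors[np.argsort(dists)[:k]].mean()

# toy seasonal series: the same 12-month pattern repeated for five years
pattern = list(range(1, 13))
print(knn_forecast(pattern * 5))   # 1.0 -- the pattern's next seasonal value
```

    On a purely periodic series the nearest neighbours are exact matches, so the forecast recovers the seasonal cycle, which mirrors the finding that kNN tracked monthly averages well.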

  20. Multilingual natural language generation as part of a medical terminology server.

    PubMed

    Wagner, J C; Solomon, W D; Michel, P A; Juge, C; Baud, R H; Rector, A L; Scherrer, J R

    1995-01-01

    Re-usable, sharable and therefore language-independent concept models are of increasing importance in the medical domain. The GALEN project (Generalized Architecture for Languages Encyclopedias and Nomenclatures in Medicine) aims at developing language-independent concept representation systems as the foundations for the next generation of multilingual coding systems. For use within clinical applications, the content of the model has to be mapped to natural language. A so-called Multilingual Information Module (MM) establishes the link between the language-independent concept model and different natural languages. This text generation software must be versatile enough to cope at the same time with different languages and with different parts of a compositional model. It has to meet, on the one hand, the properties of the language as used in the medical domain and, on the other hand, the specific characteristics of the underlying model and its representation formalism. We propose a semantic-oriented approach to natural language generation that is based on linguistic annotations to a concept model. This approach is realized as an integral part of a Terminology Server, built around the concept model and offering different terminological services for clinical applications.

  1. Spectral analysis of a two-species competition model: Determining the effects of extreme conditions on the color of noise generated from simulated time series

    NASA Astrophysics Data System (ADS)

    Golinski, M. R.

    2006-07-01

    Ecologists have observed that environmental noise affects population variance in the logistic equation for one-species growth. Interactions between deterministic and stochastic dynamics in a one-dimensional system result in increased variance in species population density over time. Since natural populations do not live in isolation, the present paper simulates a discrete-time two-species competition model with environmental noise to determine the type of colored population noise generated by extreme conditions in the long-term population dynamics of competing populations. Discrete Fourier analysis is applied to the simulation results, and the calculated Hurst exponent (H) is used to determine how the color of population noise for the two species corresponds to extreme conditions in population dynamics. To interpret the biological meaning of the color of noise generated by the two-species model, the paper determines the color of noise generated by three reference models: (1) a two-dimensional discrete-time white noise model (0 ⩽ H < 1/2); (2) a two-dimensional fractional Brownian motion model (H = 1/2); and (3) a two-dimensional discrete-time model with noise for unbounded growth of two uncoupled species (1/2 < H ⩽ 1).
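
    Estimating a scaling exponent of this kind is commonly done with detrended fluctuation analysis (DFA), the diagnostic mentioned in several of the records above. A minimal sketch, with arbitrary scales and seed, not the paper's own procedure; for a white noise input the fitted exponent should come out near 0.5:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Minimal DFA: slope of log-fluctuation versus log-scale."""
    y = np.cumsum(x - np.mean(x))                   # integrated profile
    flucts = []
    for n in scales:
        m = len(y) // n
        segments = y[:m * n].reshape(m, n)
        t = np.arange(n)
        sq = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)            # linear detrend per segment
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(2)
alpha = dfa_exponent(rng.normal(size=4096))
print(alpha)    # white noise: roughly 0.5
```

    Integrating the white noise first (a Brownian walk) pushes the exponent well above 1, the long-memory end of the paper's classification.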

  2. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.

  3. Rapid Crop Cover Mapping for the Conterminous United States.

    PubMed

    Dahal, Devendra; Wylie, Bruce; Howard, Danny

    2018-06-05

    Timely crop cover maps with sufficient resolution are important components to various environmental planning and research applications. Through the modification and use of a previously developed crop classification model (CCM), which was originally developed to generate historical annual crop cover maps, we hypothesized that such crop cover maps could be generated rapidly during the growing season. Through a process of incrementally removing weekly and monthly independent variables from the CCM and implementing a 'two model mapping' approach, we found it viable to generate conterminous United States-wide rapid crop cover maps at a resolution of 250 m for the current year by the month of September. In this approach, we divided the CCM model into one 'crop type model' to handle the classification of nine specific crops and a second, binary model to classify the presence or absence of 'other' crops. Under the two model mapping approach, the training errors were 0.8% and 1.5% for the crop type and binary model, respectively, while test errors were 5.5% and 6.4%, respectively. With spatial mapping accuracies for annual maps reaching upwards of 70%, this approach demonstrated a strong potential for generating rapid crop cover maps by the 1st of September.
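
    The 'two model mapping' step can be illustrated with a toy per-pixel combination rule. The override logic, the threshold, and the shortened crop list are assumptions made for this sketch, not the paper's actual procedure:

```python
# Hypothetical per-pixel combination of the two models: a multi-class
# model scores specific crops, a separate binary model scores the
# presence of any 'other' crop, and the binary call is used only when
# no specific crop is scored confidently.

CROPS = ["corn", "soybeans", "wheat"]  # shortened from nine for the sketch

def combine(crop_scores, other_prob, min_crop_score=0.5):
    """Pick a final label from crop-type scores and an 'other crop' probability."""
    best = max(crop_scores, key=crop_scores.get)
    if crop_scores[best] >= min_crop_score:
        return best
    return "other" if other_prob >= 0.5 else best

pixel = {"corn": 0.7, "soybeans": 0.2, "wheat": 0.1}
print(combine(pixel, other_prob=0.9))  # corn: a specific crop is confident
weak = {"corn": 0.3, "soybeans": 0.3, "wheat": 0.4}
print(combine(weak, other_prob=0.8))   # other: no specific crop is confident
```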

  4. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
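
    The MDN head is what makes multi-modal trajectory sampling possible: at each step the network outputs mixture weights, means, and standard deviations, and new points are drawn from that mixture. A minimal sampling sketch, with made-up parameters and without the BLSTM that would predict them:

```python
# Draw samples from the 1-D Gaussian mixture an MDN head would emit.
# The weights/means/sigmas below are illustrative, not model outputs.
import random

random.seed(0)

def sample_mixture(weights, means, sigmas):
    """Draw one value from a 1-D Gaussian mixture."""
    u, acc = random.random(), 0.0
    for w, m, s in zip(weights, means, sigmas):
        acc += w
        if u <= acc:
            return random.gauss(m, s)
    return random.gauss(means[-1], sigmas[-1])

# A bimodal predictive distribution for, say, the ball's height at one step.
weights, means, sigmas = [0.6, 0.4], [2.5, 4.0], [0.1, 0.2]
samples = [sample_mixture(weights, means, sigmas) for _ in range(1000)]
print(min(samples), max(samples))
```

    Because the output is a full distribution rather than a point estimate, repeated sampling yields distinct but plausible trajectories, which is how the generation experiment produces multiple candidate shots.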

  5. A geomorphic approach to 100-year floodplain mapping for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth

    2018-06-01

    Floodplain mapping using hydrodynamic models is difficult in data scarce regions. Additionally, using hydrodynamic models to map floodplain over large stream network can be computationally challenging. Some of these limitations of floodplain mapping using hydrodynamic modeling can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood from the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained by using the 100-year Flood Insurance Rated Maps (FIRM) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS model generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps, generated by proposed model, match well with both FEMA and HEC-RAS generated maps. On average, the error of predicted flood extents is around 14% across the CONUS. 
The high accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost effective delineation of 100-year floodplains for the CONUS.
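
    The PTBC step can be illustrated with a toy calculation: each watershed class carries a water-surface height threshold, and a cell's floodplain probability is the total probability of the classes whose threshold the cell's height above the stream falls under. The class names and thresholds here are invented for illustration, not the paper's values:

```python
# Hypothetical per-cell probabilistic threshold binary classifier (PTBC).

CLASS_THRESHOLDS_M = {"low": 3.0, "medium": 6.0, "high": 12.0}

def floodplain_probability(height_above_stream_m, class_probs):
    """Probability that a cell is in the 100-year floodplain."""
    return sum(p for cls, p in class_probs.items()
               if height_above_stream_m <= CLASS_THRESHOLDS_M[cls])

probs = {"low": 0.2, "medium": 0.5, "high": 0.3}
print(floodplain_probability(2.0, probs))   # below every threshold -> ~1.0
print(floodplain_probability(5.0, probs))   # medium + high -> 0.8
print(floodplain_probability(20.0, probs))  # above every threshold -> 0
```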

  6. Integrating machine learning techniques into robust data enrichment approach and its application to gene expression data.

    PubMed

    Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas

    2013-01-01

    The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from the lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing some sophisticated machine learning techniques. An automated sample generation framework could successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. This independence eliminates the bias of having the produced approach covering only certain characteristics of the domain and leading to samples skewed towards one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the analysed domain as possible and tries to incorporate them in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
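
    The PBN perspective can be sketched directly: each gene carries several candidate Boolean update rules with selection probabilities, and synthetic expression samples are read off the stochastic trajectory. The toy three-gene network below is invented for illustration, not learned from expression data:

```python
# Minimal Probabilistic Boolean Network (PBN) sample generator.
import random

random.seed(7)

# For each gene: a list of (probability, update rule over the current state).
RULES = {
    0: [(0.7, lambda s: s[1] and not s[2]), (0.3, lambda s: s[1])],
    1: [(1.0, lambda s: not s[0])],
    2: [(0.6, lambda s: s[0] or s[1]), (0.4, lambda s: s[2])],
}

def pick(rules):
    """Select one update rule according to its probability."""
    u, acc = random.random(), 0.0
    for p, f in rules:
        acc += p
        if u <= acc:
            return f
    return rules[-1][1]

def generate_samples(n_steps=10):
    """Run the PBN from a random state; each state is a synthetic sample."""
    state = tuple(random.random() < 0.5 for _ in range(3))
    samples = [state]
    for _ in range(n_steps):
        fs = [pick(RULES[g]) for g in range(3)]
        state = tuple(bool(f(state)) for f in fs)
        samples.append(state)
    return samples

for s in generate_samples():
    print([int(v) for v in s])
```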

  7. Next-Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements only takes 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.

  10. Biogenic Methane Generation Potential in the Eastern Nankai Trough, Japan: Effect of Reaction Temperature and Total Organic Carbon

    NASA Astrophysics Data System (ADS)

    Aung, T. T.; Fujii, T.; Amo, M.; Suzuki, K.

    2017-12-01

    Understanding the potential methane flux from the turbiditic sedimentary formations that fill the Pleistocene fore-arc basin along the eastern Nankai Trough is important in the quantitative assessment of gas hydrate resources. We consider that generated methane can exist in the sedimentary basin in three major forms: methane in methane hydrate, free gas, and methane dissolved in water. Generation of biomethane strongly depends on microbial activity, and microbes in turn survive in a diverse range of temperature, salinity and pH. This study aims to understand the effect of reaction temperature and total organic carbon on the generation of biomethane and its components. Biomarker analyses and culture experiments on core samples from the eastern Nankai Trough reveal that the methane generation rate peaks at various temperatures ranging from 12.5° to 35°. A simulation study of biomethane generation was made using a commercial basin-scale simulator, PetroMod, with different reaction temperatures and total organic carbon contents to predict how these affect the generation of biomethane. The reaction model is set as a Gaussian distribution with a constant hydrogen index and a standard deviation of 1. A series of simulation cases with peak reaction temperatures ranging from 12.5° to 35° and total organic carbon of 0.6% to 3% was conducted and analyzed. Simulation results show a linear decrease in generation potential with increasing reaction temperature, and the decrease becomes larger in models with higher total organic carbon. At higher reaction temperatures, >30°, extremely low generation potential was found; this is because the modeled source formation is less than 1 km thick and most of the formation does not reach temperatures above 30°. In terms of the components, methane in methane hydrate and free methane increase with increasing TOC, with a drastic increase in free methane observed in the model with 3% TOC. The amount of methane dissolved in water is almost the same for all models.

  11. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.

  12. Generating Performance Models for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  13. Integrated Control Modeling for Propulsion Systems Using NPSS

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Felder, James L.; Lavelle, Thomas M.; Withrow, Colleen A.; Yu, Albert Y.; Lehmann, William V. A.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS), an advanced engineering simulation environment used to design and analyze aircraft engines, has been enhanced by integrating control development tools into it. One of these tools is a generic controller interface that allows NPSS to communicate with control development software environments such as MATLAB and EASY5. The other tool is a linear model generator (LMG) that gives NPSS the ability to generate linear, time-invariant state-space models. Integrating these tools into NPSS enables it to be used for control system development. This paper will discuss the development and integration of these tools into NPSS. In addition, it will show a comparison of transient model results of a generic, dual-spool, military-type engine model that has been implemented in NPSS and Simulink. It will also show the linear model generator's ability to approximate the dynamics of a nonlinear NPSS engine model.
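
    What a linear model generator does can be sketched with numerical Jacobians: perturb a nonlinear state function f(x, u) around a trim point (x0, u0) to obtain the A and B matrices of a linear time-invariant model x_dot ≈ A(x - x0) + B(u - u0). The toy "spool dynamics" below stand in for an NPSS engine model; the LMG's actual numerics are not shown here:

```python
# Finite-difference linearization of a nonlinear state-space model.

def jacobians(f, x0, u0, eps=1e-6):
    """Forward-difference A = df/dx and B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp = list(x0); xp[j] += eps
        fj = f(xp, u0)
        for i in range(n):
            A[i][j] = (fj[i] - f0[i]) / eps
    for j in range(m):
        up = list(u0); up[j] += eps
        fj = f(x0, up)
        for i in range(n):
            B[i][j] = (fj[i] - f0[i]) / eps
    return A, B

# Toy nonlinear "spool dynamics": states [N1, N2], input [fuel flow].
def f(x, u):
    n1, n2 = x
    wf = u[0]
    return [-0.5 * n1 + 0.1 * n1 * n2 + 2.0 * wf,
            -0.8 * n2 + 0.05 * n1 ** 2 + 1.5 * wf]

A, B = jacobians(f, x0=[1.0, 1.0], u0=[0.2])
print(A)  # close to [[-0.4, 0.1], [0.1, -0.8]]
print(B)  # close to [[2.0], [1.5]]
```

    A real LMG would linearize at a steady-state operating point found by the engine balance; the mechanics of perturbing each state and input in turn are the same.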

  14. Selective interference with image retention and generation: evidence for the workspace model.

    PubMed

    van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio

    2009-08-01

    We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.

  15. Generation of wavy structure on lipid membrane by peripheral proteins: a linear elastic analysis.

    PubMed

    Mahata, Paritosh; Das, Sovan Lal

    2017-05-01

    We carry out a linear elastic analysis to study wavy structure generation on lipid membranes by peripheral membrane proteins. We model the lipid membrane as a linearly elastic and anisotropic material. The hydrophobic insertion by proteins into the lipid membrane is idealized as penetration of rigid rod-like inclusions into the membrane, and the electrostatic interaction between protein and membrane is modeled by a distributed surface traction acting on the membrane surface. With the proposed model we study curvature generation by several binding domains of peripheral membrane proteins containing BAR domains and amphipathic alpha-helices. It is observed that electrostatic interaction is essential for curvature generation by the BAR domains.

  16. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term, a sequence of terms, multiple individual terms, multiple sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
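
    The core idea, assembling phrases from contextual relations learned from a text database, can be sketched with a bigram model. The bigram choice and the toy corpus are illustrative stand-ins for the patent's relational model:

```python
# Build term-succession relations from a tiny text database, then
# assemble a phrase contextually related to a query term by greedily
# following the strongest relation at each step.
from collections import Counter, defaultdict

DOCS = [
    "engine failure during takeoff",
    "engine failure during climb",
    "engine fire during takeoff",
]

follows = defaultdict(Counter)
for doc in DOCS:
    terms = doc.split()
    for a, b in zip(terms, terms[1:]):
        follows[a][b] += 1

def generate_phrase(query, max_len=4):
    """Extend the query with the most frequent successor terms."""
    phrase = [query]
    while len(phrase) < max_len and follows[phrase[-1]]:
        phrase.append(follows[phrase[-1]].most_common(1)[0][0])
    return " ".join(phrase)

print(generate_phrase("engine"))  # engine failure during takeoff
```

    The patent's iterative variant corresponds to feeding generated phrases back in as new queries.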

  17. Timing and petroleum sources for the Lower Cretaceous Mannville Group oil sands of northern Alberta based on 4-D modeling

    USGS Publications Warehouse

    Higley, D.K.; Lewan, M.D.; Roberts, L.N.R.; Henry, M.

    2009-01-01

    The Lower Cretaceous Mannville Group oil sands of northern Alberta have an estimated 270.3 billion m3 (BCM) (1700 billion bbl) of in-place heavy oil and tar. Our study area includes oil sand accumulations and downdip areas that partially extend into the deformation zone in western Alberta. The oil sands are composed of highly biodegraded oil and tar, collectively referred to as bitumen, whose source remains controversial. This is addressed in our study with a four-dimensional (4-D) petroleum system model. The modeled primary trap for generated and migrated oil is subtle structures. A probable seal for the oil sands was a gradual updip removal of the lighter hydrocarbon fractions as migrated oil was progressively biodegraded. This is hypothetical because the modeling software did not include seals resulting from the biodegradation of oil. Although the 4-D model shows that source rocks ranging from the Devonian-Mississippian Exshaw Formation to the Lower Cretaceous Mannville Group coals and Ostracode zone contributed oil to Mannville Group reservoirs, source rocks in the Jurassic Fernie Group (Gordondale Member and Poker Chip A shale) were the initial and major contributors. Kinetics associated with the type IIS kerogen in Fernie Group source rocks resulted in the early generation and expulsion of oil, as early as 85 Ma and prior to the generation from the type II kerogen of deeper and older source rocks. The modeled 50% peak transformation to oil was reached about 75 Ma for the Gordondale Member and Poker Chip A shale near the west margin of the study area, and prior to onset about 65 Ma from other source rocks. This early petroleum generation from the Fernie Group source rocks resulted in large volumes of generated oil prior to the Laramide uplift and onset of erosion (~58 Ma), which curtailed oil generation from all source rocks. Oil generation from all source rocks ended by 40 Ma. Although the modeled study area did not include possible western contributions of generated oil to the oil sands, the amount generated by the Jurassic source rocks within the study area was 475 BCM (2990 billion bbl).

  18. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
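
    The study's central point can be reproduced in miniature: generate data from a known model, add measurement noise, and check whether model selection still recovers the generator. The sketch below deliberately replaces the paper's Bayesian machinery (exceedance probabilities) with a plain BIC comparison of two linear models; it is a simplified stand-in, not the paper's method:

```python
# Does model selection recover the data-generating model under noise?
import math, random

random.seed(3)

def fit_sse(xs, ys, use_slope):
    """Sum of squared errors for an intercept-only or slope+intercept fit."""
    n = len(xs)
    if not use_slope:
        mean = sum(ys) / n
        return sum((y - mean) ** 2 for y in ys)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bic(sse, n, k):
    return n * math.log(sse / n) + k * math.log(n)

def winner(n_points, noise_sd):
    """Generate data from the slope model, pick the model with lower BIC."""
    xs = [i / n_points for i in range(n_points)]
    ys = [0.8 * x + random.gauss(0.0, noise_sd) for x in xs]
    flat = bic(fit_sse(xs, ys, False), n_points, 1)
    slope = bic(fit_sse(xs, ys, True), n_points, 2)
    return "slope" if slope < flat else "flat"

print(winner(200, 0.05))  # clean, ample data: generator identified
print(winner(8, 1.0))     # noisy, scarce data: recovery is unreliable
```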

  19. Hybrid generative-discriminative human action recognition by combining spatiotemporal words with supervised topic models

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Wang, Cheng; Wang, Boliang

    2011-02-01

    We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.

  20. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.

  1. Modeling and Simulation of the Economics of Mining in the Bitcoin Market.

    PubMed

    Cocco, Luisanna; Marchesi, Michele

    2016-01-01

    On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain by hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
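
    The basic economics each mining agent faces can be written down in one function: a miner's expected share of block rewards is its hashrate over the network hashrate, and profit is that revenue minus electricity cost. All numbers below are illustrative, not market data; only the 144 blocks per day follows from Bitcoin's 10-minute block target:

```python
# Back-of-the-envelope miner profitability, the quantity agents in the
# market model trade off against hardware and electricity expenditures.

def daily_profit_usd(miner_hashrate, network_hashrate, block_reward_btc,
                     btc_price_usd, power_w, electricity_usd_per_kwh,
                     blocks_per_day=144):
    """Expected daily profit: reward share minus electricity cost."""
    revenue = (miner_hashrate / network_hashrate) * blocks_per_day \
              * block_reward_btc * btc_price_usd
    cost = power_w / 1000.0 * 24.0 * electricity_usd_per_kwh
    return revenue - cost

# Illustrative numbers only: a small rig on a large network.
p = daily_profit_usd(miner_hashrate=1e12, network_hashrate=1e18,
                     block_reward_btc=12.5, btc_price_usd=400.0,
                     power_w=1300.0, electricity_usd_per_kwh=0.10)
print(round(p, 2))  # negative here: this rig mines at a loss
```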

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shaobu; Lu, Shuai; Zhou, Ning

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves better reduction ratios and smaller response errors than traditional coherency aggregation methods.
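
    The three DEAR steps can be sketched on synthetic data. The toy generator responses and the simple attribution rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
modes = np.vstack([np.sin(2 * np.pi * 0.5 * t),
                   np.exp(-t) * np.cos(2 * np.pi * 1.2 * t)])
mix = rng.normal(size=(8, 2))                       # 8 generators share 2 modes
X = mix @ modes + 0.01 * rng.normal(size=(8, 200))  # measured responses

# 1) feature extraction: dominant dynamic features via SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
features = Vt[:2]

# 2) attribution: pick distinct generators most similar to each feature
scores = np.abs(X @ features.T)                     # (8, 2) similarity
char_gens = []
for j in range(features.shape[0]):
    for g in np.argsort(-scores[:, j]):
        if g not in char_gens:
            char_gens.append(int(g))
            break

# 3) reconstruction: fit every generator onto the characteristic ones
B = X[char_gens]
coef, *_ = np.linalg.lstsq(B.T, X.T, rcond=None)
err = np.linalg.norm(X - (B.T @ coef).T) / np.linalg.norm(X)
```

    Because the toy data are essentially rank two, two characteristic generators reconstruct the rest almost exactly; in the paper the same logic runs on measured rotor angles and voltages.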

  3. 3D molecular models of whole HIV-1 virions generated with cellPACK

    PubMed Central

    Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.

    2014-01-01

    As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262

  4. Generation and detection of plasmonic nanobubbles in zebrafish.

    PubMed

    Lukianova-Hleb, E Y; Santiago, C; Wagner, D S; Hafner, J H; Lapotko, D O

    2010-06-04

    The zebrafish embryo has been evaluated as an in vivo model for plasmonic nanobubble (PNB) generation and detection at the nanoscale. The embryo is easily observed and manipulated using the same methodology as for the application of PNBs in vitro. Injection of gold nanoparticles and irradiation with a short laser pulse resulted in the generation of PNBs in zebrafish with parameters similar to those of PNBs generated in water and in cultured living cells. These PNBs did not cause systemic damage; thus we have demonstrated an in vivo model for rapid and precise testing of plasmonic nanotechnologies.

  5. Examples of grid generation with implicitly specified surfaces using GridPro (TM)/az3000. 1: Filleted multi-tube configurations

    NASA Technical Reports Server (NTRS)

    Cheng, Zheming; Eiseman, Peter R.

    1995-01-01

    With examples, we illustrate how implicitly specified surfaces can be used for grid generation with GridPro/az3000. The particular examples address two questions: (1) How do you model intersecting tubes with fillets? and (2) How do you generate grids inside the intersected tubes? The implication is much more general. With the results in a forthcoming paper which develops an easy-to-follow procedure for implicit surface modeling, we provide a powerful means for rapid prototyping in grid generation.
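
    A minimal sketch of what "implicitly specified surfaces" means here: two tubes defined by signed distance functions, blended with a polynomial smooth minimum so their intersection acquires a fillet. The radii and blend width are invented, and GridPro itself is not involved; a grid generator would resolve the zero level set of the resulting function:

```python
import math

def cyl_x(y, z, r=1.0):
    """Signed distance to an infinite tube along the x-axis."""
    return math.hypot(y, z) - r

def cyl_z(x, y, r=0.7):
    """Signed distance to an infinite tube along the z-axis."""
    return math.hypot(x, y) - r

def smooth_min(a, b, k=0.3):
    """Polynomial smooth minimum; k controls the fillet width."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def filleted_tubes(x, y, z):
    """Implicit function whose zero level set is the filleted union."""
    return smooth_min(cyl_x(y, z), cyl_z(x, y))
```

    Points inside either tube evaluate negative, points outside positive, and near the intersection the blend dips below the plain minimum, which is exactly what rounds the junction.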

  6. Model for Generation of Neutrons in a Compact Diode with Laser-Plasma Anode and Suppression of Electron Conduction Using a Permanent Cylindrical Magnet

    NASA Astrophysics Data System (ADS)

    Shikanov, A. E.; Vovchenko, E. D.; Kozlovskii, K. I.; Rashchikov, V. I.; Shatokhin, V. L.

    2018-04-01

    A model for acceleration of deuterons and generation of neutrons in a compact laser-plasma diode with electron isolation using magnetic field generated by a hollow cylindrical permanent magnet is presented. Experimental and computer-simulated neutron yields are compared for the diode structure under study. An accelerating neutron tube with a relatively high neutron generation efficiency can be constructed using suppression of electron conduction with the aid of a magnet placed in the vacuum volume.

  7. Table-driven software architecture for a stitching system

    NASA Technical Reports Server (NTRS)

    Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)

    2001-01-01

    Native code for a CNC stitching machine is generated by generating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.
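
    The run-time lookup the patent describes can be sketched as follows; the table values and the nearest-entry-below rule are invented for illustration:

```python
TENSION_TABLE = {2: 40, 4: 55, 6: 70}            # thickness (mm) -> tension (cN)
PATH_TABLE = {2: "short_loop", 4: "mid_loop", 6: "long_loop"}

def lookup(table, thickness):
    """Use the nearest tabulated thickness at or below the requested one."""
    keys = [k for k in sorted(table) if k <= thickness]
    if not keys:
        raise ValueError("thickness below table range")
    return table[keys[-1]]

def stitch_params(thickness):
    """Thread tension and path geometry for a given preform thickness."""
    return lookup(TENSION_TABLE, thickness), lookup(PATH_TABLE, thickness)
```

    Keeping the mapping in tables rather than in the generated native code is the point of the design: the same stitching program adapts when the tables are retuned.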

  8. A Probabilistic Model of Illegal Drug Trafficking Operations in the Eastern Pacific and Caribbean Sea

    DTIC Science & Technology

    2013-09-01

    ... partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial-temporal heat map specifying the ... We complement and vet such complicated simulation by developing more analytically tractable probability models to generate the heat map.

  9. Automation on the generation of genome-scale metabolic models.

    PubMed

    Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández

    2012-12-01

    Nowadays, the reconstruction of genome-scale metabolic models is a nonautomated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models taking into account the probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps previously implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps of the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform were studied together with published models that had been manually curated. Network properties of the models, such as connectivity and average shortest path, were compared and analyzed.
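
    The network properties used in the validation step (connectivity and average shortest path) can be computed with a plain breadth-first search; the toy metabolite graph below is invented, not from COPABI:

```python
from collections import deque

def avg_degree(graph):
    """Mean number of neighbours per node (a simple connectivity measure)."""
    return sum(len(v) for v in graph.values()) / len(graph)

def avg_shortest_path(graph):
    """Mean BFS distance over all ordered pairs of reachable nodes."""
    total = pairs = 0
    for src in graph:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in graph[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# tiny invented metabolite adjacency (undirected edges listed both ways)
toy = {"glc": ["g6p"], "g6p": ["glc", "f6p"],
       "f6p": ["g6p", "fbp"], "fbp": ["f6p"]}
```

    Comparing these two numbers between an automatically generated model and a curated one is a cheap first sanity check before reaction-level comparison.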

  10. Rethinking the Default Construction of Multimodel Climate Ensembles

    DOE PAGES

    Rauser, Florian; Gleckler, Peter; Marotzke, Jochem

    2015-07-21

    Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.

  11. Automatic generation of nursing narratives from entity-attribute-value triplet for electronic nursing records system.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Lee, Joo Yun; Jo, Soo Jung; Jeon, Eunjoo; Byeon, Namsoo; Choi, Seung Yong; Chung, Eunja

    2014-01-01

    The aim of this study is to develop and evaluate a natural language generation system to populate nursing narratives using detailed clinical models. Semantic, contextual, and syntactic knowledge was extracted, and a natural language generation system linking these knowledge sources was developed. The quality of the generated nursing narratives was evaluated by three nurse experts using a five-point rating scale. With 82 detailed clinical models, a total of 66,888 nursing narratives in four different types of statement were generated. The mean score for overall quality was 4.66; for content, 4.60; for grammaticality, 4.40; for writing style, 4.13; and for correctness, 4.60. The system developed in this study generated nursing narratives with different levels of granularity. The generated nursing narratives can improve the semantic interoperability of nursing data documented in nursing records.
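
    A minimal sketch of generating a narrative from an entity-attribute-value triplet via templates; the statement types, templates, and example triplet are invented, not those of the study's detailed clinical models:

```python
# invented statement-type templates keyed the way an EAV record might be
TEMPLATES = {
    "assessment": "{entity}: {attribute} assessed as {value}.",
    "intervention": "Performed {attribute} for {entity}; result: {value}.",
}

def generate_narrative(stmt_type, entity, attribute, value):
    """Fill the template for one entity-attribute-value triplet."""
    return TEMPLATES[stmt_type].format(entity=entity,
                                       attribute=attribute, value=value)

line = generate_narrative("assessment", "Pain", "intensity", "3/10")
```

    The study's contribution is essentially a much richer version of this mapping, with syntactic knowledge deciding how the slots combine into grammatical Korean/English sentences.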

  12. Computational model for calculating the dynamical behaviour of generators caused by unbalanced magnetic pull and experimental validation

    NASA Astrophysics Data System (ADS)

    Pennacchi, Paolo

    2008-04-01

    The modelling of the unbalanced magnetic pull (UMP) in generators and the experimental validation of the proposed method are presented in this paper. The UMP is one of the most remarkable effects of electromechanical interaction in rotating machinery. As a consequence of rotor eccentricity, the imbalance of the electromagnetic forces acting between rotor and stator generates a net radial force. This phenomenon can be avoided by means of careful assembly and manufacture in small, stiff machines such as electrical motors. In contrast, eccentricity of the active part of the rotor with respect to the stator is unavoidable in the big generators of power plants, because they operate above their first critical speed and are supported by oil-film bearings. In the first part of the paper, a method for calculating the UMP force is described. This model is more general than those available in the literature, which are limited to circular orbits. The model is based on the actual position of the rotor inside the stator, and therefore on the actual air-gap distribution, regardless of the orbit type. The closed form of the nonlinear UMP force components is presented. In the second part, the experimental validation of the proposed model is presented. The dynamical behaviour in the time domain of a steam turbo-generator of a power plant is considered, and it is shown that the model is able to reproduce the dynamical effects due to the excitation of the magnetic field in the generator.

  13. An analytical and numerical study of Galton-Watson branching processes relevant to population dynamics

    NASA Astrophysics Data System (ADS)

    Jang, Sa-Han

    Galton-Watson branching processes of relevance to human population dynamics are the subject of this thesis. We begin with a historical survey of the invention of this model in the middle of the 19th century, for the purpose of modelling the extinction of unusual surnames in France and Britain. We then review the principal developments and refinements of this model, and their applications to a wide variety of problems in biology and physics. Next, we discuss in detail the case where the probability generating function for a Galton-Watson branching process is a geometric series, which can be summed in closed form to yield a fractional linear generating function that can be iterated indefinitely in closed form. We then describe the matrix method of Keyfitz and Tyree, and use it to determine how large a matrix must be chosen to model accurately a Galton-Watson branching process over a very large number of generations, of the order of hundreds or even thousands. Finally, we show that any attempt to explain the recent evidence for the existence, thousands of generations ago, of a 'mitochondrial Eve' and a 'Y-chromosomal Adam' in terms of the standard Galton-Watson branching process, or indeed any statistical model that assumes equality of the probabilities of passing one's genes to one's descendants in later generations, is unlikely to be successful. We explain that such models take no account of the advantages that the descendants of the most successful individuals in earlier generations enjoy over their contemporaries, which must play a key role in human evolution.
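
    The geometric-offspring case discussed above is easy to simulate directly. With offspring distribution P(k) = (1-p)p^k the mean offspring number is p/(1-p), so for p = 0.4 the process is subcritical and extinction is certain; the parameters below are illustrative:

```python
import random

def geometric_offspring(p):
    """Draw k offspring with P(k) = (1 - p) * p**k."""
    k = 0
    while random.random() < p:
        k += 1
    return k

def gw_population(generations, p=0.4, seed=1):
    """Total population size of a Galton-Watson process, generation by generation."""
    random.seed(seed)
    sizes = [1]                       # start from a single ancestor
    for _ in range(generations):
        sizes.append(sum(geometric_offspring(p) for _ in range(sizes[-1])))
    return sizes

sizes = gw_population(200)
```

    The same distribution is the one whose generating function is fractional linear, so simulated extinction times can be checked against the closed-form iterates the thesis derives.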

  14. Generating High Resolution Climate Scenarios Through Regional Climate Modelling Over Southern Africa

    NASA Astrophysics Data System (ADS)

    Ndhlovu, G. Z.; Woyessa, Y. E.; Vijayaraghavan, S.

    2017-12-01

    Climate change has impacted the global environment, and the African continent, especially Southern Africa, regarded as one of the most vulnerable regions in Africa, has not been spared from these impacts. Global Climate Models (GCMs) with coarse horizontal resolutions of 150-300 km do not provide sufficient detail at the local basin scale because of the mismatch between the size of river basins and the grid cell of the GCM. This makes it difficult to apply the outputs of GCMs directly to impact studies such as hydrological modelling, and it necessitates regional climate modelling at high resolutions that provides detailed information at regional and local scales for studying both climate change and its impacts. To this end, an experiment was set up and conducted with PRECIS, a regional climate model, to generate climate scenarios at a high resolution of 25 km for the local region in the Zambezi River basin of Southern Africa. The major input data used included lateral and surface boundary conditions based on the GCMs. The data are processed, analysed, and compared with climate change data generated for Africa by the CORDEX project. This paper highlights the major differences between the climate scenarios generated by the PRECIS model and by the CORDEX project for Africa, and gives recommendations for further research on the generation of climate scenarios. Climatic variables such as precipitation and temperature have been analysed for floods and droughts in the region. The paper also describes the setting up and running of an experiment using a high-resolution PRECIS model, including running the model and generating the output variables on a sub-basin scale. Regional climate modelling, which provides information on climate change impacts, may lead to enhanced understanding of adaptive water resources management.
    Understanding regional climate modelling results on a sub-basin scale is the first step in analysing complex hydrological processes and a basis for designing adaptation and mitigation strategies in the region. Key words: climate change, regional climate modelling, hydrological processes, extremes, scenarios

  15. Influence of Rainfall Product on Hydrological and Sediment Outputs when Calibrating the STREAP Rainfall Generator for the CAESAR-Lisflood Landscape Evolution Model

    NASA Astrophysics Data System (ADS)

    Skinner, Christopher; Peleg, Nadav; Quinn, Niall

    2017-04-01

    The use of Landscape Evolution Models often requires a timeseries of rainfall to drive the model. The spatial and temporal resolution of the driving data has an impact on several model outputs, including the shape of the landscape itself. Attempts to compensate for the spatiotemporal smoothing of local rainfall intensities are insufficient and may exacerbate these issues, meaning that to produce the best results the model needs to be run with data of the highest spatial and temporal resolutions available. Some rainfall generators are able to produce timeseries with high spatial and temporal resolution, and observed data are used for their calibration. However, rainfall observations are highly uncertain and vary between different products (e.g. raingauges, weather radar), and these differences may cascade through the Landscape Evolution Model. Here, we used the STREAP rainfall generator to produce high spatial (1 km) and temporal (hourly) resolution ensembles of rainfall for a 50-year period, and used these to drive the CAESAR-Lisflood Landscape Evolution Model for a test catchment. Three different calibrations of STREAP were used against different products: gridded raingauge (TBR), weather radar (NIMROD), and a merged product of the two. Analysis of the discharge and sediment yields showed that the model runs driven by STREAP calibrated against the different products were statistically significantly different, with the raingauge calibration producing 12.4% more sediment on average over the 50-year period. The merged product produced results between those of the raingauge and radar products. The results demonstrate the importance of considering the selection of rainfall driving data in Landscape Evolution Modelling. Rainfall products are highly uncertain, different instruments observe rainfall differently, and these uncertainties are clearly shown to cascade through the calibration of the rainfall generator and the Landscape Evolution Model.
    Merging raingauge and radar products is a common practice operationally, and by using features of both to calibrate the rainfall generator a more robust rainfall timeseries is likely produced.

  16. Models for nearly every occasion: Part I - One box models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-01-01

    The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust in immediately capturing freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The "calibration" procedure generates estimates of the generation rate and all of the remaining unknown model variables.
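
    The steady-state logic of the augmented one box model can be sketched as follows; the variable names and example numbers are illustrative, not the paper's notation or data:

```python
def one_box_average(gen_rate, vent_rate, capture_eff=0.0, time_fraction=1.0):
    """Average concentration (mg/m3) in a well-mixed room at steady state:
    (fraction of time generating) x uncaptured emission / ventilation rate."""
    return time_fraction * gen_rate * (1.0 - capture_eff) / vent_rate

# e.g. a 100 mg/min source, 20 m3/min general ventilation, 80% local
# exhaust capture, contaminant generated during half of the shift:
c_avg = one_box_average(100.0, 20.0, capture_eff=0.8, time_fraction=0.5)
```

    The time-fraction factor is what lets a single steady-state formula stand in for cyclic or irregular emission patterns, under the long-duration or zero-endpoint conditions the abstract states.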

  17. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button

    PubMed Central

    2010-01-01

    Background There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. Methods The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS’ generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This ‘model-driven’ method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. Results In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist’s satisfaction. 
Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the ‘ExtractModel’ procedure. Conclusions The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org. PMID:21210979

  18. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  19. Municipal solid waste generation in municipalities: Quantifying impacts of household structure, commercial waste and domestic fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebersorger, S.; Beigl, P., E-mail: peter.beigl@boku.ac.at

    Waste management planning requires reliable data concerning waste generation, influencing factors on waste generation and forecasts of waste quantities based on facts. This paper aims at identifying and quantifying differences between different municipalities' municipal solid waste (MSW) collection quantities based on data from waste management and on socio-economic indicators. A large set of 116 indicators from 542 municipalities in the Province of Styria was investigated. The resulting regression model included municipal tax revenue per capita, household size and the percentage of buildings with solid fuel heating systems. The model explains 74.3% of the MSW variation and the model assumptions are met. Other factors such as tourism, home composting or age distribution of the population did not significantly improve the model. According to the model, 21% of MSW collected in Styria was commercial waste and 18% of the generated MSW was burned in domestic heating systems. While the percentage of commercial waste is consistent with literature data, practically no literature data are available for the quantity of MSW burned, which seems to be overestimated by the model. The resulting regression model was used as basis for a waste prognosis model (Beigl and Lebersorger, in preparation).

  20. Municipal solid waste generation in municipalities: quantifying impacts of household structure, commercial waste and domestic fuel.

    PubMed

    Lebersorger, S; Beigl, P

    2011-01-01

    Waste management planning requires reliable data concerning waste generation, influencing factors on waste generation and forecasts of waste quantities based on facts. This paper aims at identifying and quantifying differences between different municipalities' municipal solid waste (MSW) collection quantities based on data from waste management and on socio-economic indicators. A large set of 116 indicators from 542 municipalities in the Province of Styria was investigated. The resulting regression model included municipal tax revenue per capita, household size and the percentage of buildings with solid fuel heating systems. The model explains 74.3% of the MSW variation and the model assumptions are met. Other factors such as tourism, home composting or age distribution of the population did not significantly improve the model. According to the model, 21% of MSW collected in Styria was commercial waste and 18% of the generated MSW was burned in domestic heating systems. While the percentage of commercial waste is consistent with literature data, practically no literature data are available for the quantity of MSW burned, which seems to be overestimated by the model. The resulting regression model was used as basis for a waste prognosis model (Beigl and Lebersorger, in preparation). Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Creatinine generation from kinetic modeling with or without postdialysis serum creatinine measurement: results from the HEMO study.

    PubMed

    Daugirdas, John T; Depner, Thomas A

    2017-11-01

    A convenient method to estimate the creatinine generation rate and measures of creatinine clearance in hemodialysis patients using formal kinetic modeling and standard pre- and postdialysis blood samples has not been described. We used data from 366 dialysis sessions characterized during follow-up month 4 of the HEMO study, during which cross-dialyzer clearances for both urea and creatinine were available. Blood samples taken at 1 h into dialysis and 30 min and 60 min after dialysis were used to determine how well a two-pool kinetic model could predict creatinine concentrations and other kinetic parameters, including the creatinine generation rate. An extrarenal creatinine clearance of 0.038 l/kg/24 h was included in the model. Diffusive cross-dialyzer clearances of urea [230 (SD 37) mL/min] correlated well (R2 = 0.78) with creatinine clearances [164 (SD 30) mL/min]. When the effective diffusion volume flow rate was set at 0.791 times the blood flow rate for the cross-dialyzer clearance measurements at 1 h into dialysis, the mean calculated volume of creatinine distribution averaged 29.6 (SD 7.2) L, compared with 31.6 (SD 7.0) L for urea (P < 0.01). The modeled creatinine generation rate [1183 (SD 463) mg/day] averaged 100.1% (SD 29%; median 99.3%) of that predicted in nondialysis patients by an anthropometric equation. A simplified method for modeling the creatinine generation rate using the urea distribution volume and urea dialyzer clearance, without use of the postdialysis serum creatinine measurement, gave results [1187 (SD 475) mg/day] that closely matched the formally modeled value (R2 = 0.971). Our analysis confirms previous findings of similar distribution volumes for creatinine and urea. After taking extrarenal clearance into consideration, the creatinine generation rate in dialysis patients is similar to that in nondialysis patients.
    A simplified method based on urea clearance and urea distribution volume, not requiring a postdialysis serum creatinine measurement, can be used to yield creatinine generation rates that closely match those determined from standard modeling. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  2. System and method for anomaly detection

    DOEpatents

    Scherrer, Chad

    2010-06-15

    A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.
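
    The frequency-table scoring idea can be sketched as follows. The per-variable independence and Laplace smoothing below are illustrative simplifications of the patent's conditional independence model, and the traffic fields are invented:

```python
import math
from collections import Counter

class FrequencyScorer:
    """Score observations by smoothed negative log-frequency of their values."""

    def __init__(self):
        self.tables = {}          # variable name -> Counter of observed values
        self.n = 0

    def update(self, obs):
        """Accumulate counts from one observation (dict of variable -> value)."""
        self.n += 1
        for var, val in obs.items():
            self.tables.setdefault(var, Counter())[val] += 1

    def score(self, obs):
        """Higher score = rarer combination under the accumulated counts."""
        s = 0.0
        for var, val in obs.items():
            count = self.tables.get(var, Counter())[val]
            s += -math.log((count + 1) / (self.n + 2))   # Laplace smoothing
        return s

scorer = FrequencyScorer()
for _ in range(50):
    scorer.update({"port": 80, "proto": "tcp"})
common = scorer.score({"port": 80, "proto": "tcp"})
rare = scorer.score({"port": 31337, "proto": "udp"})   # unseen values score high
```

    Maintaining counts rather than raw observations is what makes this workable on streaming network traffic, which is the setting the patent targets.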

  3. Modeling and simulation research on electromagnetic and energy-recycled damper based on Adams

    NASA Astrophysics Data System (ADS)

    Zhou, C. F.; Zhang, K.; Zhang, Pengfei

    2018-05-01

    In order to study the voltage and power output characteristics of the electromagnetic and energy-recycled damper, which consists of a gear, a rack, and a generator, an Adams model of the damper and a Simulink model of the generator are established, and a co-simulation is carried out with these two models. Output indexes such as gear speed and generator power are obtained from the simulation, and the results demonstrate that the peak voltage of the damper is 25 V and its maximum output power is 8 W. This research provides a basis for the prototype development of an electromagnetic and energy-recycled damper with gear and rack.

  4. A neural model of rule generation in inductive reasoning.

    PubMed

    Rasmussen, Daniel; Eliasmith, Chris

    2011-01-01

    Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects. Copyright © 2011 Cognitive Science Society, Inc.

  5. Modelling ultrasound guided wave propagation for plate thickness measurement

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Dabak, Anand; Murthy, Nitish Krishna

    2014-03-01

    Structural health monitoring refers to monitoring the health of the plate-like walls of large reactors, pipelines and other structures, in terms of corrosion detection and thickness estimation. The objective of this work is to model the ultrasonic guided waves generated in a plate. A piezoelectric transducer is excited by an input pulse to generate ultrasonic guided Lamb waves in the plate, which are received by another piezoelectric transducer. In contrast with existing methods, we develop a mathematical model of the direct component of the signal (DCS) recorded at the terminals of the receiving transducer. The DCS model uses a maximum-likelihood technique to estimate its parameters, namely the time delay of the signal due to the transducer delay and the amplitude scaling of each Lamb wave mode due to attenuation, while accounting for the spreading of the received signal in time due to dispersion. The maximum-likelihood estimate minimizes the energy difference between the experimental signal and the DCS model-generated signal. We demonstrate that the DCS model matches closely with experimentally recorded signals and show that it can be used to estimate the thickness of the plate. The main idea of the thickness estimation algorithm is to generate a bank of DCS model-generated signals, each corresponding to a different plate thickness, and then find the closest match to the received signal among these signals, yielding an estimate of the thickness. Our approach therefore provides a complementary suite of analytics to existing thickness monitoring approaches.
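    The bank-matching idea can be sketched in a few lines. The `synth_signal` function below is an invented stand-in for the paper's DCS model (a delayed, damped tone whose delay scales with thickness); the wave speed, pulse frequency and noise level are illustrative assumptions:

```python
import numpy as np

def synth_signal(t, thickness, c=5900.0, f0=1e6):
    """Hypothetical stand-in for a DCS-style model: a Gaussian tone burst
    whose delay equals the round-trip time 2*thickness/c."""
    delay = 2.0 * thickness / c
    env = np.exp(-((t - delay) * f0) ** 2)
    return env * np.sin(2 * np.pi * f0 * (t - delay))

def estimate_thickness(received, t, candidates):
    """Closest-match search over a bank of model-generated signals:
    pick the candidate thickness minimising the residual energy."""
    errors = [np.sum((received - synth_signal(t, d)) ** 2) for d in candidates]
    return candidates[int(np.argmin(errors))]

t = np.linspace(0, 50e-6, 4000)
true_d = 0.010  # 10 mm plate
received = synth_signal(t, true_d) + 0.05 * np.random.default_rng(0).normal(size=t.size)
bank = [0.006, 0.008, 0.010, 0.012, 0.014]
est = estimate_thickness(received, t, bank)
```

    Minimising the residual energy is the discrete analogue of the paper's energy-difference criterion between experimental and model-generated signals.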

  6. Thermal modeling of the lithium/polymer battery

    NASA Astrophysics Data System (ADS)

    Pals, C. R.

    1994-10-01

    Research in the area of advanced batteries for electric-vehicle applications has increased steadily since the 1990 zero-emission-vehicle mandate of the California Air Resources Board. Due to their design flexibility and potentially high energy and power densities, lithium/polymer batteries are an emerging technology for electric-vehicle applications. Thermal modeling of lithium/polymer batteries is particularly important because the transport properties of the system depend exponentially on temperature. Two models have been presented for assessment of the thermal behavior of lithium/polymer batteries. The one-cell model predicts the cell potential, the concentration profiles, and the heat-generation rate during discharge. The cell-stack model predicts temperature profiles and heat transfer limitations of the battery. Due to the variation of ionic conductivity and salt diffusion coefficient with temperature, the performance of the lithium/polymer battery is greatly affected by temperature. Because of this variation, it is important to optimize the cell operating temperature and design a thermal management system for the battery. Since the thermal conductivity of the polymer electrolyte is very low, heat is not easily conducted in the direction perpendicular to cell layers. Temperature profiles in the cells are not as significant as expected because heat-generation rates in warmer areas of the cell stack are lower than heat-generation rates in cooler areas of the stack. This nonuniform heat-generation rate flattens the temperature profile. Temperature profiles as calculated by this model are not as steep as those calculated by previous models that assume a uniform heat-generation rate.
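    The exponential temperature dependence of the transport properties is often modelled with an Arrhenius-type law (the abstract does not specify the functional form used). A minimal sketch with illustrative, not fitted, parameters:

```python
import math

def arrhenius(T, sigma0=1.0, Ea=0.35, kB=8.617e-5):
    """Arrhenius-type ionic conductivity: sigma = sigma0 * exp(-Ea/(kB*T)).
    sigma0 (S/cm) and Ea (eV) are illustrative values, not measured data."""
    return sigma0 * math.exp(-Ea / (kB * T))

s_25 = arrhenius(298.15)   # conductivity at 25 degC
s_85 = arrhenius(358.15)   # conductivity at 85 degC
ratio = s_85 / s_25        # roughly an order of magnitude for Ea ~ 0.35 eV
```

    A roughly tenfold conductivity change over a 60 K window is why cell performance is so sensitive to the operating temperature and why warmer regions of the stack generate heat at a lower rate.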

  7. A strategy for the generation, characterization and distribution of animal models by The Michael J. Fox Foundation for Parkinson's Research.

    PubMed

    Baptista, Marco A S; Dave, Kuldip D; Sheth, Niketa P; De Silva, Shehan N; Carlson, Kirsten M; Aziz, Yasmin N; Fiske, Brian K; Sherer, Todd B; Frasier, Mark A

    2013-11-01

    Progress in Parkinson's disease (PD) research and therapeutic development is hindered by many challenges, including a need for robust preclinical animal models. Limited availability of these tools is due to technical hurdles, patent issues, licensing restrictions and the high costs associated with generating and distributing these animal models. Furthermore, the lack of standardization of phenotypic characterization and use of varying methodologies has made it difficult to compare outcome measures across laboratories. In response, The Michael J. Fox Foundation for Parkinson's Research (MJFF) is directly sponsoring the generation, characterization and distribution of preclinical rodent models, enabling increased access to these crucial tools in order to accelerate PD research. To date, MJFF has initiated and funded the generation of 30 different models, which include transgenic or knockout models of PD-relevant genes such as Park1 (also known as Park4 and SNCA), Park8 (LRRK2), Park7 (DJ-1), Park6 (PINK1), Park2 (Parkin), VPS35, EiF4G1 and GBA. The phenotypic characterization of these animals is performed in a uniform and streamlined manner at independent contract research organizations. Finally, MJFF created a central repository at The Jackson Laboratory (JAX) that houses both non-MJFF and MJFF-generated preclinical animal models. Funding from MJFF, which subsidizes the costs involved in transfer, rederivation and colony expansion, has directly resulted in over 2500 rodents being distributed to the PD community for research use.

  8. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    PubMed

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's capability to meet the requirements (average score 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models at the intersection of health care and clinical research.

  9. The stability of a class of synchronous generator damping model

    NASA Astrophysics Data System (ADS)

    Liu, Jun

    2018-03-01

    Electricity is indispensable to modern society and is the most convenient form of energy: it is easily transformed into other forms and is widely used in engineering, transportation and other fields. This paper studies a synchronous generator model with a damping machine; using the Lyapunov function method, we obtain sufficient conditions for the asymptotic stability of the model.
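    The abstract does not give the model, but for a standard single-machine swing equation with damping (an assumed illustrative form), the Lyapunov function method proceeds along these lines:

```latex
% Swing equation with damping (assumed form), \omega = \dot{\delta}:
M\ddot{\delta} + D\dot{\delta} + P_{\max}\sin\delta = P_m
% Energy-type Lyapunov candidate about the stable equilibrium \delta_s,
% where \sin\delta_s = P_m / P_{\max}:
V(\delta,\omega) = \tfrac{1}{2} M \omega^{2}
  - P_m\,(\delta - \delta_s)
  - P_{\max}\left(\cos\delta - \cos\delta_s\right)
% Along trajectories:
\dot{V} = \omega\left(M\dot{\omega} + P_{\max}\sin\delta - P_m\right)
        = -D\,\omega^{2} \le 0
```

    With V positive definite near the equilibrium and its derivative non-positive, LaSalle's invariance principle yields asymptotic stability for D > 0; the paper's sufficient conditions play the role of D > 0 in this sketch.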

  10. The Knowledge Building Paradigm: A Model of Learning for Net Generation Students

    ERIC Educational Resources Information Center

    Philip, Donald

    2005-01-01

    In this article Donald Philip describes Knowledge Building, a pedagogy based on the way research organizations function. The global economy, Philip argues, is driving a shift from older, industrial models to the model of the business as a learning organization. The cognitive patterns of today's Net Generation students, formed by lifetime exposure…

  11. Accuracy of DSM based on digital aerial image matching. (Polish Title: Dokładność NMPT tworzonego metodą automatycznego dopasowania cyfrowych zdjęć lotniczych)

    NASA Astrophysics Data System (ADS)

    Kubalska, J. L.; Preuss, R.

    2013-12-01

    Digital surface models (DSMs) are increasingly used in GIS databases as standalone products. They are also necessary to create other products such as 3D city models, true-orthophotos and object-oriented classifications. This article presents the results of DSM generation for the classification of vegetation in urban areas. The source data allowed a DSM to be produced using both an image matching method and ALS data. The creation of the DSM from digital images, obtained with an UltraCam-D digital Vexcel camera, was carried out in Match-T by INPHO. This program optimizes the configuration of the image matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analyzed by comparing the DSM generated in Match-T with the DSM generated from ALS data. Given the intended use of the generated DSM, it was decided to create the model in a GRID structure with a cell size of 1 m. With this parameter, a differential model of the two DSMs was also built, allowing the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with the multi-image matching method is competitive with surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional acquisition of ALS data appears unnecessary.
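    The differential-model comparison can be sketched with a few lines of array code. The grid values and the nodata convention below are illustrative, not the paper's data:

```python
import numpy as np

def dsm_difference_stats(dsm_a, dsm_b, nodata=-9999.0):
    """Differential model of two DSM grids on the same extent (1 m GRID
    assumed). Returns the difference surface and summary statistics."""
    a, b = np.asarray(dsm_a, float), np.asarray(dsm_b, float)
    valid = (a != nodata) & (b != nodata)      # skip gap areas in either model
    diff = np.where(valid, a - b, np.nan)
    d = diff[valid]
    return diff, {"mean": float(d.mean()),
                  "rmse": float(np.sqrt((d ** 2).mean()))}

matching_dsm = [[10.2, 10.4], [11.0, -9999.0]]   # e.g. from image matching
als_dsm      = [[10.0, 10.5], [11.2, 12.0]]      # e.g. from ALS data
diff, stats = dsm_difference_stats(matching_dsm, als_dsm)
```

    The mean difference exposes systematic bias between the two surfaces, while the RMSE summarises their relative accuracy.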

  12. JIGSAW-GEO (1.0): Locally Orthogonal Staggered Unstructured Grid Generation for General Circulation Modelling on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2017-01-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  13. JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren

    2017-06-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
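    A user-defined mesh-spacing function of the kind discussed can be as simple as a smooth blend between two target resolutions. Everything below (the tanh profile, the latitude band, the resolutions) is an illustrative assumption, not the JIGSAW-GEO API:

```python
import math

def mesh_spacing(lat_deg, h_coarse=100.0, h_fine=25.0, lat0=35.0, width=10.0):
    """Illustrative mesh-spacing function h(lat) in km: a fine-resolution band
    at low latitudes blends smoothly into a coarse background grid via a
    tanh transition centred at lat0 with the given transition width."""
    blend = 0.5 * (1.0 + math.tanh((abs(lat_deg) - lat0) / width))
    return h_fine + (h_coarse - h_fine) * blend

h_equator = mesh_spacing(0.0)    # inside the refined band -> near h_fine
h_polar = mesh_spacing(80.0)     # far outside the band -> near h_coarse
```

    A smooth, monotone transition like this is what keeps the grading gentle, which in turn preserves the element-quality bounds the refinement algorithm guarantees.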

  14. Adversarial Threshold Neural Computer for Molecular de Novo Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Vanhaelen, Quentin; Ivanenkov, Yan; Aladinskaya, Anastasia V; Aliper, Alex; Zhavoronkov, Alex

    2018-03-30

    In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% valid and 77% unique SMILES strings, while ORGANIC generates only 7% valid and 86% unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed the distributions of four molecular descriptors (number of atoms, molecular weight, logP, and TPSA) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not passed medicinal chemistry filters). Analysis of the key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC exhibited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; the results indicated that ATNC is an effective method for producing hit compounds.
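    The gating role of the AT block can be sketched in plain Python: candidates are passed to the reward environment only if the discriminator scores them above a threshold. The toy discriminator and reward function below are invented stand-ins, not the ATNC networks:

```python
def adversarial_threshold(candidates, discriminator, reward_fns, threshold=0.5):
    """Sketch of the AT filter between generator and environment: candidates
    scoring >= threshold are forwarded to the objective-reward functions;
    the rest are rejected for the generator to retry."""
    passed, rejected = [], []
    for smiles in candidates:
        if discriminator(smiles) >= threshold:
            passed.append((smiles, [r(smiles) for r in reward_fns]))
        else:
            rejected.append(smiles)
    return passed, rejected

disc = lambda s: 0.9 if "C" in s else 0.1    # toy discriminator score
length_reward = lambda s: len(s) / 10.0      # toy objective reward
passed, rejected = adversarial_threshold(["CCO", "NN"], disc, [length_reward])
```

    Filtering before the environment saves the (expensive) objective-reward evaluations for candidates the discriminator already considers plausible.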

  15. An open-access CMIP5 pattern library for temperature and precipitation: description and methodology

    NASA Astrophysics Data System (ADS)

    Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben

    2017-05-01

    Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess the performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest at high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output were smaller for the linear regression method than for the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that the pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5 °C, but the choice of methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
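    Per grid cell, the least squares regression method reduces to regressing the local change against global mean temperature; the slope is the pattern. A minimal sketch on synthetic data (two grid cells scaling at 1.5x and 0.8x the global warming):

```python
import numpy as np

def lsq_pattern(local_series, global_T):
    """Least squares regression pattern: per-cell slope of local change
    vs. global mean temperature, solved for all cells at once."""
    g = np.asarray(global_T, float)
    X = np.vstack([g, np.ones_like(g)]).T           # design: [global_T, 1]
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(local_series, float), rcond=None)
    return coeffs[0]    # slopes: local change per K of global mean change

years = np.arange(10)
global_T = 0.02 * years                      # toy global-mean warming series
local = np.outer(global_T, [1.5, 0.8])       # two cells, amplified and damped
pattern = lsq_pattern(local, global_T)
```

    Multiplying the pattern by any projected global mean temperature change then emulates the local response for that scenario, which is exactly what makes pattern scaling cheap.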

  16. Differences in contraceptive use across generations of migration among women of Mexican origin.

    PubMed

    Wilson, Ellen K

    2009-09-01

    To explore differences in contraceptive use among women of Mexican origin across generations of migration. Logit models were used to assess contraceptive use among 1,830 women of Mexican origin in Cycles 5 (1995) and 6 (2002) of the National Survey of Family Growth (NSFG). Analyses were stratified by age. Initial models controlled for survey year and underlying differences across generations of migration in age and parity; subsequent models added a range of potential mediating variables. Models account for significant interactions between generation of migration and parity. Among women under age 30 who have not yet had any children, women in their twenties with parity 3 or more, and women 30 or older with parity 1 or 2, those born in the US are much more likely to use contraception than immigrant women. For other levels of parity, there are no significant differences in contraceptive use across generations of migration. Generational differences in marital status, socio-economic status, health insurance coverage, and Catholic religiosity did little to mediate the association between generation of migration and contraceptive use. Among women of Mexican origin, patterns of contraceptive use among first-generation immigrants and women of generation 1.5 are similar to those of women in Mexico, with very low rates of contraceptive use among young women who have not yet had a child. Further research is needed to investigate the extent to which this pattern is due to fertility preferences, contraceptive access, or concerns about side effects and infertility. Patterns of contraceptive use appear to change more slowly with acculturation than many other factors, such as education, income, and work force participation.

  17. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each, and provides a generalization of known partitions such as the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible; however, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
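    A minimal BDA can be written directly. The purine/pyrimidine question below is an illustrative dichotomy chosen only to show the 32/32 split; it is not Rumer's dichotomy or any model from the paper:

```python
from itertools import product

# All 64 RNA codons.
codons = ["".join(c) for c in product("UCAG", repeat=3)]

def bda_first_base(codon):
    """An illustrative binary dichotomic question: is the first base a
    purine (A/G) or a pyrimidine (C/U)? Returns class 0 or 1."""
    return 0 if codon[0] in "AG" else 1

class0 = [c for c in codons if bda_first_base(c) == 0]
class1 = [c for c in codons if bda_first_base(c) == 1]
```

    Applying further BDAs inside each class refines the partition, which is how sequential application can reach anywhere from 2 up to 64 classes.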

  18. Application of SDSM and LARS-WG for simulating and downscaling of rainfall and temperature

    NASA Astrophysics Data System (ADS)

    Hassan, Zulkarnain; Shamsudin, Supiah; Harun, Sobri

    2014-04-01

    Climate change is believed to have significant impacts on water basins and regions, such as on runoff and hydrological systems. However, impact studies at the basin and regional scale are difficult, because the general circulation models (GCMs) widely used to simulate future climate scenarios do not provide the reliable daily series of rainfall and temperature needed for hydrological modeling. Downscaling techniques can derive reliable daily series of rainfall and temperature for climate scenarios from GCM output. In this study, statistical downscaling models are used to generate possible future values of local meteorological variables, such as rainfall and temperature, at selected stations in Peninsular Malaysia. The models are: (1) the statistical downscaling model (SDSM), which utilizes regression models and stochastic weather generators, and (2) the Long Ashton Research Station weather generator (LARS-WG), which utilizes only stochastic weather generators. Both LARS-WG and SDSM are feasible tools for quantifying the effects of climate change at a local scale. SDSM yields a better performance than LARS-WG, although it slightly underestimates the wet and dry spell lengths. While the two models do not provide identical results, the time series generated by both methods indicate a general increasing trend in mean daily temperature. The trends in daily rainfall, however, differ, with SDSM giving a relatively higher change in annual rainfall than LARS-WG.
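    The stochastic weather generator component shared by both models can be sketched as a two-state Markov chain for wet/dry days, with wet-day amounts drawn from a gamma distribution. All parameters below are illustrative, not fitted to any station:

```python
import random

def generate_rainfall(days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=6.0, seed=1):
    """Minimal stochastic weather generator (illustrative parameters):
    p_wd = P(wet | previous day dry), p_ww = P(wet | previous day wet);
    wet-day rainfall (mm) ~ Gamma(shape, scale)."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rain = generate_rainfall(365)
wet_days = sum(1 for r in rain if r > 0)
```

    Because p_ww > p_wd, wet days cluster into spells; the wet and dry spell-length statistics mentioned in the abstract fall directly out of these two transition probabilities.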

  19. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of this approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities, such as positron emission tomography, could be supported in the future.

  20. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry.

    PubMed

    Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-01

    The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of this approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities, such as positron emission tomography, could be supported in the future.
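    The simulate-once, weight-and-aggregate step described in both TestDose records can be sketched as a weighted sum of per-compartment projections. The compartment names, projection values and activities below are illustrative stand-ins:

```python
import numpy as np

def aggregate_projections(compartment_projs, activities):
    """Each compartment is simulated once at unit activity; its projection is
    weighted by the pharmacokinetic activity at the requested time-point and
    the weighted projections are summed into the final image."""
    image = np.zeros_like(next(iter(compartment_projs.values())), dtype=float)
    for name, proj in compartment_projs.items():
        image += activities[name] * np.asarray(proj, dtype=float)
    return image

projs = {"liver": [[1.0, 0.0]], "kidney": [[0.0, 1.0]]}   # unit-activity projections
pk = {"liver": 3.0, "kidney": 0.5}                        # activity at this time-point
image = aggregate_projections(projs, pk)
```

    Because the expensive Monte Carlo transport is linear in source activity, re-weighting cached projections lets one image be assembled per time-point without re-running GATE.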
