Sample records for architecture flow unit

  1. Border information flow architecture

    DOT National Transportation Integrated Search

    2006-04-01

    This brochure describes the Border Information Flow Architecture (BIFA). The Transportation Border Working Group, a bi-national group that works to enhance coordination and planning between the United States and Canada, identified collaboration on th...

  2. Mexican Influence on Contemporary Art and Architecture of the United States: A Model Lesson for Cross Cultural Understanding at the Secondary Level.

    ERIC Educational Resources Information Center

    Finer, Neal

    In this model lesson, secondary students test the hypothesis that Mexican achievements have widely influenced art and architecture in the United States as a result of the cultural flow and exchange between the two nations. The lesson is designed to be presented in two to three class periods. To determine the validity of the hypothesis, students…

  3. Complex Processes from Dynamical Architectures with Time-Scale Hierarchy

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor

    2011-01-01

    The idea that complex motor, perceptual, and cognitive behaviors are composed of smaller units, which are somehow brought into a meaningful relation, permeates the biological and life sciences. However, no principled framework defining the constituent elementary processes has been developed to date. Consequently, functional configurations (or architectures) relating elementary processes and external influences are mostly piecemeal formulations suitable to particular instances only. Here, we develop a general dynamical framework for distinct functional architectures characterized by the time-scale separation of their constituents and evaluate their efficiency. To this end, we build on the (phase) flow of a system, which prescribes the temporal evolution of its state variables. The phase flow topology allows for the unambiguous classification of qualitatively distinct processes, which we consider to represent the functional units or modes within the dynamical architecture. Using the example of a composite movement, we illustrate how different architectures can be characterized by their degree of time-scale separation between the internal elements of the architecture (i.e., the functional modes) and external interventions. We reveal a tradeoff in the interactions between internal and external influences, which offers a theoretical justification for the efficient composition of complex processes out of non-trivial elementary processes or functional modes. PMID:21347363
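    The time-scale separation between fast functional modes and slower external influences can be illustrated with a toy fast-slow pair (a two-variable example invented for illustration, not the authors' model):

```python
# Toy fast-slow system: a fast "functional mode" x relaxes toward a slowly
# decaying external drive s, so the two variables evolve on separated
# time scales. All parameters are illustrative assumptions.
dt = 1e-3
tau_fast, tau_slow = 0.01, 1.0
x, s = 0.0, 1.0
for _ in range(5000):                  # forward-Euler integration to t = 5
    s += dt * (-s / tau_slow)          # slow external influence decays gently
    x += dt * ((s - x) / tau_fast)     # fast mode tracks the slow drive
print(x, s, abs(x - s))
```

    After a brief initial transient the fast variable shadows the slow one, the adiabatic picture that underlies a separation-of-time-scales architecture.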

  4. Performance study of a data flow architecture

    NASA Technical Reports Server (NTRS)

    Adams, George

    1985-01-01

    Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.

  5. Simulating the heterogeneity in braided channel belt deposits: 2. Examples of results and comparison to natural deposits

    NASA Astrophysics Data System (ADS)

    Guin, Arijit; Ramanathan, Ramya; Ritzi, Robert W.; Dominic, David F.; Lunt, Ian A.; Scheibe, Timothy D.; Freedman, Vicky L.

    2010-04-01

    In part 1 of this paper (Ramanathan et al., 2010b) we presented a methodology and a code for modeling the hierarchical sedimentary architecture in braided channel belt deposits. Here in part 2, the code was used to create a digital model of this architecture and the corresponding spatial distribution of permeability. The simulated architecture was compared to the real stratal architecture observed in an abandoned channel belt. The comparisons included assessments of similarity which were both qualitative and quantitative. The qualitative comparisons show that the geometries of unit types within the synthetic deposits are generally consistent with field observations. The unit types in the synthetic deposits would generally be recognized as representing their counterparts in nature, including cross stratasets, lobate and scroll bar deposits, and channel fills. Furthermore, the synthetic deposits have a hierarchical spatial relationship among these units consistent with observations from field exposures and geophysical images. In quantitative comparisons the proportions and the length, width, and height of unit types at different scales, across all levels of the stratal hierarchy, compare well between the synthetic and the natural deposits. A number of important attributes of the synthetic channel belt deposits are shown to be influenced by more than one level within the hierarchy of stratal architecture. First, the high-permeability open-framework gravels connected across all levels and thus formed preferential flow pathways; open-framework gravels are known to form preferential flow pathways in natural channel belt deposits. The nature of a connected cluster changed across different levels of the stratal hierarchy, and as a result of the geologic structure, the connectivity occurs at proportions of open-framework gravels below the theoretical percolation threshold for random infinite media. 
Second, when the channel belt model was populated with permeability distributions by lowest-level unit type, the composite permeability semivariogram contained structures that were identifiable at more than one scale, and each of these structures could be directly linked to unit types of different scales existing at different levels within the hierarchy of strata. These collective results are encouraging with respect to our goal that this model be relevant for testing ideas in future research on flow and transport in aquifers and reservoirs with multiscale heterogeneity.
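    The idea that a composite semivariogram contains identifiable structures at more than one scale can be reproduced in miniature with a one-dimensional toy example (the synthetic field, its two correlation scales, and the noise level are all invented for illustration, not output of the authors' code):

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semivariogram: gamma(h) = 0.5 * mean((z(x+h) - z(x))^2)."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

rng = np.random.default_rng(0)
x = np.arange(500)
# synthetic log-permeability with nested structure at two scales plus noise
field = (np.sin(2 * np.pi * x / 200)           # large-scale structure
         + 0.5 * np.sin(2 * np.pi * x / 20)    # small-scale structure
         + 0.2 * rng.standard_normal(500))     # unstructured noise
gamma = semivariogram(field, 100)
print(gamma[:5])
```

    The short-scale component reaches its sill within a few tens of lags while the long-scale component keeps rising, so the composite curve shows structure at both separations.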

  6. Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG

    PubMed Central

    Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.

    2015-01-01

    We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573
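    The exponential-growth arithmetic behind IEG can be sketched in a few lines (the starting mass below is an illustrative assumption chosen so that four doublings land near the reported ~4,000 g/mol; end-group chemistry is neglected):

```python
# Each Flow-IEG iteration couples two identical oligomers, roughly doubling
# molecular weight. Starting mass is an invented illustration, not the
# paper's monomer.
def mw_after_cycles(m0_g_mol, cycles):
    m = m0_g_mol
    for _ in range(cycles):
        m *= 2  # one coupling step doubles the chain length
    return m

print(mw_after_cycles(251.4, 4))  # 16x the starting mass
```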

  7. A microfluidic fuel cell with flow-through porous electrodes.

    PubMed

    Kjeang, Erik; Michel, Raphaelle; Harrington, David A; Djilali, Ned; Sinton, David

    2008-03-26

    A microfluidic fuel cell architecture incorporating flow-through porous electrodes is demonstrated. The design is based on cross-flow of aqueous vanadium redox species through the electrodes into an orthogonally arranged co-laminar exit channel, where the waste solutions provide ionic charge transfer in a membraneless configuration. This flow-through architecture enables improved utilization of the three-dimensional active area inside the porous electrodes and provides enhanced rates of convective/diffusive transport without increasing the parasitic loss required to drive the flow. Prototype fuel cells are fabricated by rapid prototyping with total material cost estimated at 2 USD/unit. Improved performance as compared to previous microfluidic fuel cells is demonstrated, including power densities at room temperature up to 131 mW cm-2. In addition, high overall energy conversion efficiency is obtained through a combination of relatively high levels of fuel utilization and cell voltage. When operated at 1 microL min-1 flow rate, the fuel cell produced 20 mW cm-2 at 0.8 V combined with an active fuel utilization of 94%. Finally, we demonstrate in situ fuel and oxidant regeneration by running the flow-through architecture fuel cell in reverse.
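    The reported operating point can be sanity-checked with the relation between power density, current density, and cell voltage (a back-of-envelope illustration, not taken from the paper's analysis):

```python
# Power density P = j * V, so the implied current density is j = P / V.
P = 20e-3          # reported power density, W/cm^2 (20 mW cm^-2)
V = 0.8            # reported cell voltage, V
j = P / V          # implied current density, A/cm^2
print(f"implied current density: {j * 1e3:.0f} mA/cm^2")  # -> 25 mA/cm^2
```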

  8. Residence times and alluvial architecture of a sediment superslug in response to different flow regimes

    NASA Astrophysics Data System (ADS)

    Moody, John A.

    2017-10-01

    A superslug was deposited in a basin in the Colorado Front Range Mountains as a consequence of an extreme flood following a wildfire disturbance in 1996. The subsequent evolution of this superslug was measured by repeat topographic surveys (31 surveys from 1996 through 2014) of 18 cross sections approximately uniformly spaced over 1500 m immediately above the basin outlet. These surveys allowed the identification within the superslug of chronostratigraphic units deposited and eroded by different geomorphic processes in response to different flow regimes. Over the time period of the study, the superslug went through aggradation, incision, and stabilization phases that were controlled by a shift in geomorphic processes from generally short-duration, episodic, large-magnitude floods that deposited new chronostratigraphic units to long-duration processes that eroded units. These phases were not contemporaneous at each channel cross section, which resulted in a complex response that preserved different chronostratigraphic units at each channel cross section having, in general, two dominant types of alluvial architecture: laminar and fragmented. Age and transit-time distributions for these two alluvial architectures evolved with time since the extreme flood. Because of the complex shape of the distributions they were best modeled by two-parameter Weibull functions. The Weibull scale parameter approximated the median age of the distributions, and the Weibull shape parameter generally had a linear relation that increased with time since the extreme flood. Additional results indicated that deposition of new chronostratigraphic units can be represented by a power-law frequency distribution, and that the erosion of units decreases with depth of burial to a limiting depth.
These relations can be used to model other situations with different flow regimes where vertical aggradation and incision are dominant processes, to predict the residence time of possible contaminated sediment stored in channels or on floodplains, and to provide insight into the interpretation of recent or ancient fluvial deposits.
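    Fitting two-parameter Weibull functions to age or transit-time samples, as described above, can be sketched with a standard maximum-likelihood fixed-point iteration (a generic numpy sketch; the sample below is synthetic with known parameters, not the survey data):

```python
import numpy as np

def fit_weibull_mle(x, iters=500):
    """Two-parameter Weibull fit (shape k, scale lam) via the standard
    damped fixed-point iteration for the MLE shape equation (numpy only)."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * lx) / np.sum(xk) - lx.mean())
        k = 0.5 * (k + k_new)          # damping for robust convergence
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

# synthetic residence times with known parameters (illustrative data only)
rng = np.random.default_rng(1)
ages = 5.0 * rng.weibull(1.8, size=5000)   # true scale 5, true shape 1.8
k, lam = fit_weibull_mle(ages)
median_age = lam * np.log(2.0) ** (1.0 / k)
print(k, lam, median_age)
```

    The last line echoes the paper's observation that the scale parameter tracks the median: a Weibull median is lam*(ln 2)^(1/k), which stays close to lam when the shape parameter is near or above one.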

  9. Simulating the Heterogeneity in Braided Channel Belt Deposits: 2. Examples of Results and Comparison to Natural Deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guin, Arijit; Ramanathan, Ramya; Ritzi, Robert W.

    In Part 1 of this series we presented a methodology and a code for modeling the hierarchical sedimentary architecture in braided channel belt deposits. Here, in Part 2, the code was used to create a digital model of this architecture, and the corresponding spatial distribution of permeability. The simulated architecture was compared to the real stratal architecture observed in an abandoned channel belt of the Sagavanirktok River, Alaska by Lunt et al. (2004). The comparisons included assessments of similarity which were both qualitative and quantitative. From the qualitative comparisons we conclude that a synthetic deposit created by the code has unit types, at each level, with a geometry which is generally consistent with the geometry of unit types observed in the field. The digital unit types would generally be recognized as representing their counterparts in nature, including cross stratasets, lobate and scroll bar deposits, channel fills, etc. Furthermore, the synthetic deposit has a hierarchical spatial relationship among these units which represents how the unit types are observed in field exposures and in geophysical images. In quantitative comparisons the proportions and the length, width, and height of unit types at different scales, across all levels of the stratal hierarchy compare well between the digital and the natural deposits. A number of important attributes of the channel belt model were shown to be influenced by more than one level within the hierarchy of stratal architecture. First, the high-permeability open-framework gravels percolated at all levels and thus formed preferential flow pathways. Open framework gravels are indeed known to form preferential flow pathways in natural channel belt deposits. The nature of a percolating cluster changed across different levels of the hierarchy of stratal architecture. 
As a result of this geologic structure, the percolation occurs at proportions of open-framework gravels below the theoretical percolation threshold for random infinite media. Second, when the channel belt model was populated with permeability distributions by lowest-level unit type, the composite permeability semivariogram contained structures that were identifiable at more than one scale, and each of these structures could be directly linked to unit types of different scales existing at different levels within the hierarchy of strata. These collective results are encouraging with respect to our goal that this model be relevant as a base case in future studies for testing ideas in research addressing the upscaling problem in aquifers and reservoirs with multi-scale heterogeneity.

  10. Geomorphology and till architecture of terrestrial palaeo-ice streams of the southwest Laurentide Ice Sheet: A borehole stratigraphic approach

    NASA Astrophysics Data System (ADS)

    Norris, Sophie L.; Evans, David J. A.; Cofaigh, Colm Ó.

    2018-04-01

    A multidimensional study, utilising geomorphological mapping and the analysis of regional borehole stratigraphy, is employed to elucidate the regional till architecture of terrestrial palaeo-ice streams relating to the Late Wisconsinan southwest Laurentide Ice Sheet. Detailed mapping over a 57,400 km2 area of southwestern Saskatchewan confirms previous reconstructions of a former southerly flowing ice stream, demarcated by an 800 km long corridor of megaflutes and mega-scale glacial lineations (Ice Stream 1) and cross-cut by three formerly southeast-flowing ice streams (Ice Streams 2A, B and C). Analysis of the lithologic and geophysical characteristics of 197 borehole samples within these corridors reveals 17 stratigraphic units comprising multiple tills and associated stratified sediments overlying preglacial deposits, the till thicknesses varying with both topography and distance down corridor. Reconciling this regional till architecture with the surficial geomorphology reveals that surficial units are spatially consistent with a dynamic switch in flow direction, recorded by the cross-cutting corridors of Ice Streams 1, 2A, B and C. The general thickening of tills towards lobate ice stream margins is consistent with subglacial deformation theory, and variations in this pattern on a more localised scale are attributed to influences of subglacial topography, including thickening at buried valley margins, thinning over uplands, and thickening in overridden ice-marginal landforms.

  11. Architecture and emplacement of flood basalt flow fields: case studies from the Columbia River Basalt Group, NW USA

    NASA Astrophysics Data System (ADS)

    Vye-Brown, C.; Self, S.; Barry, T. L.

    2013-03-01

    The physical features and morphologies of collections of lava bodies emplaced during single eruptions (known as flow fields) can be used to understand flood basalt emplacement mechanisms. Characteristics and internal features of lava lobes and whole flow field morphologies result from the forward propagation, radial spread, and cooling of individual lobes and are used as a tool to understand the architecture of extensive flood basalt lavas. The features of three flood basalt flow fields from the Columbia River Basalt Group are presented, including the Palouse Falls flow field, a small (8,890 km2, ~190 km3) unit by common flood basalt proportions, which is visualized in three dimensions. The architecture of the Palouse Falls flow field is compared to the complex Ginkgo and more extensive Sand Hollow flow fields to investigate the degree to which simple emplacement models represent the style, as well as the spatial and temporal development, of flow fields. Evidence from each flow field supports emplacement by inflation as the predominant mechanism producing thick lobes. Inflation enables existing lobes to transmit lava to form new lobes, thus extending the advance and spread of lava flow fields. Minimum emplacement timescales calculated for each flow field are 19.3 years for Palouse Falls, 8.3 years for Ginkgo, and 16.9 years for Sand Hollow. Simple flow fields can be traced from vent to distal areas and an emplacement sequence visualized, but those with multiple-layered lobes present a degree of complexity that makes lava pathways and emplacement sequences more difficult to identify.

  12. Multi-scale evaporator architectures for geothermal binary power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Nejad, Ali; Klett, James William

    2016-01-01

    In this paper, novel geometries of heat exchanger architectures are proposed for evaporators that are used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both macro-scale and micro-scale, which are designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several factors were considered in order to fully assess the performance of the new heat exchangers, such as weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.

  13. A proposal for an SDN-based SIEPON architecture

    NASA Astrophysics Data System (ADS)

    Khalili, Hamzeh; Sallent, Sebastià; Piney, José Ramón; Rincón, David

    2017-11-01

    Passive Optical Network (PON) elements such as Optical Line Terminal (OLT) and Optical Network Units (ONUs) are currently managed by inflexible legacy network management systems. Software-Defined Networking (SDN) is a new networking paradigm that improves the operation and management of networks. In this paper, we propose a novel architecture, based on the SDN concept, for Ethernet Passive Optical Networks (EPON) that includes the Service Interoperability standard (SIEPON). In our proposal, the OLT is partially virtualized and some of its functionalities are allocated to the core network management system, while the OLT itself is replaced by an OpenFlow (OF) switch. A new MultiPoint MAC Control (MPMC) sublayer extension based on the OpenFlow protocol is presented. This would allow the SDN controller to manage and enhance the resource utilization, flow monitoring, bandwidth assignment, quality-of-service (QoS) guarantees, and energy management of the optical network access, to name a few possibilities. The OpenFlow switch is extended with synchronous ports to retain the time-critical nature of the EPON network. OpenFlow messages are also extended with new functionalities to implement the concept of EPON Service Paths (ESPs). Our simulation-based results demonstrate the effectiveness of the new architecture, while retaining a similar (or improved) performance in terms of delay and throughput when compared to legacy PONs.

  14. Counterflow heat exchanger with core and plenums at both ends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bejan, A.; Alalaimi, M.; Lorente, S.

    2016-04-22

    Here, this paper illustrates the morphing of flow architecture toward greater performance in a counterflow heat exchanger. The architecture consists of two plenums with a core of counterflow channels between them. Each stream enters one plenum and then flows in a channel that travels the core and crosses the second plenum. The volume of the heat exchanger is fixed while the volume fraction occupied by each plenum is variable. Performance is driven by two objectives, simultaneously: low flow resistance and low thermal resistance. The analytical and numerical results show that the overall flow resistance is the lowest when the core is absent, and each plenum occupies half of the available volume and is oriented in counterflow with the other plenum. In this configuration, the thermal resistance also reaches its lowest value. These conclusions hold for fully developed laminar flow and turbulent flow through the core. The curve for effectiveness vs. number of heat transfer units (Ntu) is steeper (when Ntu < 1) than the classical curves for counterflow and crossflow.
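    For reference, the classical effectiveness-NTU relation for counterflow, against which the abstract's steeper curve is compared, can be evaluated directly (textbook form only; this is not the paper's plenum-architecture result):

```python
import math

# Classical counterflow effectiveness-NTU relation:
#   eps = (1 - exp(-Ntu*(1-Cr))) / (1 - Cr*exp(-Ntu*(1-Cr)))
def effectiveness_counterflow(ntu, cr):
    """cr = Cmin/Cmax heat-capacity-rate ratio, 0 <= cr <= 1."""
    if cr == 1.0:                       # balanced-flow limit of the formula
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

for ntu in (0.5, 1.0, 2.0, 5.0):
    print(ntu, round(effectiveness_counterflow(ntu, 1.0), 3))
```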

  15. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  16. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  17. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    PubMed Central

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 MPixels/s and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, differing in numerical precision, working frequency, and accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
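    The iterative update that such a pipeline implements can be sketched in a few lines of numpy (a minimal single-scale software sketch with assumed parameters, nothing like the paper's optimized FPGA design):

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    """Minimal single-scale Horn-Schunck optical flow (numpy sketch)."""
    # spatial gradients of the first frame and the temporal difference
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(iters):
        # neighbourhood averages of the current flow estimate (4-neighbour mean)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Horn-Schunck update from the linearised brightness-constancy term
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

# synthetic check: a small square shifted one pixel to the right
a = np.zeros((32, 32)); a[14:18, 14:18] = 1.0
b = np.roll(a, 1, axis=1)
u, v = horn_schunck(a, b, alpha=0.5, iters=200)
print(round(float(u[14:18, 13:20].mean()), 2))
```

    The recovered flow is positive in the horizontal component around the moving square and near zero vertically, as expected for a rightward shift.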

  18. Efficient hardware implementation of the Horn-Schunck algorithm for high-resolution real-time dense optical flow sensor.

    PubMed

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-02-12

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification achieves a data throughput of 175 MPixels/s and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, differing in numerical precision, working frequency, and accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.

  19. ITS physical architecture.

    DOT National Transportation Integrated Search

    2002-04-01

    The Physical Architecture identifies the physical subsystems, and the architecture flows between subsystems, that will implement the processes and support the data flows of the ITS Logical Architecture. The Physical Architecture further identifies the sys...

  20. Internal architecture, permeability structure, and hydrologic significance of contrasting fault-zone types

    NASA Astrophysics Data System (ADS)

    Rawling, Geoffrey C.; Goodwin, Laurel B.; Wilson, John L.

    2001-01-01

    The Sand Hill fault is a steeply dipping, large-displacement normal fault that cuts poorly lithified Tertiary sediments of the Albuquerque basin, New Mexico, United States. The fault zone does not contain macroscopic fractures; the basic structural element is the deformation band. The fault core is composed of foliated clay flanked by structurally and lithologically heterogeneous mixed zones, in turn flanked by damage zones. Structures present within these fault-zone architectural elements are different from those in brittle faults formed in lithified sedimentary and crystalline rocks that do contain fractures. These differences are reflected in the permeability structure of the Sand Hill fault. Equivalent permeability calculations indicate that large-displacement faults in poorly lithified sediments have little potential to act as vertical-flow conduits and have a much greater effect on horizontal flow than faults with fractures.
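    The contrast between along-fault and cross-fault flow follows from the standard layered-media bounds on equivalent permeability (generic textbook relations with invented layer values, not the paper's calculations): flow across the layers is controlled by the harmonic mean, so a thin low-permeability clay core dominates, while flow parallel to the layers follows the arithmetic mean.

```python
import numpy as np

# Illustrative layer permeabilities (m^2) and thicknesses (m); values invented
# to stand in for host sand, a clay fault core, and a mixed zone.
k = np.array([1e-12, 1e-17, 2e-13])
t = np.array([2.0, 0.3, 1.0])

k_parallel = np.average(k, weights=t)      # arithmetic mean: flow along layers
k_across = t.sum() / np.sum(t / k)         # harmonic mean: flow across layers
print(f"parallel: {k_parallel:.2e} m^2, across: {k_across:.2e} m^2")
```

    The harmonic mean sits orders of magnitude below the arithmetic mean, which is why a steeply dipping fault with a clay core impedes horizontal (fault-normal) flow far more than it conducts vertical flow.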

  21. Residence times and alluvial architecture of a sediment superslug in response to different flow regimes

    USGS Publications Warehouse

    Moody, John A.

    2017-01-01

    A superslug was deposited in a basin in the Colorado Front Range Mountains as a consequence of an extreme flood following a wildfire disturbance in 1996. The subsequent evolution of this superslug was measured by repeat topographic surveys (31 surveys from 1996 through 2014) of 18 cross sections approximately uniformly spaced over 1500 m immediately above the basin outlet. These surveys allowed the identification within the superslug of chronostratigraphic units deposited and eroded by different geomorphic processes in response to different flow regimes. Over the time period of the study, the superslug went through aggradation, incision, and stabilization phases that were controlled by a shift in geomorphic processes from generally short-duration, episodic, large-magnitude floods that deposited new chronostratigraphic units to long-duration processes that eroded units. These phases were not contemporaneous at each channel cross section, which resulted in a complex response that preserved different chronostratigraphic units at each channel cross section having, in general, two dominant types of alluvial architecture: laminar and fragmented. Age and transit-time distributions for these two alluvial architectures evolved with time since the extreme flood. Because of the complex shape of the distributions they were best modeled by two-parameter Weibull functions. The Weibull scale parameter approximated the median age of the distributions, and the Weibull shape parameter generally had a linear relation that increased with time since the extreme flood. Additional results indicated that deposition of new chronostratigraphic units can be represented by a power-law frequency distribution, and that the erosion of units decreases with depth of burial to a limiting depth. 
These relations can be used to model other situations with different flow regimes where vertical aggradation and incision are dominant processes, to predict the residence time of possible contaminated sediment stored in channels or on floodplains, and to provide insight into the interpretation of recent or ancient fluvial deposits.
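    The Weibull treatment described above can be sketched numerically: fit a two-parameter Weibull (location fixed at zero) to a set of unit ages and compare the median implied by the fitted scale parameter with the sample median. The ages below are synthetic stand-ins, not the survey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for sediment-unit ages (years since the extreme flood).
ages = 8.0 * rng.weibull(1.5, size=500)  # true shape 1.5, true scale 8

# Two-parameter Weibull fit: location fixed at zero, as in the abstract.
shape, loc, scale = stats.weibull_min.fit(ages, floc=0)

# For a Weibull, median = scale * ln(2)**(1/shape), so the fitted scale
# approximates the median age of the distribution.
median_model = scale * np.log(2.0) ** (1.0 / shape)
median_data = np.median(ages)
print(f"shape {shape:.2f}, scale {scale:.2f}, "
      f"model median {median_model:.2f}, sample median {median_data:.2f}")
```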

  2. Real-time blood flow visualization using the graphics processing unit

    NASA Astrophysics Data System (ADS)

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and built into a LabVIEW Virtual Instrument (VI) interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, fluid flow through an in vitro phantom, and real-time LSI during laser surgery of a port wine stain birthmark.
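    The per-pixel computation underneath such a pipeline (independent of the CUDA implementation) can be sketched on the CPU. A common simplified definition takes the local speckle contrast K = σ/μ over a sliding window and sets SFI = 1/K²; the window size, the 1/K² form, and the synthetic test images below are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_flow_index(raw, window=7):
    """Simplified SFI map: local contrast K = sigma/mu over a sliding
    window, SFI = 1/K**2 (faster flow blurs speckle, lowering K)."""
    img = raw.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    contrast = np.sqrt(var) / np.maximum(mean, 1e-12)
    return 1.0 / np.maximum(contrast, 1e-6) ** 2

# Synthetic speckle: gamma-distributed intensities; a larger gamma shape
# mimics the contrast reduction produced by moving scatterers.
rng = np.random.default_rng(1)
static = 100.0 * rng.gamma(4.0, 1.0 / 4.0, (64, 64))     # high contrast
flowing = 100.0 * rng.gamma(64.0, 1.0 / 64.0, (64, 64))  # low contrast
sfi_static = float(np.median(speckle_flow_index(static)))
sfi_flowing = float(np.median(speckle_flow_index(flowing)))
print(f"median SFI: static {sfi_static:.1f}, flowing {sfi_flowing:.1f}")
```

    The computation is embarrassingly parallel over pixels, which is why it maps naturally onto a GPU kernel.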

  3. Real-time blood flow visualization using the graphics processing unit

    PubMed Central

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and built into a LabVIEW Virtual Instrument (VI) interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ∼10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, fluid flow through an in vitro phantom, and real-time LSI during laser surgery of a port wine stain birthmark. PMID:21280915

  4. Analysis of Employment Flow of Landscape Architecture Graduates in Agricultural Universities

    ERIC Educational Resources Information Center

    Yao, Xia; He, Linchun

    2012-01-01

    A statistical analysis of employment flow of landscape architecture graduates was conducted on employment data for graduates majoring in landscape architecture from 2008 to 2011. The employment flows examined included admission to graduate study, industry direction, and regional distribution. Then, the features of talent flow and factors…

  5. Launch Vehicle Control Center Architectures

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom

    2014-01-01

    This analysis is a survey of control center architectures for the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities across the phases of operations: (a) launch vehicles in the international community vary greatly in configuration and process; (b) each launch site has a unique processing flow based on its specific configuration; (c) launch and flight operations are managed through a set of control centers associated with each launch site; however, flight operations may be run from a different control center than launch operations; and (d) the engineering support centers are primarily located at the design center, with a small engineering support team at the launch site.

  6. SAR processing on the MPP

    NASA Technical Reports Server (NTRS)

    Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.

    1981-01-01

    The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing through the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real-time processing capability can be realized via a multiple-ARU, multiple-SM configuration.

  7. Microchannel Distillation of JP-8 Jet Fuel for Sulfur Content Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Feng; Stenkamp, Victoria S.; TeGrotenhuis, Ward E.

    2006-09-16

    In microchannel-based distillation processes, thin vapor and liquid films are contacted in small channels where mass transfer is diffusion-limited. The microchannel architecture enables improvements in distillation processes: a shorter height equivalent of a theoretical plate (HETP), and therefore a more compact distillation unit, can be achieved. A microchannel distillation unit was used to produce a light fraction of JP-8 fuel with reduced sulfur content for use as feed to produce fuel-cell-grade hydrogen. The HETP of the microchannel unit is discussed, as well as the effects of process conditions such as feed temperature, flow rate, and reflux ratio.
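    How HETP relates unit size to separation performance can be illustrated with the textbook Fenske relation for the minimum number of theoretical stages; the split purities, relative volatility, and channel length below are hypothetical, not values from the JP-8 study:

```python
import math

def fenske_min_stages(x_dist, x_bot, alpha):
    """Fenske equation: minimum theoretical stages at total reflux for
    a binary separation with relative volatility alpha."""
    return (math.log((x_dist / (1.0 - x_dist)) * ((1.0 - x_bot) / x_bot))
            / math.log(alpha))

# Hypothetical light/heavy split and contact length, for illustration only.
n_min = fenske_min_stages(x_dist=0.95, x_bot=0.05, alpha=2.5)
height_cm = 5.0                 # assumed microchannel contact length
hetp_cm = height_cm / n_min     # shorter HETP -> more compact unit
print(f"N_min = {n_min:.1f} stages, HETP = {hetp_cm:.2f} cm")
```

    Achieving several theoretical stages in a few centimeters of contact length (sub-centimeter HETP) is the sense in which the microchannel architecture makes the unit compact.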

  8. Fault and fracture patterns in low porosity chalk and their potential influence on sub-surface fluid flow-A case study from Flamborough Head, UK

    NASA Astrophysics Data System (ADS)

    Sagi, D. A.; De Paola, N.; McCaffrey, K. J. W.; Holdsworth, R. E.

    2016-10-01

    To better understand fault zone architecture and fluid flow in mesoscale fault zones, we studied normal faults in chalks with displacements up to 20 m at two representative localities in Flamborough Head (UK). At the first locality, the chalk contains cm-thick interlayered marl horizons, whereas at the second locality marl horizons are largely absent. Cm-scale displacement faults at both localities display ramp-flat geometries. Mesoscale fault patterns in the marl-free chalk, including a larger displacement fault (20 m) containing multiple fault strands, show widespread evidence of hydraulically brecciated rocks, whereas clay smears along fault planes and injected into open fractures, and a simpler fault zone architecture, are observed where marl horizons are present. Hydraulic brecciation and veins observed in the marl-free chalk units suggest that mesoscale fault patterns acted as localized conduits allowing widespread fluid flow. On the other hand, mesoscale fault patterns developed in highly fractured chalk containing interlayered marl horizons can act as localized barriers to fluid flow, due to the sealing effect of clay smears along fault planes and introduced into open fractures in the damage zone. Supporting our field observations, quantitative analyses carried out on the large faults suggest a simple fault zone in the chalk with marl units, with fracture density/connectivity decreasing towards the protolith. Where marls are absent, density is high throughout the fault zone, while connectivity is high only in domains nearest the fault core. We suggest that fluid flow in fractured chalk is especially influenced by the presence of marl: when present, it can smear onto fault planes, forming localised barriers. Fluid flow along relatively large displacement faults is additionally controlled by the complexity of the fault zone, especially the size/geometry of weakly and intensely connected damage zone domains.

  9. Efficient parallel architecture for highly coupled real-time linear system applications

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo

    1988-01-01

    A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.

  10. Reservoir compartmentalization of deep-water Intra Qua Iboe sand (Pliocene), Edop field, offshore Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermance, W.E.; Olaifa, J.O.; Shanmugam, G.

    An integration of 3-D seismic and sedimentological information provides a basis for recognizing and mapping individual flow units within the Intra Qua Iboe (IQI) reservoir (Pliocene), Edop Field, offshore Nigeria. Core examination shows the following depositional facies: A-Sandy slump/mass flow, B-Muddy slump/mass flow, C-Bottom-current reworking, D-Non-channelized turbidity currents, E-Channelized (coalesced) turbidity currents, F-Channelized (isolated) turbidity currents, G-Pelagic/hemipelagic, H-Levee, I-Reworked slope, J-Wave-dominated, and K-Tide-dominated facies. With the exception of facies J and K, all of these facies are of deep-water affinity. The IQI was deposited in an upper-slope environment in close proximity to the shelf edge. Through time, as the shelf edge migrated seaward, deposition began with a channel-dominated deep-water system (IQI 1 and 2) and progressed through a slump/debris-flow-dominated deep-water system (IQI 3, the principal reservoir) to a tide- and wave-dominated shallow-water system (IQI 4). Compositional and textural similarities between the deep-water facies result in similar log motifs. Furthermore, these depositional facies are not readily apparent as distinct seismic facies. Deep-water facies A, D, E, and F are reservoir facies, whereas facies B, C, G, H, and I are non-reservoir facies. Facies G, however, is useful as a seismically mappable event throughout the study area. Mapping of these non-reservoir events provides the framework for understanding gross reservoir architecture. This study has resulted in seven defined reservoir units within the IQI, which serve as the architectural framework for ongoing reservoir characterization.

  11. Life and evolution as physics

    PubMed Central

    Bejan, Adrian

    2016-01-01

    ABSTRACT What is evolution and why does it exist in the biological, geophysical and technological realms — in short, everywhere? Why is there a time direction — a time arrow — in the changes we know are happening every moment and everywhere? Why is the present different from the past? These are questions of physics, about everything, not just biology. The answer is that nothing lives, flows and moves unless it is driven by power. Physics sheds light on the natural engines that produce the power destroyed by the flows, and on the free morphing that leads to flow architectures naturally and universally. There is a unifying tendency across all domains to evolve into flow configurations that provide greater access for movement. This tendency is expressed as the constructal law of evolutionary flow organization everywhere. Here I illustrate how this law of physics accounts for and unites the life and evolution phenomena throughout nature, animate and inanimate. PMID:27489579

  12. Non-invasive measurement of pulse wave velocity using transputer-based analysis of Doppler flow audio signals.

    PubMed

    Stewart, W R; Ramsey, M W; Jones, C J

    1994-08-01

    A system for the measurement of arterial pulse wave velocity is described. A personal computer (PC) plug-in transputer board is used to process the audio signals from two pocket Doppler ultrasound units. The transputer is used to provide a set of bandpass digital filters on two channels. The times of excursion of power through thresholds in each filter are recorded and used to estimate the onset of systolic flow. The system does not require an additional spectrum analyser and can work in real time. The transputer architecture provides for easy integration into any wider physiological measurement system.
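    The onset-detection idea above reduces to finding when each channel's signal first rises through a threshold and converting the inter-site delay into a velocity. A minimal single-band sketch (the real system uses a bank of bandpass digital filters per channel; the probe separation and synthetic waveforms here are assumptions):

```python
import numpy as np

def systole_onset(signal, fs, threshold_frac=0.2):
    """Time (s) at which the signal first exceeds a fraction of its peak;
    a single-band stand-in for the filter-bank threshold detection."""
    thr = threshold_frac * signal.max()
    return np.argmax(signal > thr) / fs

fs = 1000.0                               # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic half-wave flow pulse starting at t = 0.1 s.
pulse = np.clip(np.sin(2 * np.pi * 2.0 * (t - 0.1)), 0.0, None) * (t >= 0.1)
proximal = pulse
distal = np.roll(pulse, 50)               # arrives 50 ms later downstream

separation_m = 0.5                        # probe separation (hypothetical)
dt = systole_onset(distal, fs) - systole_onset(proximal, fs)
pwv = separation_m / dt
print(f"transit time {dt * 1000:.0f} ms, PWV {pwv:.1f} m/s")
```

    Pulse wave velocity is then simply the probe separation divided by the foot-to-foot transit time between the two sites.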

  13. Launch Vehicle Control Center Architectures

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom

    2014-01-01

    Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. Some architectures also include an engineering support center, which may be co-located with either the launch or flight control center or situated in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles, including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure, while differences in functional distribution also exist. The driving functions that lead to these variations are considered, and a model of control center architectures is proposed which accommodates both the commonalities and the variations.

  14. Coordinated traffic incident management using the I-Net embedded sensor architecture

    NASA Astrophysics Data System (ADS)

    Dudziak, Martin J.

    1999-01-01

    The I-Net intelligent embedded sensor architecture enables the reconfigurable construction of wide-area remote sensing and data collection networks employing diverse processing and data acquisition modules communicating over thin-server/thin-client protocols. Adapted initially for operation on mobile remotely-piloted vehicle (RPV) platforms such as the Hornet and Ascend-I small helicopter robots, the I-Net architecture lends itself to a critical problem in the management of both spontaneous and planned traffic congestion and rerouting over major interstate thoroughfares such as the I-95 Corridor. Pre-programmed flight plans and ad hoc operator-assisted navigation of the lightweight helicopter, using an autopilot and gyroscopic stabilization augmentation units, allow daytime or nighttime over-the-horizon flights to collect and transmit real-time video imagery that may be stored or transmitted to other locations. With on-board GPS and ground-based pattern recognition capabilities to augment the standard video collection process, this approach enables traffic management and emergency response teams to plan and assist in real time in the adjustment of traffic flows in high-density or congested areas or during dangerous road conditions such as ice, snow, and hurricane storms. The I-Net architecture allows for integration of land-based and roadside sensors within a comprehensive automated traffic management system, with communications to and from an airborne or other platform to devices in the network other than human-operated desktop computers, thereby allowing more rapid assimilation of and response to critical data. Experiments have been conducted using several modified platforms and standard video and still photographic equipment. Current research and development is focused on modifying the modular instrumentation units to accommodate faster loading and reloading of equipment onto the RPV, extending the I-Net architecture to enable RPV-to-RPV signaling and control, and refining safety and emergency mechanisms to handle RPV mechanical failure during flight.

  15. Sedimentology and architecture of De Geer moraines in the western Scottish Highlands, and implications for grounding-line glacier dynamics

    NASA Astrophysics Data System (ADS)

    Golledge, Nicholas R.; Phillips, Emrys

    2008-07-01

    Sedimentary exposures in moraines in a Scottish Highland valley (Glen Chaorach) reveal stacked sequences of bedded and laminated silt, sand and gravel, interspersed or capped with diamicton units. In four examples, faults and folds indicate deformation by glaciotectonism and syndepositional loading. We propose that these sediments were laid down in an ice-dammed lake close to the last ice margin to occupy this glen. Individual units within cross-valley De Geer moraine ridges are interpreted by comparison with examples from similar environments elsewhere: stratified diamictons containing laminated or bedded lenses are interpreted as subaqueous ice-marginal debris-flow deposits; massive fine-grained deposits as hyperconcentrated flow deposits; and massive gravel units as high-density debris-flow deposits. Using an allostratigraphic approach, we argue that glaciotectonically deformed coarsening-upward sand and gravel sequences that culminate in deposition of subglacial diamicton represent glacier advances into the ice-marginal lake, whereas undisturbed cross-bedded sand and gravel reflects channel or fan deposits laid down during glacier retreat. A flat terrace of bedded sand and gravel at the northern end of Glen Chaorach is interpreted as subaerial glaciofluvial outwash. On the basis of these inferences we propose the following three-stage deglacial event chronology for Glen Chaorach. During glacier recession, ice separation and intra-lobe ponding first led to subaqueous deposition of sorted and unsorted facies. Subsequent glacier stabilisation and ice-marginal oscillation produced glaciotectonic structures in the ice-marginal sediment pile and formed De Geer moraines. Finally, drainage of the ice-dammed lake allowed a subaerial ice-marginal drainage system to become established. Throughout deglaciation, deposition within the lake was characterized by abrupt changes in grain size and in the architecture of individual sediment bodies, reflecting changing delivery paths and sediment supply, and by dynamic margin oscillations typical of water-terminating glaciers.

  16. AMICAL: An aid for architectural synthesis and exploration of control circuits

    NASA Astrophysics Data System (ADS)

    Park, Inhag

    AMICAL is an architectural synthesis system for control-flow-dominated circuits. It starts from a behavioral finite state machine specification, on which scheduling and register allocation are performed, and produces an abstract architecture specification that may feed existing silicon compilers acting at the logic and register transfer levels. AMICAL consists of five main functions allowing automatic, interactive and manual synthesis, as well as combinations of these methods. These functions are a synthesizer, a graphics editor, a verifier, an evaluator, and a documentor. Automatic synthesis is achieved by algorithms that allocate both functional units, stored in an expandable user-defined library, and connections. AMICAL also allows the designer to interrupt the synthesis process at any stage and make interactive modifications via a specially designed graphics editor. The user's modifications are verified and evaluated to ensure that no design rules are broken and that any imposed constraints are still met. A documentor provides the designer with status and feedback reports from the synthesis process.

  17. Programming a hillslope water movement model on the MPP

    NASA Technical Reports Server (NTRS)

    Devaney, J. E.; Irving, A. R.; Camillo, P. J.; Gurney, R. J.

    1987-01-01

    A physically based numerical model of heat and moisture flow within a hillslope was developed on a parallel-architecture computer, as a precursor to a model of a complete catchment. Moisture flow within a catchment includes evaporation, overland flow, flow in unsaturated soil, and flow in saturated soil. Because of the empirical evidence that moisture flow in unsaturated soil is mainly in the vertical direction, flow in the unsaturated zone can be modeled as a series of one-dimensional columns. This initial version of the hillslope model includes evaporation and a single column of one-dimensional unsaturated-zone flow. This case has already been solved on an IBM 3081 computer and is now being applied to the massively parallel processor architecture, both to make extending the one-dimensional case easier and to assess the problems and benefits of using a parallel-architecture machine.
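    A single one-dimensional unsaturated-zone column of the kind described can be caricatured as explicit finite-difference diffusion with no-flux boundaries; the constant diffusivity and grid values below are illustrative assumptions, not the model's actual physics:

```python
import numpy as np

def diffuse_column(theta, diffusivity, dz, dt, steps):
    """Explicit finite-difference diffusion in one vertical column with
    no-flux (reflecting) boundaries; a caricature of unsaturated flow."""
    r = diffusivity * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable"
    th = theta.astype(np.float64).copy()
    for _ in range(steps):
        padded = np.pad(th, 1, mode="edge")   # zero-flux boundary nodes
        th = th + r * (padded[2:] - 2.0 * th + padded[:-2])
    return th

theta0 = np.zeros(50)
theta0[0] = 0.4              # wet surface node above a dry column (assumed)
theta = diffuse_column(theta0, diffusivity=1e-6, dz=0.02, dt=100.0, steps=200)
print(f"surface {theta[0]:.2e}, 0.2 m depth {theta[10]:.2e}")
```

    Because each column's update depends only on its own nodes, many such columns can be advanced independently, which is what makes the column decomposition attractive on a massively parallel machine.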

  18. Cluster: Drafting. Course: Architectural Drafting. Research Project.

    ERIC Educational Resources Information Center

    Sanford - Lee County Schools, NC.

    The sequence of 10 units is designed for use with an instructor in architectural drafting, and is also keyed to other texts. Each unit contains several task packages specifying prerequisites, rationale for learning, objectives, learning activities to be supervised by the instructor, and learning practice. The units cover: architectural lettering…

  19. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM, which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
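    The flavor of executing a data/control-flow graph in a multiprocessor environment can be conveyed with a greedy firing simulation: a node fires once all of its predecessors have finished and a processor is free. This is an illustrative sketch, not the ATAMM formalism itself:

```python
from collections import deque

def simulate_dataflow(graph, durations, n_procs):
    """Greedy firing of a data-flow task graph: a node starts as soon as
    all predecessors have finished and a processor is free. Returns the
    makespan. Illustrative only; not the ATAMM model."""
    indegree = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    proc_free = [0.0] * n_procs          # time each processor frees up
    finish = {}
    while ready:
        n = ready.popleft()
        data_ready = max((finish[p] for p, succs in graph.items()
                          if n in succs), default=0.0)
        i = min(range(n_procs), key=lambda k: proc_free[k])
        start = max(proc_free[i], data_ready)
        finish[n] = start + durations[n]
        proc_free[i] = finish[n]
        for s in graph[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return max(finish.values())

# Diamond-shaped graph: A feeds B and C, which both feed D.
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
dur = {"A": 1.0, "B": 2.0, "C": 2.0, "D": 1.0}
print(simulate_dataflow(g, dur, n_procs=2))  # B and C overlap on two processors
```

    Comparing the makespan for one versus two processors makes the concurrency exposed by the graph explicit, which is the kind of performance question such a model is meant to answer analytically.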

  20. Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor K.

    2011-01-01

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim of incorporating both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales, respectively, than that inherent to the modes. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time). PMID:21980278

  1. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    DTIC Science & Technology

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architecture all...evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will...architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  2. ITS component specification. Appendix B, Input data flows for components

    DOT National Transportation Integrated Search

    1997-11-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix defines the input data flows for each component of the Polaris Physical Architecture.

  3. ITS component specification. Appendix C, Output data flows for components

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix defines the output data flows for each component of the Polaris Physical Architecture.

  4. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  5. An approach to developing an integrated pyroprocessing simulator

    NASA Astrophysics Data System (ADS)

    Lee, Hyo Jik; Ko, Won Il; Choi, Sung Yeol; Kim, Sung Ki; Kim, In Tae; Lee, Han Soo

    2014-02-01

    Pyroprocessing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyroprocessing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyroprocessing technology and the scale-up issues of the processing equipment. Even though such a facility cannot replace a real integrated facility using spent nuclear fuel (SF), many insights can be obtained from the world's largest integrated pyroprocessing operation. In order to complement or overcome such limited test-based research, a pyroprocessing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyroprocessing simulator consisting of three tiers of models: unit process, operation, and plant-level models. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes operational behaviors as a discrete event system (DES). The plant-level model integrates the unit process and operation models with various analysis modules. Interfaces with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework for the plant-level model. The architecture for building the plant-level model was verified through a thorough review of a sample model that addresses the above engineering issues. By analyzing a combined process-and-operation model, we show that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of the pyroprocessing modelling and simulation activity at KAERI and also predicts its path forward.
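    The operation model's discrete-event character can be sketched as batches moving through unit processes in series, each unit handling one batch at a time; the two units and their process times below are hypothetical, not KAERI's:

```python
def simulate_line(batches, process_times):
    """Batches flow through unit processes in series, one batch per unit
    at a time; returns when each batch leaves the last unit. A toy
    discrete-event sketch, not the KAERI simulator."""
    unit_free = [0.0] * len(process_times)
    done = {}
    for name, arrival in batches:  # batches assumed sorted by arrival
        t = arrival
        for u, p in enumerate(process_times):
            start = max(t, unit_free[u])   # wait for the unit to free up
            t = start + p
            unit_free[u] = t
        done[name] = t
    return done

# Two hypothetical unit processes (say, electroreduction then
# electrorefining) taking 3 h and 5 h per batch.
out = simulate_line([("B1", 0.0), ("B2", 1.0)], [3.0, 5.0])
print(out)
```

    Even this toy version shows queueing: the second batch waits for the first at each unit, which is the kind of dynamic material-flow behavior a plant-level model must capture.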

  6. An approach to developing an integrated pyroprocessing simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyo Jik; Ko, Won Il; Choi, Sung Yeol

    Pyroprocessing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyroprocessing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyroprocessing technology and the scale-up issues of the processing equipment. Even though such a facility cannot replace a real integrated facility using spent nuclear fuel (SF), many insights can be obtained from the world's largest integrated pyroprocessing operation. In order to complement or overcome such limited test-based research, a pyroprocessing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyroprocessing simulator consisting of three tiers of models: unit process, operation, and plant-level models. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes operational behaviors as a discrete event system (DES). The plant-level model integrates the unit process and operation models with various analysis modules. Interfaces with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework for the plant-level model. The architecture for building the plant-level model was verified through a thorough review of a sample model that addresses the above engineering issues. By analyzing a combined process-and-operation model, we show that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of the pyroprocessing modelling and simulation activity at KAERI and also predicts its path forward.

  7. Influence of architecture and material properties on vanadium redox flow battery performance

    NASA Astrophysics Data System (ADS)

    Houser, Jacob; Clement, Jason; Pezeshki, Alan; Mench, Matthew M.

    2016-01-01

    This publication reports a design optimization study of all-vanadium redox flow batteries (VRBs), including performance testing, distributed current measurements, and flow visualization. Additionally, a computational flow simulation is used to support the conclusions made from the experimental results. This study demonstrates that optimal flow field design is not simply related to the best architecture, but is instead a more complex interplay between architecture, electrode properties, electrolyte properties, and operating conditions which combine to affect electrode convective transport. For example, an interdigitated design outperforms a serpentine design at low flow rates and with a thin electrode, accessing up to an additional 30% of discharge capacity; but a serpentine design can match the available discharge capacity of the interdigitated design by increasing the flow rate or the electrode thickness due to differing responses between the two flow fields. The results of this study should be useful to design engineers seeking to optimize VRB systems through enhanced performance and reduced pressure drop.

  8. Realizing improved patient care through human-centered operating room design: a human factors methodology for observing flow disruptions in the cardiothoracic operating room.

    PubMed

    Palmer, Gary; Abernathy, James H; Swinton, Greg; Allison, David; Greenstein, Joel; Shappell, Scott; Juang, Kevin; Reeves, Scott T

    2013-11-01

    Human factors engineering has allowed a systematic approach to the evaluation of adverse events in a multitude of high-stakes industries. This study sought to develop an initial methodology for identifying and classifying flow disruptions in the cardiac operating room (OR). Two industrial engineers with expertise in human factors workflow disruptions observed 10 cardiac operations from the moment the patient entered the OR to the time the patient left for the intensive care unit. Each disruption was fully documented on an architectural layout of the OR suite and time-stamped during each phase of surgery (preoperative [before incision], operative [incision to skin closure], and postoperative [skin closure until the patient leaves the OR]) to synchronize flow disruptions between the two observers. These disruptions were then categorized. The two observers made a total of 1,158 observations. After the elimination of duplicate observations, a total of 1,080 observations remained to be analyzed. These disruptions were classified into six categories: communication, usability, physical layout, environmental hazards, general interruptions, and equipment failures. They were further organized into 33 subcategories. The most common disruptions were related to OR layout and design (33%). By using the detailed architectural diagrams, the authors were able to demonstrate clearly for the first time the unique role that OR design and equipment layout have in the generation of physical-layout flow disruptions. Most importantly, the authors have developed a robust taxonomy to describe the flow disruptions encountered in a cardiac OR, which can be used for future research and patient safety improvements.

  9. FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification

    PubMed Central

    Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao

    2012-01-01

    This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640
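
    The weight update at the heart of the GHA (Sanger's rule) is compact enough to sketch in software before committing it to hardware. The following NumPy illustration extracts the dominant principal direction of synthetic data; the data, learning rate, and single-output configuration are invented for the example:

```python
import numpy as np

def gha_update(W, x, lr):
    """One Generalized Hebbian (Sanger's rule) update.
    W  : (m, n) weight matrix, one principal-direction estimate per row
    x  : (n,) zero-mean input sample
    dW[i] = lr * y[i] * (x - sum_{k<=i} y[k] * W[k])"""
    y = W @ x                                   # (m,) output activations
    # lower-triangular sum implements the k <= i back-projection
    x_hat = np.tril(np.ones((len(y), len(y)))) @ (y[:, None] * W)
    return W + lr * y[:, None] * (x - x_hat)

# Estimate the top principal direction of correlated 2-D data.
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 2)) @ np.diag([3.0, 0.5])  # dominant axis 0
data -= data.mean(axis=0)
W = rng.normal(scale=0.1, size=(1, 2))
for x in data:
    W = gha_update(W, x, lr=0.001)
w = W[0] / np.linalg.norm(W[0])   # converges toward +/-[1, 0]
```

The shared-circuit design in the paper exploits exactly the per-row regularity visible here: every synaptic weight vector uses the same update datapath.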

  10. Traffic and Driving Simulator Based on Architecture of Interactive Motion.

    PubMed

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination.

  11. Traffic and Driving Simulator Based on Architecture of Interactive Motion

    PubMed Central

    Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza

    2015-01-01

    This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid mesomicroscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711

  12. On the calculation of dynamic and heat loads on a three-dimensional body in a hypersonic flow

    NASA Astrophysics Data System (ADS)

    Bocharov, A. N.; Bityurin, V. A.; Evstigneev, N. M.; Fortov, V. E.; Golovin, N. N.; Petrovskiy, V. P.; Ryabkov, O. I.; Teplyakov, I. O.; Shustov, A. A.; Solomonov, Yu S.

    2018-01-01

    We consider a three-dimensional body in a hypersonic flow at zero angle of attack. Our aim is to estimate the heat and aerodynamic loads on specific body elements. We use a previously developed code to solve the coupled heat- and mass-transfer problem. The change of the surface shape is taken into account by an iterative process for the wall-material ablation. The solution is computed on a multi-graphics-processing-unit (multi-GPU) cluster. Five Mach number points are considered, namely M = 20-28. For each point we estimate the body shape after surface ablation, the heat loads on the surface, and the aerodynamic loads on the whole body and its elements. The latter is done using Gauss-type quadrature on the surface of the body. The results for the different Mach numbers are compared. We also estimate the efficiency of the Navier-Stokes code on multi-GPU and central-processing-unit architectures for the coupled heat and mass transfer problem.

  13. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  14. The role of hydrodynamics in shaping the composition and architecture of epilithic biofilms in fluvial ecosystems.

    PubMed

    Risse-Buhl, Ute; Anlanger, Christine; Kalla, Katalin; Neu, Thomas R; Noss, Christian; Lorke, Andreas; Weitere, Markus

    2017-12-15

    Previous laboratory and on-site experiments have highlighted the importance of hydrodynamics in shaping biofilm composition and architecture. The extent to which such responses to hydrodynamics occur in natural flows, under the complex interplay of environmental factors, is still unknown. In this study we investigated the effect of near-streambed turbulence, quantified as turbulent kinetic energy (TKE), on the composition and architecture of biofilms matured in two mountainous streams differing in dissolved nutrient concentrations. Across both streams, TKE significantly explained 7% and 8% of the variability in biofilm composition and architecture, respectively. However, effects were more pronounced in the nutrient-richer stream, where TKE significantly explained 12% and 3% of the variability in biofilm composition and architecture, respectively. At lower nutrient concentrations, by contrast, seasonally varying factors such as the stoichiometry of dissolved nutrients (N/P ratio) and light were more important, explaining 41% and 6% of the variability in biofilm composition and architecture, respectively. Specific biofilm features such as elongated ripples and streamers, which were observed in response to uniform and unidirectional flow in experimental settings, were not observed. Microbial biovolume and the surface area covered by the biofilm canopy increased with TKE, while biofilm thickness and porosity were not affected or decreased. These findings indicate that under natural flows, where near-bed flow velocities and turbulence intensities fluctuate in time and space, biofilms become more compact. They spread uniformly on the mineral surface as a film of densely packed coccoid cells resembling cobblestone pavement. The compact growth of biofilms appears advantageous for resisting hydrodynamic shear forces and avoiding displacement. Thus, near-streambed turbulence can be considered an important factor shaping the composition and architecture of biofilms grown under natural flows.
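
    TKE is a standard quantity: half the sum of the variances of the velocity fluctuations about the mean. A small sketch with synthetic velocity series (the magnitudes are illustrative, not the study's measurements):

```python
import numpy as np

def turbulent_kinetic_energy(u, v, w):
    """TKE per unit mass: k = 0.5 * (<u'^2> + <v'^2> + <w'^2>),
    where primes denote fluctuations about the time-mean velocity."""
    return 0.5 * sum(np.var(c) for c in (u, v, w))

rng = np.random.default_rng(1)
n = 10_000
u = 0.30 + 0.05 * rng.standard_normal(n)   # streamwise: mean 0.3 m/s
v = 0.02 * rng.standard_normal(n)          # lateral fluctuations
w = 0.03 * rng.standard_normal(n)          # vertical fluctuations
k = turbulent_kinetic_energy(u, v, w)      # ~ 0.5*(0.0025+0.0004+0.0009) m^2/s^2
```

Note that only the fluctuating part contributes: the 0.3 m/s mean drops out because `np.var` removes the mean before squaring.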

  15. Fabrication of microfluidic architectures for optimal flow rate and concentration measurement for lab on chip application

    NASA Astrophysics Data System (ADS)

    Adam, Tijjani; Hashim, U.

    2017-03-01

    Achieving optimum flow in a microchannel for sensing purposes is challenging. In this study, the fluid sample flows are optimized through the design and characterization of novel microfluidic architectures that achieve the optimal flow rate in the microchannels. The biocompatibility of the polydimethylsiloxane (Sylgard 184 silicone elastomer) polymer used to fabricate the device allows the device to serve as a universal fluidic delivery system for biomolecule sensing in various biomedical applications. The study uses the following methodological approaches: designing novel microfluidic architectures by integrating the devices on a single 4-inch silicon substrate; fabricating the designed microfluidic devices using a low-cost soft-lithography technique; and characterizing and validating the flow throughput of urine samples in the microchannels by generating pressure gradients through the devices' inlets. The characterization of the urine sample flows showed a constant flow throughout the devices.

  16. Variations in fluvial style in the Westwater Canyon Member, Morrison formation (Jurassic), San Juan basin, Colorado plateau

    USGS Publications Warehouse

    Miall, A.D.; Turner-Peterson, C. E.

    1989-01-01

    Techniques of architectural element analysis and lateral profiling have been applied to the fluvial Westwater Canyon Member of the Morrison Formation (Jurassic) in southern San Juan Basin. On a large scale, the sandstone-body architecture consists mainly of a series of tabular sandstone sheets 5-15 m thick and hundreds of meters wide, separated by thin fine-grained units. Internally these sheets contain lateral accretion surfaces and are cut by channels 10-20 m deep and at least 250 m wide. On a more detailed scale, interpretations made from large-scale photomosaics show a complex of architectural elements and bounding surfaces. Typical indicators of moderate- to high-sinuosity channels (lateral accretion deposits) coexist in the same outcrop with downstream-accreted macroform deposits that are typical of sand flats of low-sinuosity, multiple-channel rivers. Broad, deep channels with gently to steeply dipping margins were mapped in several of the outcrops by carefully tracing major bounding surfaces. Locally thick accumulations of plane-laminated and low-angle cross-laminated sandstone lithofacies suggest rapid flow, probably transitional to upper flow regime conditions. Such a depositional style is most typical of ephemeral rivers or those periodically undergoing major seasonal (or more erratic) stage fluctuations, an interpretation consistent with independent mineralogical evidence of aridity. Fining-upward sequences are rare in the project area, contrary to the descriptions of Campbell (1976). The humid alluvial fan model of Galloway (1978) cannot be substantiated and, similarly, the architectural model of Campbell (1976) requires major revision. 
Comparisons with the depositional architecture of the large Indian rivers, such as the Ganges and Brahmaputra, still seem reasonable, as originally proposed by Campbell (1976), although there is now convincing evidence for aridity and for major stage fluctuations, which differs both from those modern rivers and from Campbell's interpretation. © 1989.

  17. Estimating the system price of redox flow batteries for grid storage

    NASA Astrophysics Data System (ADS)

    Ha, Seungbum; Gallagher, Kevin G.

    2015-11-01

    Low-cost energy storage systems are required to support extensive deployment of intermittent renewable energy on the electricity grid. Redox flow batteries have potential advantages to meet the stringent cost target for grid applications as compared to more traditional batteries based on an enclosed architecture. However, the manufacturing process and therefore potential high-volume production price of redox flow batteries is largely unquantified. We present a comprehensive assessment of a prospective production process for aqueous all vanadium flow battery and nonaqueous lithium polysulfide flow battery. The estimated investment and variable costs are translated to fixed expenses, profit, and warranty as a function of production volume. When compared to lithium-ion batteries, redox flow batteries are estimated to exhibit lower costs of manufacture, here calculated as the unit price less materials costs, owing to their simpler reactor (cell) design, lower required area, and thus simpler manufacturing process. Redox flow batteries are also projected to achieve the majority of manufacturing scale benefits at lower production volumes as compared to lithium-ion. However, this advantage is offset due to the dramatically lower present production volume of flow batteries compared to competitive technologies such as lithium-ion.

  18. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2-D Gaussian (sigma approx. 35deg, r approx. 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparently sigmoidal tuning using a restricted range of stimuli (+/-40deg). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for approx. 10deg shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
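
    A 2-D Gaussian heading-tuning surface of the kind reported (sigma about 35deg) is easy to write down; the preferred heading and normalization below are arbitrary choices for illustration:

```python
import numpy as np

def heading_tuning(az, el, pref_az=0.0, pref_el=0.0, sigma=35.0):
    """Response to a heading (azimuth, elevation) in degrees;
    isotropic 2-D Gaussian with peak response normalized to 1."""
    return np.exp(-((az - pref_az)**2 + (el - pref_el)**2) / (2 * sigma**2))

r0 = heading_tuning(0.0, 0.0)    # 1.0 at the preferred heading
r35 = heading_tuning(35.0, 0.0)  # exp(-0.5), one sigma away
```

Over a restricted +/-40deg sample, the flank of such a broad Gaussian is close to monotonic, which is consistent with the abstract's point that limited stimulus ranges can make bell-shaped tuning look sigmoidal.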

  19. GPU computing of compressible flow problems by a meshless method with space-filling curves

    NASA Astrophysics Data System (ADS)

    Ma, Z. H.; Wang, H.; Pu, S. H.

    2014-04-01

    A graphics processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to port the meshless solver from CPU to GPU efficiently and flexibly. Considering the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. The GPU-accelerated flow solver is then used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite-volume, or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
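
    A common way to renumber scattered points along a space-filling curve is a Morton (Z-order) key, formed by interleaving the bits of the grid indices so that spatially close points get numerically close keys. The paper does not specify its curve; the sketch below (grid resolution and points are illustrative) shows the standard 3-D Morton construction:

```python
def part1by2(n):
    """Spread the low 10 bits of n so each bit is followed by two zeros."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8))  & 0x0300F00F
    n = (n | (n << 4))  & 0x030C30C3
    n = (n | (n << 2))  & 0x09249249
    return n

def morton3(ix, iy, iz):
    """Interleave three 10-bit grid indices into one Z-order key."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

# Renumber scattered points in the unit cube so that spatial neighbours
# end up with nearby indices (better cache/memory coalescing).
points = [(0.9, 0.1, 0.5), (0.11, 0.12, 0.13), (0.1, 0.1, 0.1)]

def key(p, cells=1024):
    return morton3(*(min(int(c * cells), cells - 1) for c in p))

order = sorted(range(len(points)), key=lambda i: key(points[i]))
# the two nearby points (original indices 1 and 2) become adjacent
```

Sorting once by `key` and permuting the point arrays is all the renumbering step requires; the solver itself is unchanged.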

  20. Sedimentary architecture of a sub-lacustrine debris fan: Eocene Dongying Depression, Bohai Bay Basin, east China

    NASA Astrophysics Data System (ADS)

    Liu, Jianping; Xian, Benzhong; Wang, Junhui; Ji, Youliang; Lu, Zhiyong; Liu, Saijun

    2017-12-01

    The sedimentary architectures of submarine/sublacustrine fans are controlled by sedimentary processes, geomorphology and sediment composition in sediment gravity flows. To advance understanding of sedimentary architecture of debris fans formed predominantly by debris flows in deep-water environments, a sub-lacustrine fan (Y11 fan) within a lacustrine succession has been identified and studied through the integration of core data, well logging data and 3D seismic data in the Eocene Dongying Depression, Bohai Bay Basin, east China. Six types of resedimented lithofacies can be recognized, which are further grouped into five broad lithofacies associations. Quantification of gravity flow processes on the Y11 fan is suggested by quantitative lithofacies analysis, which demonstrates that the fan is dominated by debris flows, while turbidity currents and sandy slumps are less important. The distribution, geometry and sedimentary architecture are documented using well data and 3D seismic data. A well-developed depositional lobe with a high aspect ratio is identified based on a sandstone isopach map. Canyons and/or channels are absent, which is probably due to the unsteady sediment supply from delta-front collapse. Distributary tongue-shaped debris flow deposits can be observed at different stages of fan growth, suggesting a lobe constructed by debrite tongue complexes. Within each stage of the tongue complexes, architectural elements are interpreted by wireline log motifs showing amalgamated debrite tongues, which constitute the primary fan elements. Based on lateral lithofacies distribution and vertical sequence analysis, it is proposed that lakefloor erosion, entrainment and dilution in the flow direction lead to an organized distribution of sandy debrites, muddy debrites and turbidites on individual debrite tongues. 
The plastic rheology of debris flows, combined with fault-related topography, is considered the major factor controlling sediment distribution and fan architecture. An important implication of this study is the proposed deep-water depositional model for debrite-dominated systems, which may be applicable to other similar deep-water environments.

  1. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  2. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  3. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 26 figs.

  4. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    NASA Astrophysics Data System (ADS)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations with both high performance and low cost. Possibilities for the use of GPUs in the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve the three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
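
    The per-cell update that such solvers parallelize (one GPU thread per cell) can be illustrated with a CPU-side reference far simpler than the paper's Euler/Navier-Stokes schemes: a first-order upwind finite-volume step for linear advection. This is a generic stand-in example, not the paper's method:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One explicit finite-volume step for u_t + a u_x = 0 with a > 0:
    the flux at each cell face is taken from the upwind (left) cell."""
    flux = a * u                              # face flux from upwind cell
    un = u.copy()
    un[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return un                                 # u[0] held fixed (inflow)

x = np.linspace(0.0, 1.0, 101)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse
dx, a = x[1] - x[0], 1.0
dt = 0.5 * dx / a                                # CFL number 0.5
for _ in range(40):
    u = upwind_step(u, a, dx, dt)
# pulse advects right by a*40*dt = 0.2, with some numerical diffusion
```

Each cell's update reads only itself and its left neighbour, which is exactly the kind of local, regular data access that benefits from the memory-layout optimizations the abstract mentions.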

  5. Simulating the heterogeneity in braided channel belt deposits: 1. A geometric-based methodology and code

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.

    2010-04-01

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.

  6. Simulating the Heterogeneity in Braided Channel Belt Deposits: Part 1. A Geometric-Based Methodology and Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.

  7. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-01

    Cloud radio access network (C-RAN) becomes a promising scenario to accommodate high-performance services with ubiquitous user coverage and real-time cloud computing in 5G area. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. Traditional architecture cannot implement the resource optimization and scheduling for the high-level service guarantee due to the communication obstacle among them with the growing number of mobile internet users. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on OpenFlow-based enhanced SDN testbed. The performance of RIP scheme under heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on MDRI architecture in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes.

  10. Experimental investigation of active rib stitch knitted architecture for flow control applications

    NASA Astrophysics Data System (ADS)

    Abel, Julianna M.; Mane, Poorna; Pascoe, Benjamin; Luntz, Jonathan; Brei, Diann

    2010-04-01

    Actively manipulating flow characteristics around the wing can enhance high-lift capability and reduce drag, thereby increasing fuel economy and improving maneuverability and operation over diverse flight conditions, which enables longer, more varied missions. Active knits, a novel class of cellular structural smart material actuator architectures created by continuous, interlocked loops of stranded active material, produce distributed actuation that can actively manipulate the local surface of the aircraft wing to improve flow characteristics. Rib stitch active knits actuate normal to the surface, producing span-wise discrete periodic arrays that can withstand aerodynamic forces while supplying the necessary displacement for flow control. This paper presents a preliminary experimental investigation of the pressure-displacement actuation performance capabilities of a rib stitch active knit based upon shape memory alloy (SMA) wire. SMA rib stitch prototypes, both individually and in stacked and nestled architectures, were experimentally tested for their quasi-static load-displacement characteristics, verifying the parallel and series relationships of the architectural configurations. The various configurations tested demonstrated the potential of active knits to generate the required level of distributed surface displacements under aerodynamic-level loads for various forms of flow control.

  11. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K. [Chappaqua, NY]

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  12. A static data flow simulation study at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Howard, Lauri S.

    1987-01-01

    Demands for computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.
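    The token-matching mechanism the study identifies as a bottleneck can be sketched in a few lines: an instruction fires only once tokens have arrived on all of its input slots, then forwards its result to its consumers. This toy interpreter (all names hypothetical) evaluates (a+b)*(c+d) in data-driven order.

```python
def run_dataflow(instructions, initial_tokens):
    """instructions: {name: (op, [input_slots], [(consumer, slot)])};
    initial_tokens: {(instruction, slot): value}.
    Fires each instruction once all of its input tokens have matched."""
    slots = {name: {} for name in instructions}
    for (name, slot), value in initial_tokens.items():
        slots[name][slot] = value
    ready = [n for n, (_, ins, _) in instructions.items()
             if len(slots[n]) == len(ins)]
    fired, results = [], {}
    while ready:
        name = ready.pop()
        op, inputs, consumers = instructions[name]
        results[name] = op(*[slots[name][s] for s in inputs])
        fired.append(name)
        for consumer, slot in consumers:
            slots[consumer][slot] = results[name]
            # consumer becomes ready once its token set is complete
            if len(slots[consumer]) == len(instructions[consumer][1]):
                ready.append(consumer)
    return fired, results

# (a+b)*(c+d): two adds feed one multiply, firing as operands arrive.
prog = {
    "add1": (lambda x, y: x + y, ["l", "r"], [("mul", "l")]),
    "add2": (lambda x, y: x + y, ["l", "r"], [("mul", "r")]),
    "mul":  (lambda x, y: x * y, ["l", "r"], []),
}
tokens = {("add1", "l"): 1, ("add1", "r"): 2,
          ("add2", "l"): 3, ("add2", "r"): 4}
fired, results = run_dataflow(prog, tokens)
```

    In a real static data-flow machine the slot lookup above is done in hardware by a matching store, which is exactly where the simulator study found its service-time bottleneck.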

  13. Method for siting detectors within a facility

    DOEpatents

    Gleason, Nathaniel Jeremy Meyer

    2007-12-11

    A method, system and article of manufacture of siting one or more detectors in a facility represented with zones are provided. Signals S.sub.i,j representing an effect in zone j in response to a release of contaminant in zone i for one or more flow conditions are provided. A candidate architecture has one or more candidate zones. A limiting case signal is determined for each flow condition for multiple candidate architectures. The limiting case signal is a smallest system signal of multiple system signals associated with a release in a zone. Each system signal is a maximum one of the signals representing the effect in the candidate zones from the release in one zone for the flow condition. For each candidate architecture, a robust limiting case signal is determined based on a minimum of the limiting case signals. One candidate architecture is selected based on the robust limiting case signals.
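    The min-max selection described above can be written down directly. The sketch below assumes a signals table indexed by (flow condition, release zone, sensing zone); the facility layout and numbers are hypothetical, and "select based on the robust limiting-case signals" is read here as picking the candidate with the largest robust signal.

```python
def select_architecture(signals, architectures, flow_conditions):
    """signals[(flow, release_zone, sense_zone)] -> signal strength.
    For each candidate architecture (a set of zones holding detectors),
    compute the robust limiting-case signal and pick the best candidate."""
    best, best_robust = None, float("-inf")
    zones = sorted({r for (_, r, _) in signals})
    for arch in architectures:
        limiting = []
        for flow in flow_conditions:
            # system signal per release: strongest response in any candidate zone
            system = [max(signals[(flow, rel, z)] for z in arch) for rel in zones]
            limiting.append(min(system))      # worst-case release
        robust = min(limiting)                # worst-case flow condition
        if robust > best_robust:
            best, best_robust = arch, robust
    return best, best_robust

# Hypothetical 3-zone facility, one flow condition, two candidate layouts.
S = {("f1", r, s): 1.0 if r == s else 0.2 for r in "ABC" for s in "ABC"}
S[("f1", "C", "B")] = 0.6   # zone B also senses releases in C fairly well
arch, sig = select_architecture(S, [{"A"}, {"A", "B"}], ["f1"])
```

    The two-detector layout wins because its worst-case (limiting) release still yields a usable signal, which is the point of the min-max criterion.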

  14. Data flow language and interpreter for a reconfigurable distributed data processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, A.D.; Heath, J.R.

    1982-01-01

    An analytic language and an interpreter whereby an applications data flow graph may serve as an input to a reconfigurable distributed data processor are proposed. The architecture considered consists of a number of loosely coupled computing elements (CEs) which may be linked to data and file memories through fully nonblocking interconnect networks. The real-time performance of such an architecture depends upon its ability to alter its topology in response to changes in application, asynchronous data rates, and faults. Such a data flow language enhances the versatility of a reconfigurable architecture by allowing the user to specify the machine's topology at a very high level. 11 references.

  15. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. The design was implemented using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices and the design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
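    For reference, the transform the chip computes is the 8-point DCT-II; a direct-form software sketch is below. This is deliberately not Hou's recursive scaled factorization, whose point is precisely to fold constant scale factors out and reduce the multiplication count relative to this direct form. The pixel values are arbitrary.

```python
import math

def dct8(x):
    """Direct-form 8-point DCT-II with orthonormal scaling."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct8(X):
    """Matching inverse (DCT-III), used to sanity-check the round trip."""
    N = len(X)
    return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of 8 pixels
coeffs = dct8(pixels)
restored = idct8(coeffs)   # round trip recovers the input
```

    A hardware pipeline processes one such 8-pixel row per stage per cycle; the recursive scaled form replaces most of the multiplications above with additions plus a final per-coefficient scaling.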

  16. Novel memory architecture for video signal processor

    NASA Astrophysics Data System (ADS)

    Hung, Jen-Sheng; Lin, Chia-Hsing; Jen, Chein-Wei

    1993-11-01

    An on-chip memory architecture for a video signal processor (VSP) is proposed. This memory structure is a two-level design for the different data localities in video applications. The upper level, Memory A, provides enough storage capacity to reduce the impact of the limited chip I/O bandwidth, and the lower level, Memory B, provides enough data parallelism and flexibility to meet the requirements of multiple reconfigurable pipeline function units in a single VSP chip. The needed memory size is determined by memory-usage analysis of video algorithms and the number of function units. Both levels of memory adopt a dual-port memory scheme to sustain simultaneous read and write operations. In particular, Memory B uses multiple one-read-one-write memory banks to emulate a true multiport memory. One can therefore change the configuration of Memory B to several sets of memories with variable read/write ports by adjusting the bus switches, so that the numbers of read and write ports in the proposed memory meet the requirements of the data flow patterns in different video coding algorithms. We have finished a prototype memory design using 1.2-micrometer SPDM SRAM technology and will fabricate it through TSMC in Taiwan.

  17. The Design and Implementation of a Data Flow Multiprocessor.

    DTIC Science & Technology

    1981-12-01

    to thank Captain Charles Papp who taught me how to use the logic analyzer and the storage oscilloscope. Without these tools, I could never have...debugged and repaired the microprocessors. Finally, I wish to thank my thesis readers, Major Charles Lillie and Major Walt Seward, for taking valuable time...Neumann/Babbage architecture with a data flow architecture. The next section describes the benefits of data flow computing. The following section

  18. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  19. Component-cost and performance based comparison of flow and static batteries

    NASA Astrophysics Data System (ADS)

    Hopkins, Brandon J.; Smith, Kyle C.; Slocum, Alexander H.; Chiang, Yet-Ming

    2015-10-01

    Flow batteries are a promising grid-storage technology that is scalable, inherently flexible in power/energy ratio, and potentially low cost in comparison to conventional or 'static' battery architectures. Recent advances in flow chemistries are enabling significantly higher energy density flow electrodes. When the same battery chemistry can arguably be used in either a flow or static electrode design, the relative merits of either design choice become of interest. Here, we analyze the costs of the electrochemically active stack for both architectures under the constraint of constant energy efficiency and charge and discharge rates, using as case studies the aqueous vanadium-redox chemistry, widely used in conventional flow batteries, and aqueous lithium-iron-phosphate (LFP)/lithium-titanium-phosphate (LTP) suspensions, an example of a higher energy density suspension-based electrode. It is found that although flow batteries always have a cost advantage (kWh-1) at the stack level modeled, the advantage is a strong function of flow electrode energy density. For the LFP/LTP case, the cost advantage decreases from ∼50% to ∼10% over experimentally reasonable ranges of suspension loading. Such results are important input for design choices when both battery architectures are viable options.

  20. Electro-Optic Computing Architectures. Volume I

    DTIC Science & Technology

    1998-02-01

    The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical...interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures...Specifically, three multi-function interface modules were targeted for development - an Electro-Optic Interface (EOI), an Optical Interconnection Unit (OW

  1. Motivation and flow: toward an understanding of the dynamics of the relation in architecture students.

    PubMed

    Mills, Maura J; Fullagar, Clive J

    2008-09-01

    The authors investigated the relation between motivation and flow in a sample of 327 architecture students. Specifically, they investigated the relation between flow and several levels of intrinsic and extrinsic motivation, as well as amotivation. They also assessed the need for autonomy in moderating the relation between intrinsic motivation and engagement. Results indicated a significant relation between flow experiences in academic activities and the more self-determined forms of intrinsic motivation, but not for extrinsic motivation. The need for autonomy moderated the relation between flow and intrinsic motivation. These results are discussed in the context of understanding flow as an intrinsically motivating state and a viable construct for understanding engagement.

  2. Content analysis in information flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grusho, Alexander A.; Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow; Grusho, Nick A.

    The paper deals with the architecture of a content recognition system. To analyze the problem, a stochastic model of content recognition in information flows was built. We proved that under certain conditions it is possible to solve a part of the problem correctly with probability 1 by viewing a finite section of the information flow. This means that a good architecture consists of two steps: the first step correctly determines certain subsets of contents, while the second step may demand much more time for a true decision.

  3. Thermal Control System Automation Project (TCSAP)

    NASA Technical Reports Server (NTRS)

    Boyer, Roger L.

    1991-01-01

    Information is given in viewgraph form on the Space Station Freedom (SSF) Thermal Control System Automation Project (TCSAP). Topics covered include the assembly of the External Thermal Control System (ETCS); the ETCS functional schematic; the baseline Fault Detection, Isolation, and Recovery (FDIR), including the development of a knowledge based system (KBS) for application of rule based reasoning to the SSF ETCS; TCSAP software architecture; the High Fidelity Simulator architecture; the TCSAP Runtime Object Database (RODB) data flow; KBS functional architecture and logic flow; TCSAP growth and evolution; and TCSAP relationships.

  4. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.

  5. Scale-dependent genetic structure of the Idaho giant salamander (Dicamptodon aterrimus) in stream networks

    Treesearch

    Lindy B. Mullen; H. Arthur Woods; Michael K. Schwartz; Adam J. Sepulveda; Winsor H. Lowe

    2010-01-01

    The network architecture of streams and rivers constrains evolutionary, demographic and ecological processes of freshwater organisms. This consistent architecture also makes stream networks useful for testing general models of population genetic structure and the scaling of gene flow. We examined genetic structure and gene flow in the facultatively paedomorphic Idaho...

  6. A mixed acid based vanadium-cerium redox flow battery with a zero-gap serpentine architecture

    NASA Astrophysics Data System (ADS)

    Leung, P. K.; Mohamed, M. R.; Shah, A. A.; Xu, Q.; Conde-Duran, M. B.

    2015-01-01

    This paper presents the performance of a vanadium-cerium redox flow battery using conventional and zero-gap serpentine architectures. Mixed-acid solutions based on methanesulfonate-sulfate anions (molar ratio 3:1) are used to enhance the solubilities of the vanadium (>2.0 mol dm-3) and cerium species (>0.8 mol dm-3), thus achieving an energy density (c.a. 28 Wh dm-3) comparable to that of conventional all-vanadium redox flow batteries (20-30 Wh dm-3). Electrochemical studies, including cyclic voltammetry and galvanostatic cycling, show that both vanadium and cerium active species are suitable for energy storage applications in these electrolytes. To take advantage of the high open-circuit voltage (1.78 V), improved mass transport and reduced internal resistance are facilitated by the use of zero-gap flow field architecture, which yields a power density output of the battery of up to 370 mW cm-2 at a state-of-charge of 50%. In a charge-discharge cycle at 200 mA cm-2, the vanadium-cerium redox flow battery with the zero-gap architecture is observed to discharge at a cell voltage of c.a. 1.35 V with a coulombic efficiency of up to 78%.
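    As a rough sanity check on the quoted energy density, the theoretical volumetric energy of a two-tank flow battery can be estimated from the limiting species concentration and the open-circuit voltage. The sketch below assumes one electron per redox couple and equal electrolyte volumes, and ignores overpotentials and state-of-charge windows, so it gives a conservative back-of-envelope figure rather than reproducing the reported c.a. 28 Wh dm-3, which depends on the actual cell stoichiometry and electrolyte volumes.

```python
F = 96485.0    # Faraday constant, C/mol
c_limit = 0.8  # mol/dm^3; cerium is the less soluble (limiting) species here
n = 1          # assumed electrons transferred per ion
ocv = 1.78     # open-circuit voltage, V

# Energy per litre of one electrolyte, then per litre of total
# electrolyte (two equal tanks); 3600 converts J to Wh.
wh_per_litre_side = n * F * c_limit * ocv / 3600.0
wh_per_litre_total = wh_per_litre_side / 2.0   # about 19 Wh per litre
```

    Even this conservative estimate is of the same order as conventional all-vanadium systems (20-30 Wh dm-3), consistent with the abstract's claim that the mixed-acid electrolytes close the energy-density gap.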

  7. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  8. Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs

    NASA Astrophysics Data System (ADS)

    Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.

    2006-12-01

    The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperate and share information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. Therefore, the architecture of the sensor web is of fundamental import, and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows (sequences of messages over a domain of interest, constrained in both time and space) to a position of primacy as a dynamic, real-time medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web, including dynamic adaptation to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels, in many cases at different times. Our conception of sensor webs, dynamic amalgamations of sensor webs each constructed within a flow web infrastructure, holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design, and implementation, a dynamic infrastructure that permits massive adaptation in real-time. Flows may be attached to and detached from services at will, even while information is in transit through the flow.
This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real-time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.
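    The attach/detach behaviour described above can be caricatured in a few lines. This toy (the class and component names are hypothetical, not from the paper) shows a flow being rewired while the web keeps processing messages.

```python
class FlowWeb:
    """Toy flow web: components are nodes, flows are named directed edges
    that can be attached and detached while the web is running."""
    def __init__(self):
        self.components = {}   # name -> callable(message) -> message
        self.flows = {}        # flow name -> (src, dst)

    def add_component(self, name, fn):
        self.components[name] = fn

    def attach(self, flow, src, dst):
        self.flows[flow] = (src, dst)

    def detach(self, flow):
        self.flows.pop(flow, None)

    def push(self, component, message):
        """Run a message through a component, then forward the result
        along every flow currently attached to that component."""
        out = self.components[component](message)
        results = {component: out}
        for flow, (src, dst) in list(self.flows.items()):
            if src == component:
                results.update(self.push(dst, out))
        return results

web = FlowWeb()
web.add_component("sensor", lambda m: {**m, "calibrated": True})
web.add_component("model", lambda m: {**m, "forecast": "fire-risk-high"})
web.attach("f1", "sensor", "model")
before = web.push("sensor", {"temp": 41})   # sensor output reaches the model
web.detach("f1")                            # rewire while "running"
after = web.push("sensor", {"temp": 41})    # model is no longer downstream
```

    A production flow web would add the paper's other operations (pause, halt, reparameterize, inspect, encapsulate a subgraph as a component), but the key property, that topology edits take effect between messages, is already visible here.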

  9. Cross layer optimization for cloud-based radio over optical fiber networks

    NASA Astrophysics Data System (ADS)

    Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong; Yang, Hui; Meng, Luoming

    2016-07-01

    To support 5G communication, the cloud radio access network is a paradigm introduced by operators that aggregates all base stations' computational resources into a cloud BBU pool. The interactions between RRHs and BBUs, and resource scheduling among BBUs in the cloud, have become more frequent and complex as system scale and user requirements grow. This increases the networking demand among RRHs and BBUs and calls for elastic optical fiber switching and networking. In such a network, multiple strata of resources (radio, optical, and BBU processing units) are interwoven with each other. In this paper, we propose a novel multiple stratum optimization (MSO) architecture for cloud-based radio over optical fiber networks (C-RoFN) with software-defined networking. Additionally, a global evaluation strategy (GES) is introduced in the proposed architecture. MSO can enhance the responsiveness to end-to-end user demands and globally optimize radio frequency, optical spectrum, and BBU processing resources effectively to maximize radio coverage. The feasibility and efficiency of the proposed architecture with the GES strategy are experimentally verified on an OpenFlow-enabled testbed in terms of resource occupation and path provisioning latency.

  10. Vacuum-bag-only processing of composites

    NASA Astrophysics Data System (ADS)

    Thomas, Shad

    Ultrasonic imaging in the C-scan mode in conjunction with the amplitude of the reflected signal was used to measure flow rates of an epoxy resin film penetrating through the thickness of single layers of woven carbon fabric. Assemblies, comprised of a single layer of fabric and film, were vacuum-bagged and ultrasonically scanned in a water tank during impregnation at 50°C, 60°C, 70°C, and 80°C. Measured flow rates were plotted versus inverse viscosity to determine the permeability in the thin film, non-saturated system. The results demonstrated that ultrasonic imaging in the C-scan mode is an effective method of measuring z-direction resin flow through a single layer of fabric. The permeability values determined in this work were consistent with permeability values reported in the literature. Capillary flow was not observed at the temperatures and times required for pressurized flow to occur. The flow rate at 65°C was predicted from the linear plot of flow rate versus inverse viscosity. The effects of fabric architecture on through-thickness flow rates during impregnation of an epoxy resin film were measured by ultrasonic imaging. Multilayered laminates comprised of woven carbon fabrics and epoxy films (prepregs) were fabricated by vacuum-bagging. Ultrasonic imaging was performed in a heated water tank (65°C) during impregnation. Impregnation rates showed a strong dependence on fabric architecture, despite similar areal densities. Impregnation rates are directly affected by inter-tow spacing and tow nesting, which depend on fabric architecture, and are indirectly affected by areal densities. A new method of predicting resin infusion rates in prepreg and resin film infusion processes was proposed. The Stokes equation was used to derive an equation to predict the impregnation rate of laminates as a function of fabric architecture. Flow rate data previously measured by ultrasound was analyzed with the new equation and the Kozeny-Carman equation. 
A fiber interaction parameter was determined as a function of fabric architecture. The derived equation is straightforward to use, unlike the Kozeny-Carman equation. The results demonstrated that the newly derived equation can be used to predict the resin infusion rate of multilayer laminates.

  11. Electro-Optic Computing Architectures: Volume II. Components and System Design and Analysis

    DTIC Science & Technology

    1998-02-01

    The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical...interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures...Specifically, three multi-function interface modules were targeted for development - an Electro-Optic Interface (EOI), an Optical Interconnection Unit

  12. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  13. Network analysis of patient flow in two UK acute care hospitals identifies key sub-networks for A&E performance

    PubMed Central

    Stringer, Clive; Beeknoo, Neeraj

    2017-01-01

    The topology of the patient flow network in a hospital is complex, comprising hundreds of overlapping patient journeys, and is a determinant of operational efficiency. To understand the network architecture of patient flow, we performed a data-driven network analysis of patient flow through two acute hospital sites of King’s College Hospital NHS Foundation Trust. Administration databases were queried for all intra-hospital patient transfers in an 18-month period and modelled as a dynamic weighted directed graph. A ‘core’ subnetwork containing only 13–17% of all edges channelled 83–90% of the patient flow, while an ‘ephemeral’ network constituted the remainder. Unsupervised cluster analysis and differential network analysis identified sub-networks where traffic is most associated with A&E performance. Increased flow to clinical decision units was associated with the best A&E performance in both sites. The component analysis also detected a weekend effect on patient transfers which was not associated with performance. We have performed the first data-driven hypothesis-free analysis of patient flow which can enhance understanding of whole healthcare systems. Such analysis can drive transformation in healthcare as it has in industries such as manufacturing. PMID:28968472
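The core/ephemeral split described above can be illustrated with a greedy sketch that keeps the heaviest transfer edges until a target fraction of total flow is carried. The edge names, weights, and 85% threshold below are illustrative assumptions, not values or methods from the paper:

```python
def core_subnetwork(edge_weights, flow_fraction=0.85):
    """Return the smallest greedy set of edges carrying at least
    `flow_fraction` of total transfer volume.

    edge_weights: dict mapping (source_unit, dest_unit) -> transfer count.
    Edges are taken in descending weight order until the cumulative flow
    reaches the threshold; the remainder is the 'ephemeral' network.
    """
    total = sum(edge_weights.values())
    core, carried = [], 0.0
    for edge, w in sorted(edge_weights.items(), key=lambda kv: -kv[1]):
        if carried >= flow_fraction * total:
            break
        core.append(edge)
        carried += w
    return core
```

With a skewed weight distribution like the one reported (13-17% of edges carrying 83-90% of flow), the returned core is a small fraction of the edge set.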

  14. Novel elastic protection against DDF failures in an enhanced software-defined SIEPON

    NASA Astrophysics Data System (ADS)

    Pakpahan, Andrew Fernando; Hwang, I.-Shyan; Yu, Yu-Ming; Hsu, Wu-Hsiao; Liem, Andrew Tanny; Nikoukar, AliAkbar

    2017-07-01

    Ever-increasing bandwidth demands on passive optical networks (PONs) are pushing the utilization of every fiber strand to its limit. This is mandating comprehensive protection until the end of the distribution drop fiber (DDF). Hence, it is important to provide refined protection with an advanced fault-protection architecture and recovery mechanism that is able to cope with various DDF failures. We propose a novel elastic protection against DDF failures that incorporates a software-defined networking (SDN) capability and a bus protection line to enhance the resiliency of the existing Service Interoperability in Ethernet Passive Optical Networks (SIEPON) system. We propose the addition of an integrated SDN controller and flow tables to the optical line terminal and optical network units (ONUs) in order to deliver various DDF protection scenarios. The proposed architecture enables flexible assignment of backup ONU(s) in pre/post-fault conditions depending on the PON traffic load. A transient backup ONU and multiple backup ONUs can be deployed in the pre-fault and post-fault scenarios, respectively. Our extensively discussed simulation results show that our proposed architecture provides better overall throughput and drop probability compared to the architecture with a fixed DDF protection mechanism. It does so while still maintaining overall QoS performance in terms of packet delay, mean jitter, packet loss, and throughput under various fault conditions.

  15. 75 FR 5637 - Culturally Significant Objects Imported for Exhibition Determinations: “Architecture as Icon...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... Determinations: ``Architecture as Icon: Perception and Representation of Architecture in Byzantine Art'' SUMMARY... objects to be included in the exhibition ``Architecture as Icon: Perception and Representation of Architecture in Byzantine Art,'' imported from abroad for temporary exhibition within the United States, are of...

  16. Pangea break-up: from passive to active margin in the Colombian Caribbean Realm

    NASA Astrophysics Data System (ADS)

    Gómez, Cristhian; Kammer, Andreas

    2017-04-01

    The break-up of Western Pangea led to a back-arc type tectonic setting along the periphery of Gondwana, with the generation of syn-rift basins filled with sedimentary and volcanic sequences during the Middle to Late Triassic. The Indios and Corual formations in the Santa Marta massif of the Northern Andes were deposited in this setting. In this contribution we elaborate a stratigraphic model for both the Indios and Corual formations, based on the description and classification of sedimentary facies and their architecture and on a provenance analysis. Furthermore, geotectonic environments are postulated for the volcanic and volcanoclastic rocks of both units. The Indios Formation is a shallow-marine syn-rift basin fill and contains gravity-flow deposits. This unit is divided into three segments; the lower and upper segments are related to fan-deltas, while the middle segment is associated with offshore deposits with lobe incursions of submarine fans. Volcanoclastic and volcanic rocks of the Indios and Corual formations are bimodal in composition and are associated with alkaline basalts. Volcanogenic deposits comprise debris, pyroclastic and lava flows of both effusive and explosive eruptions. Together, these units record multiple phases of rifting and reveal a first stage in the break-up of Pangea during the Middle and Late Triassic in northern Colombia.

  17. Architecture-Based Unit Testing of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; McComas, David; Bartholomew, Maureen; Slegel, Steve; Medina, Barbara

    2010-01-01

    This paper presents an analysis of the unit testing approach developed and used by the Core Flight Software (CFS) product line team at NASA GSFC. The goal of the analysis is to understand, review, and recommend strategies for improving the existing unit testing infrastructure, as well as to capture lessons learned and best practices that other product line teams can use for their unit testing. The CFS unit testing framework is designed and implemented as a set of variation points, and thus testing support is built into the product line architecture. The analysis found that the CFS unit testing approach has many practical and good solutions that are worth considering when deciding how to design the testing architecture for a product line; these are documented in this paper along with some suggested improvements.

  18. A simple method for the evaluation of microfluidic architecture using flow quantitation via a multiplexed fluidic resistance measurement.

    PubMed

    Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P

    2010-08-07

    Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid quantitative method for the post-fabrication characterization of microchannel architecture, using the measurement of flow at volumes relevant to microfluidics, is presented. By measuring the mass of a dye solution passed through the device, the method circumvents traditional gravimetric and interface-tracking methods that suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer, and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which depends on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
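The spectral deconvolution step can be sketched for the simplest case of two dyes measured at two wavelengths, where Beer-Lambert absorbances form a 2x2 linear system. The coefficient names and values below are hypothetical, not the paper's calibration data:

```python
def unmix_two_dyes(A1, A2, e11, e12, e21, e22):
    """Recover concentrations (c1, c2) of two dyes from absorbances at two
    wavelengths, assuming additive Beer-Lambert behavior:

        A1 = e11*c1 + e12*c2   (wavelength 1)
        A2 = e21*c1 + e22*c2   (wavelength 2)

    Solved by Cramer's rule; e_ij are the (hypothetical) molar absorptivity
    coefficients of dye j at wavelength i. Raises ZeroDivisionError if the
    dye spectra are not linearly independent (det == 0).
    """
    det = e11 * e22 - e12 * e21
    c1 = (A1 * e22 - e12 * A2) / det
    c2 = (e11 * A2 - A1 * e21) / det
    return c1, c2
```

With more dyes or wavelengths the same idea becomes an overdetermined least-squares unmixing problem rather than an exact solve.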

  19. Sedimentology and sequence stratigraphy of the Lower Jurassic Kayenta Formation, Colorado Plateau, United States

    NASA Astrophysics Data System (ADS)

    Sanabria, Diego Ignacio

    2001-07-01

    Detailed outcrop analysis of the Lower Jurassic Kayenta Formation provides the basis for the formulation of a new sequence stratigraphic model for arid to semi-arid continental deposits and the generation of a comprehensive set of sedimentologic criteria for the recognition of ephemeral stream deposits. Criteria for the recognition of ephemeral deposits in the ancient record were divided into three categories according to the scale of the feature being considered. The first category takes into account sedimentary structures commonly found in the record of ephemeral stream deposits including hyperconcentrated and debris flow deposits, planar parallel bedding, sigmoidal cross-bedding, hummocky cross-bedding, climbing ripple lamination, scour-and-fill structures, convolute bedding, overturned cross-bedding, ball-and-pillow structures, pocket structures, pillars, mud curls, flaser lamination, algal lamination, termite nests, and vertebrate tracks. The second category is concerned with the mesoscale facies architecture of ephemeral stream deposits and includes waning flow successions, bedform climb, downstream accretion, terminal wadi splays, and channel-fill successions indicating catastrophic flooding. At the large-scale facies architecture level, the third category, ephemeral stream deposits are commonly arranged in depositional units characterized by a downstream decrease in grain size and scale of sedimentary structures resulting from deposition in terminal fan systems. Outcrops of the Kayenta Formation and its transition to the Navajo Sandstone along the Vermilion and Echo Cliffs of Northern Arizona indicate that wet/dry climatic cyclicity exerted a major control on regional facies architecture. Two scales of wet/dry climatic cyclicity can be recognized in northern Arizona. Three sequence sets composed of rocks accumulated under predominantly dry or wet conditions are the expression of long-term climatic cyclicity. 
Short-term climatic cyclicity, on the other hand, is represented by high-frequency sequences composed of eolian or ephemeral fluvial deposits overlain by perennial fluvial sediments. Increased evapotranspiration rates, depressed water tables, and accumulation of eolian or ephemeral fluvial deposits characterize the dry portion of these cycles. The wet part of the cycles is marked by an increase in precipitation and the establishment of perennial fluvial systems and lacustrine basins. This depositional model constitutes a valuable tool for correlation of similar deposits in the subsurface.

  20. The Elimination of Transfer Distances Is an Important Part of Hospital Design.

    PubMed

    Karvonen, Sauli; Nordback, Isto; Elo, Jussi; Havulinna, Jouni; Laine, Heikki-Jussi

    2017-04-01

    The objective of the present study was to describe how a specific patient flow analysis with from-to charts can be used in hospital design and layout planning. As part of a large renewal project at a university hospital, a detailed patient flow analysis was applied to planning the musculoskeletal surgery unit (orthopedics and traumatology, hand surgery, and plastic surgery). First, the main activities of the unit were determined. Next, the routes of all patients treated over the course of 1 year were studied, and their physical movements in the current hospital were calculated. An ideal layout of the new hospital was then generated to minimize transfer distances by placing the main activities close to each other, according to the patient flow analysis. The actual architectural design was based on the ideal layout plan. Finally, we compared the current transfer distances to the distances patients will move in the new hospital. The methods enabled us to estimate an approximate 50% reduction in transfer distances for inpatients (from 3,100 km/year to 1,600 km/year) and a 30% reduction for outpatients (from 2,100 km/year to 1,400 km/year). Patient transfers are non-value-added activities. This study demonstrates that a detailed patient flow analysis with from-to charts can substantially shorten transfer distances, thereby minimizing extraneous patient and personnel movements. This reduction supports productivity improvement, cross-professional teamwork, and patient safety by placing all patient flow activities close to each other. Thus, this method is a valuable additional tool in hospital design.
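At its core, a from-to chart analysis of this kind weights each unit-to-unit distance by its annual transfer count, and candidate layouts are compared by recomputing the sum. A minimal sketch with illustrative data (not the study's figures):

```python
def total_transfer_distance(from_to_counts, distances):
    """Total annual patient transfer distance for one layout.

    from_to_counts[(a, b)]: transfers per year between units a and b;
    distances[(a, b)]: one-way distance in meters for this layout.
    Comparing layouts means re-evaluating this sum with different
    distance tables while the transfer counts stay fixed.
    """
    return sum(n * distances[pair] for pair, n in from_to_counts.items())
```

A layout optimizer would then search unit placements to minimize this objective, which is why high-traffic pairs end up adjacent.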

  1. An Autonomous Sensor System Architecture for Active Flow and Noise Control Feedback

    NASA Technical Reports Server (NTRS)

    Humphreys, William M, Jr.; Culliton, William G.

    2008-01-01

    Multi-channel sensor fusion represents a powerful technique to simply and efficiently extract information from complex phenomena. While the technique has traditionally been used for military target tracking and situational awareness, a study has been successfully completed that demonstrates that sensor fusion can be applied equally well to aerodynamic applications. A prototype autonomous hardware processor was successfully designed and used to detect in real-time the two-dimensional flow reattachment location generated by a simple separated-flow wind tunnel model. The success of this demonstration illustrates the feasibility of using autonomous sensor processing architectures to enhance flow control feedback signal generation.

  2. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  3. The TurboLAN project. Phase 1: Protocol choices for high speed local area networks. Phase 2: TurboLAN Intelligent Network Adapter Card, (TINAC) architecture

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The hardware and the software architecture of the TurboLAN Intelligent Network Adapter Card (TINAC) are described. A high level as well as detailed treatment of the workings of various components of the TINAC are presented. The TINAC is divided into the following four major functional units: (1) the network access unit (NAU); (2) the buffer management unit; (3) the host interface unit; and (4) the node processor unit.

  4. ASGPR-Mediated Uptake of Multivalent Glycoconjugates for Drug Delivery in Hepatocytes.

    PubMed

    Monestier, Marie; Charbonnier, Peggy; Gateau, Christelle; Cuillel, Martine; Robert, Faustine; Lebrun, Colette; Mintz, Elisabeth; Renaudet, Olivier; Delangle, Pascale

    2016-04-01

    Liver cells are an essential target for drug delivery in many diseases. The hepatocytes express the asialoglycoprotein receptor (ASGPR), which promotes specific uptake by means of N-acetylgalactosamine (GalNAc) recognition. In this work, we designed two different chemical architectures to treat Wilson's disease by intracellular copper chelation. Two glycoconjugates functionalized with three or four GalNAc units each were shown to enter hepatic cells and chelate copper. Here, we studied two series of compounds derived from these glycoconjugates to find key parameters for the targeting of human hepatocytes. Efficient cellular uptake was demonstrated by flow cytometry using HepG2 human hepatic cells that express the human oligomeric ASGPR. Dissociation constants in the nanomolar range showed efficient multivalent interactions with the receptor. Both architectures were therefore concluded to be able to compete with endogenous asialoglycoproteins and serve as good vehicles for drug delivery in hepatocytes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  6. Synthesis of branched polymers under continuous-flow microprocess: an improvement of the control of macromolecular architectures.

    PubMed

    Bally, Florence; Serra, Christophe A; Brochon, Cyril; Hadziioannou, Georges

    2011-11-15

    Polymerization reactions can benefit from continuous-flow microprocesses in terms of kinetics control, reactant mixing, or simply efficiency when high-throughput screening experiments are carried out. In this work, we perform for the first time the synthesis of a branched macromolecular architecture through a controlled/'living' polymerization technique in a tubular microreactor. Just by tuning process parameters, such as the flow rates of the reactants, we manage to generate a library of polymers with various macromolecular characteristics. Compared to a conventional batch process, the polymerization kinetics show a faster initiation step and, more interestingly, an improved branching efficiency. Due to the reduced diffusion pathway characteristic of microsystems, it is thus possible to reach branched polymers exhibiting a denser architecture, and potentially a higher functionality for later applications. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capability of simulating execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM based system requires the aid of software tools. The ATAMM Simulator presented is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.

  8. Supercritical flows and their control on the architecture and facies of small-radius sand-rich fan lobes

    NASA Astrophysics Data System (ADS)

    Postma, George; Kleverlaan, Kick

    2018-02-01

    New insights into the flow characteristics of supercritical, high-density turbidity currents initiated renewed interest in a sand-rich lobe complex near the hamlet of Mizala in the Sorbas Basin (Tortonian, SE Spain). The field study was done using drone-made images taken along bed strike in combination with physical tracing of bounding surfaces and section logging. The studied lobe systems show a consistent build-up of lobe elements 1.5-2.0 m thick, which form the building 'blocks' of the lobe system. The stacking of lobe elements shows lateral shift and compensational relief infill. The new model outlined in this paper highlights three stages of fan lobe development: I. an early aggradational stage, with lobe elements characterized by antidune and traction-carpet bedforms and burrowed mud intervals (here called 'distal fan' deposits); II. a progradational stage, where the distal fan deposits are truncated by lobe elements of amalgamated sandy to gravelly units characterized by cyclic step bedform facies (designated as 'supra fan' deposits). The supra fan is much more channelized and scoured, and of higher flow energy, than the distal fan. Aggradation of the supra fan is terminated by a 'pappy' pebbly sandstone and by substrate liquefaction, 'pappy' referring to a typical, porridge-like texture indicating rapid deposition under conditions of little-to-no shear. The facies-bounded termination of the supra fan is here related to its maximum elevation, causing the lobe-feeding supercritical flow to choke and to expand upwards through a strong hydraulic jump at the channel outlet; III. a backfilling stage, characterized by infilling of the remaining relief with progressively thinning and fining turbidite beds and eventually with mud. The three-stage development of fan-lobe building is deduced from recurring architectural and facies characteristics in three successive fan lobes.
The validity of using experimental, supercritical-flow fan studies for understanding the intrinsic mechanisms in sand-rich-fan lobe development is discussed.

  9. Implications of Multi-Core Architectures on the Development of Multiple Independent Levels of Security (MILS) Compliant Systems

    DTIC Science & Technology

    2012-10-01

    [Report front matter; dates covered: March 2010 - April 2012. The contents include a framework for multicore information flow analysis and a hypothetical reference architecture; Figure 2 presents a Pentium II block diagram.]

  10. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  11. Educational Environments.

    ERIC Educational Resources Information Center

    Yee, Roger, Ed.

    This book presents examples of the United States' most innovative new educational facilities for decision makers developing educational facilities of the future. It showcases some of the most recent and significant institutional projects from a number of the United States' top architecture and design firms. The architecture and interior design…

  12. The Investigation and Development of Low Cost Hardware Components for Proton-Exchange Membrane Fuel Cells - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George A. Marchetti

    1999-12-15

    Proton exchange membrane (PEM) fuel cell components, which would have a low-cost structure in mass production, were fabricated and tested. A fuel cell electrode structure, comprising a thin layer of graphite (50 microns) and a front-loaded platinum catalyst layer (600 angstroms), was shown to produce significant power densities. In addition, a PEM bipolar plate, comprising flexible graphite, carbon cloth flow-fields and an integrated polymer gasket, was fabricated. Power densities of a two-cell unit using this inexpensive bipolar plate architecture were shown to be comparable to state-of-the-art bipolar plates.

  13. Study of limitations and attributes of microprocessor testing techniques

    NASA Technical Reports Server (NTRS)

    Mccaskill, R.; Sohl, W. E.

    1977-01-01

    All microprocessor units have a similar architecture, from which a basic test philosophy can be adopted. This philosophy leads to an approach that tests each module separately in order to verify the functionality of each module within the device, using the device's input/output pins and its instruction set; tests for destructive interaction between functional modules; and verifies all timing, status information, and interrupt operations of the device. Block and test flow diagrams are given for the 8080, 8008, 2901, 6800, and 1802 microprocessors. Manufacturers are listed, and problems encountered in testing the modules are discussed. Test equipment and methods are described.

  14. Architecture and reservoir quality of low-permeable Eocene lacustrine turbidite sandstone from the Dongying Depression, East China

    NASA Astrophysics Data System (ADS)

    Munawar, Muhammad Jawad; Lin, Chengyan; Chunmei, Dong; Zhang, Xianguo; Zhao, Haiyan; Xiao, Shuming; Azeem, Tahir; Zahid, Muhammad Aleem; Ma, Cunfei

    2018-05-01

    The architecture and quality of lacustrine turbidites that act as petroleum reservoirs are less well documented. Reservoir architecture and multiscale heterogeneity in turbidites represent serious challenges to production performance. Additionally, establishing a hierarchy profile to delineate heterogeneity is a challenging task in lacustrine turbidite deposits. Here, we report on the turbidites in the middle third member of the Eocene Shahejie Formation (Es3), which was deposited during extensive Middle to Late Eocene rifting in the Dongying Depression. Seismic records, wireline log responses, and core observations were integrated to describe the reservoir heterogeneity by delineating the architectural elements, sequence stratigraphic framework and lithofacies assemblage. A petrographic approach was adopted to constrain microscopic heterogeneity using an optical microscope, routine core analyses and X-ray diffraction (XRD) analyses. The Es3m member is interpreted as a sequence set composed of four composite sequences: CS1, CS2, CS3 and CS4. A total of forty-five sequences were identified within these four composite sequences. Sand bodies were mainly deposited as channels, levees, overbank splays, lobes and lobe fringes. The combination of fining-upward and coarsening-upward lithofacies patterns in the architectural elements produces highly complex composite flow units. Microscopic heterogeneity is produced by diagenetic alteration processes (i.e., feldspar dissolution, authigenic clay formation and quartz cementation). The widespread kaolinization of feldspar and mobilization of materials enhanced the quality of the reservoir by producing secondary enlarged pores. In contrast, the formation of pore-filling authigenic illite and illite/smectite clays reduced its permeability. Recovery rates are higher in the axial areas and smaller in the marginal areas of architectural elements. 
This study represents a significant insight into the reservoir architecture and heterogeneity of lacustrine turbidites, and the understanding of compartmentalization and distribution of high-quality sand reservoirs can be applied to improve primary and secondary production in these fields.

  15. Transmission control unit drive based on the AUTOSAR standard

    NASA Astrophysics Data System (ADS)

    Guo, Xiucai; Qin, Zhen

    2018-03-01

    Automotive embedded system development based on the AUTOSAR standard is a future trend in the automotive electronics industry. The AUTOSAR automotive architecture standard proposes a development architecture for the transmission control unit (TCU) and designs its interfaces and configurations in detail. This paper discusses how to drive the TCU based on the AUTOSAR standard architecture. The results show that driving the TCU with the AUTOSAR system improves reliability and shortens development cycles.

  16. Thermal Hotspots in CPU Die and It's Future Architecture

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Hu, Fu-Yuan

    Owing to increasing core frequency and chip integration within a limited die dimension, power densities in CPU chips have been increasing rapidly. The high on-chip temperatures produced by these power densities threaten processor performance and chip reliability. This paper analyzes the thermal hotspots in the die and their properties. A new architecture for the function units in the die, a distributed hot-unit architecture, is suggested to cope with the problem of high power densities in future processor chips.

  17. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
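The marked-graph model underlying ATAMM can be sketched with the basic token-firing rule: a transition (computing step) is enabled only when every one of its input places holds a token, and firing moves one token from each input to each output. This is a generic Petri-net sketch under that assumption, not NASA's ATAMM implementation:

```python
def fire(marking, transition):
    """Fire one marked-graph transition if it is enabled.

    marking: dict mapping place name -> token count.
    transition: (input_places, output_places) lists of place names.
    Returns the new marking, or None if any input place is empty
    (transition not enabled). Place names here are illustrative.
    """
    inputs, outputs = transition
    if any(marking[p] == 0 for p in inputs):
        return None  # not enabled: some input data not yet available
    new = dict(marking)
    for p in inputs:
        new[p] -= 1  # consume input tokens (data/resources)
    for p in outputs:
        new[p] += 1  # produce output tokens
    return new
```

Repeatedly applying this rule to an algorithm graph is, in essence, what an ATAMM-style simulator does to predict periodic execution behavior.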

  18. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  19. Expanding the Use of Time-Based Metering: Multi-Center Traffic Management Advisor

    NASA Technical Reports Server (NTRS)

    Landry, Steven J.; Farley, Todd; Hoang, Ty

    2005-01-01

    Time-based metering is an efficient air traffic management alternative to the more common practice of distance-based metering (or "miles-in-trail spacing"). Despite having demonstrated significant operational benefit to airspace users and service providers, time-based metering is used in the United States for arrivals to just nine airports and is not used at all for non-arrival traffic flows. The Multi-Center Traffic Management Advisor promises to bring time-based metering into the mainstream of air traffic management techniques. Not constrained to operate solely on arrival traffic, Multi-Center Traffic Management Advisor is flexible enough to work in highly congested or heavily partitioned airspace for any and all traffic flows in a region. This broader and more general application of time-based metering is expected to bring the operational benefits of time-based metering to a much wider pool of beneficiaries than is possible with existing technology. It also promises to facilitate more collaborative traffic management on a regional basis. This paper focuses on the operational concept of the Multi-Center Traffic Management Advisor, touching also on its system architecture, field test results, and prospects for near-term deployment to the United States National Airspace System.
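The basic arithmetic behind replacing miles-in-trail with time-based metering is a distance-to-time conversion at the stream's groundspeed; a hedged sketch with illustrative numbers, not Multi-Center Traffic Management Advisor logic:

```python
def miles_to_time_spacing(miles_in_trail, groundspeed_kts):
    """Equivalent time interval (seconds) for a distance-based spacing
    requirement, at a given groundspeed in knots.

    Illustrates why time-based metering adapts automatically to speed
    changes while a fixed miles-in-trail restriction does not: the same
    distance represents different time gaps at different groundspeeds.
    """
    return miles_in_trail / groundspeed_kts * 3600.0
```

For example, 10 nmi in trail at 450 kt is an 80-second gap, but the same 10 nmi at 300 kt is a 120-second gap; metering to a fixed time keeps throughput consistent.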

  20. Evaluation of a Candidate Trace Contaminant Control Subsystem Architecture: The High Velocity, Low Aspect Ratio (HVLA) Adsorption Process

    NASA Technical Reports Server (NTRS)

    Kayatin, Matthew J.; Perry, Jay L.

    2017-01-01

    The process flow of a traditional gas-phase trace contaminant control adsorption bed is constrained to maintain high single-pass contaminant adsorption efficiency. Specifically, the bed superficial velocity is controlled to limit the adsorption mass-transfer zone length relative to the physical adsorption bed; this is aided by the traditional high-aspect-ratio bed design. Operated in this manner, most contaminants, including those with relatively high potential energy, are readily adsorbed. A consequence of this operational approach, however, is limited available operational flow margin. By considering a paradigm shift in adsorption architecture design and operations, in which flows of high superficial velocity are treated by low-aspect-ratio sorbent beds, the range of well-adsorbed contaminants becomes limited, but the process flow is increased such that contaminant leaks or emerging contaminants of interest may be effectively controlled. To this end, the high velocity, low aspect ratio (HVLA) adsorption process architecture was demonstrated against a trace contaminant load representative of the International Space Station atmosphere. Two HVLA concept packaging designs (linear flow and radial flow) were tested. The performance of each design was evaluated and compared against computer simulation. The HVLA process demonstrated long-duration, sustained control of heavy organic contaminants.

  1. The Influence of Multi-Scale Stratal Architecture on Multi-Phase Flow

    NASA Astrophysics Data System (ADS)

    Soltanian, M.; Gershenzon, N. I.; Ritzi, R. W.; Dominic, D.; Ramanathan, R.

    2012-12-01

    Geological heterogeneity affects flow and transport in porous media, including the migration and entrapment patterns of oil and efforts for enhanced oil recovery. Such effects are only understood through their relation to a hierarchy of reservoir heterogeneities over a range of scales. Recent work on modern rivers and ancient sediments has led to a conceptual model of the hierarchy of fluvial forms within channel-belts of gravelly braided rivers, and a quantitative model for the corresponding scales of heterogeneity within the stratal architecture (e.g. [Lunt et al (2004) Sedimentology, 51 (3), 377]). In related work, a three-dimensional digital model was developed which represents these scales of fluvial architecture, the associated spatial distribution of permeability, and the connectivity of high-permeability pathways across the different scales of the stratal hierarchy [Ramanathan et al, (2010) Water Resour. Res., 46, W04515; Guin et al, (2010) Water Resour. Res., 46, W04516]. In the present work we numerically examine three-phase fluid flow (water-oil-gas) incorporating the multi-scale model for reservoir heterogeneity, spanning scales from 10^-1 to 10^3 meters. A comparison with flow in a reservoir of homogeneous permeability shows essentially different flow dynamics.

  2. Doing It Right: 366 answers to computing questions you didn't know you had

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herring, Stuart Davis

    Slides include information on: history; version control; version control: branches; version control: Git; releases; requirements; readability; readability: control flow; global variables; architecture; architecture: redundancy; processes; input/output; Unix; etc.

  3. Report on architecture description for the INFLO prototype.

    DOT National Transportation Integrated Search

    2014-01-01

    This report documents the Architecture Description for the implementation of the Intelligent Network Flow Optimization (INFLO) Prototype bundle within the Dynamic Mobility Applications (DMA) portion of the Connected Vehicle Program. The intent is to ...

  4. Drafting. Advanced Print Reading--Electrical.

    ERIC Educational Resources Information Center

    Oregon State Dept. of Education, Salem.

    This document is a workbook for drafting students learning advanced print reading for electricity applications. The workbook contains seven units covering the following material: architectural working drawings; architectural symbols and dimensions; basic architectural electrical symbols; wiring symbols; riser diagrams; schematic diagrams; and…

  5. Modelling of hydrothermal fluid flow and structural architecture in an extensional basin, Ngakuru Graben, Taupo Rift, New Zealand

    NASA Astrophysics Data System (ADS)

    Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.

    2018-05-01

    Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. 
Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.

  6. Determining skeletal muscle architecture with Laplacian simulations: a comparison with diffusion tensor imaging.

    PubMed

    Handsfield, Geoffrey G; Bolsterlee, Bart; Inouye, Joshua M; Herbert, Robert D; Besier, Thor F; Fernandez, Justin W

    2017-12-01

    Determination of skeletal muscle architecture is important for accurately modeling muscle behavior. Current methods for 3D muscle architecture determination can be costly and time-consuming, making them prohibitive for clinical or modeling applications. Computational approaches such as Laplacian flow simulations can estimate muscle fascicle orientation based on muscle shape and aponeurosis location. The accuracy of this approach is unknown, however, since it has not been validated against other standards for muscle architecture determination. In this study, muscle architectures from the Laplacian approach were compared to those determined from diffusion tensor imaging in eight adult medial gastrocnemius muscles. The datasets were subdivided into training and validation sets, and computational fluid dynamics software was used to conduct Laplacian simulations. In training sets, inputs of muscle geometry, aponeurosis location, and geometric flow guides resulted in good agreement between methods. Application of the method to validation sets showed no significant differences in pennation angle (mean difference [Formula: see text]) or fascicle length (mean difference 0.9 mm). Laplacian simulation was thus effective at predicting gastrocnemius muscle architectures in healthy volunteers using imaging-derived muscle shape and aponeurosis locations. This method may serve as a tool for determining muscle architecture in silico and as a complement to other approaches.
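
    The Laplacian approach compared against DTI above can be sketched compactly: solve Laplace's equation with one aponeurosis held at potential 0 and the other at 1, then take the normalized gradient as the local fascicle direction. A minimal 2-D sketch follows; the grid size, boundary layout, and iteration count are arbitrary illustration choices, not the authors' setup:

```python
import numpy as np

# 2-D muscle slice: potential 0 on the deep aponeurosis (bottom row),
# 1 on the superficial one (top row), zero-flux lateral sides.
ny, nx = 40, 20
phi = np.zeros((ny, nx))
phi[-1, :] = 1.0

for _ in range(5000):  # Jacobi relaxation of Laplace's equation
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2])
    phi[:, 0] = phi[:, 1]    # insulated sides
    phi[:, -1] = phi[:, -2]

# Fascicle directions follow the normalized gradient of the potential.
gy, gx = np.gradient(phi)
norm = np.hypot(gx, gy) + 1e-12
fiber_dirs = np.stack([gx / norm, gy / norm], axis=-1)
# In this flat-boundary toy geometry the fibers come out vertical (zero
# pennation); real muscle shapes and aponeurosis placement bend them.
```

    Realistic pennation angles emerge only when the boundary geometry is taken from imaging, which is exactly the input the paper's method relies on.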

  7. The architecture of tholeiitic lava flows in the Neogene flood basalt piles of eastern Iceland: constraints on the mode of emplacement

    NASA Astrophysics Data System (ADS)

    Oskarsson, B. V.; Riishuus, M. S.

    2012-12-01

    Tholeiites comprise 50-70% of the Neogene lava piles of eastern Iceland and have been described largely as flood basalts erupted from fissures (Walker, 1958). This study covers lava piles in the Greater Reydarfjördur area and examines the large-scale architecture of selected flows and flow groups, as well as their internal structure and textures, with the intention of assessing their mode of emplacement. A range of lava morphologies is described, including: simple (tabular) flows with a'a and rubbly flow tops, simple flows with pahoehoe crust, and compound pahoehoe flows, with simple flows being the most common. Special attention is given here to the still poorly understood simple flows, which are characterized by extensive sheet lobes with individual sheet lengths frequently exceeding 2 km and reaching thicknesses of ~40 m (common aspect ratios <0.01). The sheets in individual flow fields are emplaced side by side with an overlapping contact and are free of tubes. Their internal structure generally comprises an upper vesicular crust with no or minor occurrences of horizontal vesicle zones, a poorly vesicular core, and a thin basal vesicular zone. The normalized core/crust thickness ratios resemble those of modern compound pahoehoe flows in many instances (0.4-0.7), with the thicker flows reaching ratios of 0.9. Flow crusts are either pahoehoe, rubbly or scoriaceous, with torn and partially welded scoria and clinker. Frequently, any given flow morphology is repeated in sequences of three to four flows with direct contacts. Preliminary assessments suggest that simple flows are the product of high and sustained effusion rates from seemingly short-lived fissures. Simple flows with a'a flow tops may comprise the annealed emplacement mode of sheet flows and channeled a'a, in which the flow propagated as a single unit, whereas the brecciated flow top formed by continuous tearing and brecciation, as occurs in channeled lava flowing at high velocity. 
    The absence of a clinkery basal zone supports a fast-moving flow front that inhibited the accumulation of clinker at the base as well as the formation of a rigid crust. Pahoehoe crust and contrasting morphologies within simple flows may represent variations in flowage within the sheets controlled by conditions at the vent or by topography. With one eruption soon followed by the next, the lack of tubes in the existing lava field and high effusion rates may have favored stacking of sheets instead of reactivation of the previous lava flow field. This has implications for evaluating the size and environmental impact of these eruptions. Eruptions of this kind have not yet been observed in modern times, and thus are significant for models of crustal accretion in Iceland and other flood basalt provinces. Reference: Walker, G. P. L., 1958, Geology of the Reydarfjördur area, Eastern Iceland, Quarterly Journal of the Geological Society, 114, 367-391.

  8. Documentation of South Dakota's ITS/CVO data architecture

    DOT National Transportation Integrated Search

    1999-09-15

    This report documents the Intelligent Transportation Systems/Commercial Vehicle Operations (ITS/CVO) data architecture for the State of South Dakota. It details the current state of affairs in terms of CVO business areas, processes, data flow linkage...

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such algorithms are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor with local program memory that communicates with a common global data memory. A new graph-theoretic model, called ATAMM, is presented which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  10. New paradigms in internal architecture design and freeform fabrication of tissue engineering porous scaffolds.

    PubMed

    Yoo, Dongjin

    2012-07-01

    Advanced additive manufacturing (AM) techniques are now being developed to fabricate scaffolds with controlled internal pore architectures in the field of tissue engineering. In general, these techniques use a hybrid method which combines computer-aided design (CAD) with computer-aided manufacturing (CAM) tools to design and fabricate complicated three-dimensional (3D) scaffold models. The mathematical description of micro-architectures along with the macro-structures of the 3D scaffold models is limited by current CAD technologies, as well as by the difficulty of transferring the designed digital models to standard formats for fabrication. To overcome these difficulties, we have developed an efficient internal pore architecture design system based on triply periodic minimal surface (TPMS) unit cell libraries, together with computational methods to assemble TPMS unit cells into an entire scaffold model. In addition, we have developed a process planning technique based on the TPMS internal architecture pattern of unit cells to generate tool paths for freeform fabrication of tissue engineering porous scaffolds. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
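
    As a generic illustration of TPMS-based pore design (a sketch, not the paper's design system), the gyroid surface sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = c can be voxelized directly from its implicit form; the offset c tunes porosity, and periodic tiling assembles unit cells into a larger scaffold block:

```python
import numpy as np

# Implicit gyroid TPMS sampled on one periodic unit cell (period 2*pi).
n = 64
axis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

level = 0.0              # level-set offset; shifting it changes porosity
pore = g < level         # voxels on one side of the surface form the pore network
porosity = pore.mean()   # ~0.5 at level 0, by the gyroid's odd symmetry

# Assembling unit cells into an entire scaffold block is periodic tiling.
scaffold = np.tile(pore, (2, 2, 2))
```

    A voxel model like `scaffold` is one common starting point for slicing into tool paths, though production pipelines work with surface or CAD representations rather than raw voxels.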

  11. Power optimization of digital baseband WCDMA receiver components on algorithmic and architectural level

    NASA Astrophysics Data System (ADS)

    Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.

    2008-05-01

    High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption, with the focus on the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models with regard to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation, described both in SystemC and VHDL and targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters, such as filter size and oversampling ratio, to minimize the power consumption while maintaining the required performance. 
    The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow. Thereby, algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm with higher peak complexity and power consumption but greater flexibility showed lower consumption in normal operating modes than the algorithm optimized for peak performance.
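
    The complexity gap behind the LMS-versus-RLS comparison above is that an LMS update costs O(N) multiply-accumulates per sample for N taps, whereas RLS-type (Levinson) updates cost O(N^2). A toy BPSK equalizer sketch of the LMS side; the channel, step size, and tap count are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy baseband link: BPSK training symbols through a 2-tap ISI channel.
n, taps = 4000, 7
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([1.0, 0.4])                     # hypothetical, minimum-phase
received = np.convolve(symbols, channel)[:n]
received += 0.01 * rng.standard_normal(n)

# LMS adaptation: O(taps) multiply-accumulates per sample.
w = np.zeros(taps)
mu = 0.05
for k in range(taps - 1, n):
    x = received[k - taps + 1:k + 1][::-1]         # regressor, newest first
    e = symbols[k] - w @ x                         # training error
    w += mu * e * x

# Hard decisions on the last 1000 symbols, after convergence.
errors = sum(np.sign(w @ received[k - taps + 1:k + 1][::-1]) != symbols[k]
             for k in range(n - 1000, n))
ber = errors / 1000.0
```

    The tradeoff the abstract explores is that RLS converges in far fewer samples at a much higher per-sample cost, which is why a power-aware design must weigh both against the channel's rate of change.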

  12. What Did It Look Like Then? Eighteenth Century Architectural Elements.

    ERIC Educational Resources Information Center

    Taylor, Joshua, Jr.

    Designed primarily for use in the intermediate grades, the teaching unit provides 11 lessons and related activities for teaching students to look at colonial architectural elements as a means of learning about 18th century lifestyles. Although the unit relies upon resources available in Alexandria and Arlington, Virginia, other 18th century cities…

  13. Modular, Cost-Effective, Extensible Avionics Architecture for Secure, Mobile Communications

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2006-01-01

    Current onboard communication architectures are based upon an all-in-one communications management unit. This unit and its associated radio systems have typically been designed as one-off, proprietary systems. As such, they lack flexibility and cannot adapt easily to new technology, new communication protocols, and new communication links. This paper describes the current avionics communication architecture and provides a historical perspective on the evolution of this system. A new onboard architecture is proposed that allows full use of commercial off-the-shelf technologies, integrated in a modular approach, thereby enabling a flexible, cost-effective and fully deployable design that can take advantage of ongoing advances in the computer, cryptography, and telecommunications industries.

  14. Modular, Cost-Effective, Extensible Avionics Architecture for Secure, Mobile Communications

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    Current onboard communication architectures are based upon an all-in-one communications management unit. This unit and its associated radio systems have typically been designed as one-off, proprietary systems. As such, they lack flexibility and cannot adapt easily to new technology, new communication protocols, and new communication links. This paper describes the current avionics communication architecture and provides a historical perspective on the evolution of this system. A new onboard architecture is proposed that allows full use of commercial off-the-shelf technologies, integrated in a modular approach, thereby enabling a flexible, cost-effective and fully deployable design that can take advantage of ongoing advances in the computer, cryptography, and telecommunications industries.

  15. Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.

    2016-03-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high-luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which must be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as the common interface between the sub-detector electronic systems, the computing system, and the trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (GigaBit Transceiver) is used for detector data payload transmission and as a fixed-latency path for trigger distribution between the CRU and the detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE and discuss the different interfaces, along with the firmware design and implementation of the CRU on the LHCb PCIe40 board.

  16. Architected squirt-flow materials for energy dissipation

    NASA Astrophysics Data System (ADS)

    Cohen, Tal; Kurzeja, Patrick; Bertoldi, Katia

    2017-12-01

    In the present study we explore material architectures that lead to enhanced dissipation properties by taking advantage of squirt-flow - a local flow mechanism triggered by heterogeneities at the pore level. While squirt-flow is a known dominant source of dissipation and seismic attenuation in fluid saturated geological materials, we study its untapped potential to be incorporated in highly deformable elastic materials with embedded fluid-filled cavities for future engineering applications. An analytical investigation, that isolates the squirt-flow mechanism from other potential dissipation mechanisms and considers an idealized setting, predicts high theoretical levels of dissipation achievable by squirt-flow and establishes a set of guidelines for optimal dissipation design. Particular architectures are then investigated via numerical simulations showing that a careful design of the internal voids can lead to an increase of dissipation levels by an order of magnitude, compared with equivalent homogeneous void distributions. Therefore, we suggest squirt-flow as a promising mechanism to be incorporated in future architected materials to effectively and reversibly dissipate energy.

  17. Simulating Hydrologic Flow and Reactive Transport with PFLOTRAN and PETSc on Emerging Fine-Grained Parallel Computer Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Rupp, K.; Smith, B. F.; Brown, J.; Knepley, M.; Zhang, H.; Adams, M.; Hammond, G. E.

    2017-12-01

    As the high-performance computing community pushes towards the exascale horizon, power and heat considerations have driven the increasing importance and prevalence of fine-grained parallelism in new computer architectures. High-performance computing centers have become increasingly reliant on GPGPU accelerators and "manycore" processors such as the Intel Xeon Phi line, and 512-bit SIMD registers have even been introduced in the latest generation of Intel's mainstream Xeon server processors. The high degree of fine-grained parallelism and more complicated memory hierarchy considerations of such "manycore" processors present several challenges to existing scientific software. Here, we consider how the massively parallel, open-source hydrologic flow and reactive transport code PFLOTRAN - and the underlying Portable, Extensible Toolkit for Scientific Computation (PETSc) library on which it is built - can best take advantage of such architectures. We will discuss some key features of these novel architectures and our code optimizations and algorithmic developments targeted at them, and present experiences drawn from working with a wide range of PFLOTRAN benchmark problems on these architectures.

  18. Sun/earth: alternative energy design for architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, R.L.

    1983-01-01

    A survey of architecture and its relation to the natural environment is presented. A holistic approach to design and construction is presented that reduces inflation, creates a more healthful and vitalizing environment, deploys capital more effectively, increases savings in residential and commercial architecture and construction, and increases cash flow by reducing money spent on utilities. Holistic design also creates a cohesive urban texture.

  19. 38 CFR 36.4361 - Acceptable ownership arrangements and documentation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... condominium, including building types, architectural style and the size of the units for those phases of the..., building types, architectural style and size of the units, etc. of these phases. However, the minimum... elements. (See § 36.4864(a)(6).) (Authority: 38 U.S.C. 3703(c)(1), 3710(a)(6)) (The Office of Management...

  20. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  1. Implementation Recommendations for MOSAIC: A Workflow Architecture for Analytic Enrichment. Analysis and Recommendations for the Implementation of a Cohesive Method for Orchestrating Analytics in a Distributed Model

    DTIC Science & Technology

    2011-02-01

    Table-of-contents excerpt: Process Architecture Technology Analysis: Executive (p. 15); UIMA as Executive (p. 44); A.4: Flow Code in UIMA (p. 46); UIMA (p. 57); E.2 …

  2. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. The micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of the micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely the rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. Analytical solutions and numerical results were found to show very good agreement, particularly for smaller values of apparent density. The elastic moduli predicted by the analytical and numerical models were also in very good agreement with experimental observations. While in excellent agreement with each other, the analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell approaches the octahedron (or truncated cube) and cube unit cells, respectively. 
    For those limits, the analytical solutions presented here were found to approach the analytical solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Identification of emergent off-nominal operational requirements during conceptual architecting of the more electric aircraft

    NASA Astrophysics Data System (ADS)

    Armstrong, Michael James

    Increases in power demands and changes in the design practices of original equipment manufacturers have led to a new paradigm in vehicle systems definition. The development of unique power systems architectures is of increasing importance to overall platform feasibility and must be pursued early in the aircraft design process. Many vehicle systems architecture trades must be conducted concurrently with platform definition. With the increased complexity introduced during conceptual design, accurate predictions of unit-level sizing requirements must be made. Architecture-specific emergent requirements must be identified which arise from the complex integrated effect of unit behaviors. Off-nominal operating scenarios present sizing-critical requirements to the aircraft vehicle systems. These requirements are architecture-specific and emergent. Standard heuristically defined failure mitigation is sufficient for sizing traditional and evolutionary architectures. However, architecture concepts which vary significantly in structure and composition require that unique failure mitigation strategies be defined for accurate estimation of unit-level requirements. Identifying these off-nominal emergent operational requirements requires extensions to traditional safety and reliability tools and the systematic identification of optimal performance degradation strategies. The discrete operational constraints posed by traditional Functional Hazard Assessment (FHA) are replaced by continuous relationships between function loss and operational hazard. These relationships provide the objective function for hazard minimization. Load shedding optimization is performed for all statistically significant failures by varying the allocation of functional capability throughout the vehicle systems architecture. 
Expressing hazards, and thereby reliability requirements, as continuous relationships with the magnitude and duration of functional failure requires augmentations to the traditional means for system safety assessment (SSA). The traditional two-state, discrete system reliability assessment proves insufficient. Reliability is, therefore, handled in an analog fashion: as a function of magnitude of failure and failure duration. A series of metrics are introduced which characterize system performance in terms of analog hazard probabilities. These include analog and cumulative system and functional risk, hazard correlation, and extensions to the traditional component importance metrics. Continuous FHA, load shedding optimization, and analog SSA constitute the SONOMA process (Systematic Off-Nominal Requirements Analysis). Analog system safety metrics inform both architecture optimization (changes in unit-level capability and reliability) and architecture augmentation (changes in architecture structure and composition). This process was applied to two vehicle systems concepts (conventional and 'more-electric') in terms of loss/hazard relationships with varying degrees of fidelity. Application of this process shows that the traditional assumptions regarding the structure of the function loss vs. hazard relationship apply undue design bias to functions and components during exploratory design. This bias is illustrated in terms of inaccurate estimations of system- and function-level risk and unit-level importance. It was also shown that off-nominal emergent requirements must be defined specific to each architecture concept. Quantitative comparisons of architecture-specific off-nominal performance were obtained which provide evidence of the need for accurate definition of load shedding strategies during architecture exploratory design.
Formally expressing performance degradation strategies in terms of the minimization of a continuous hazard space enhances the system architect's ability to accurately predict sizing-critical emergent requirements concurrent to architecture definition. Furthermore, the methods and frameworks generated here provide a structured and flexible means for eliciting these architecture-specific requirements during the performance of architecture trades.
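The load shedding optimization described above can be sketched as a continuous hazard minimization. Everything below (function names, hazard curves, power figures, and the greedy strategy) is invented for illustration and is not drawn from the dissertation:

```python
# Hypothetical sketch of load shedding as continuous hazard minimization,
# in the spirit of the SONOMA process: each aircraft function has a
# continuous hazard curve over its fraction of functional loss, and load
# is shed where the marginal hazard increase is smallest.

def shed_loads(functions, deficit_kw, step_kw=0.1):
    """Greedily shed load from the function whose next increment of
    functional loss adds the least hazard, until the deficit is covered."""
    shed = {name: 0.0 for name in functions}
    remaining = deficit_kw
    while remaining > 1e-9:
        def marginal(name):
            f = functions[name]
            frac = shed[name] / f["demand_kw"]
            frac_next = min(1.0, (shed[name] + step_kw) / f["demand_kw"])
            return f["hazard"](frac_next) - f["hazard"](frac)
        candidates = [n for n in functions
                      if shed[n] < functions[n]["demand_kw"]]
        if not candidates:
            raise ValueError("cannot cover deficit")
        best = min(candidates, key=marginal)
        amount = min(step_kw, remaining,
                     functions[best]["demand_kw"] - shed[best])
        shed[best] += amount
        remaining -= amount
    return shed

# Invented example: losing galley power is benign (linear, shallow hazard);
# losing avionics power is critical (steep quadratic hazard).
functions = {
    "galley":   {"demand_kw": 5.0, "hazard": lambda x: 0.1 * x},
    "avionics": {"demand_kw": 2.0, "hazard": lambda x: 100.0 * x**2},
}
plan = shed_loads(functions, deficit_kw=3.0)
```

With these invented curves the entire 3 kW deficit is shed from the galley, illustrating how a continuous loss/hazard relationship, rather than a discrete FHA classification, drives the allocation.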

  4. Immersive Environments: Using Flow and Sound to Blur Inhabitant and Surroundings

    NASA Astrophysics Data System (ADS)

    Laverty, Luke

    Following in the footsteps of motif-reviving, aesthetically-focused Postmodern and deconstructivist architecture, purely computer-generated formalist contemporary architecture (i.e. blobitecture) has been reduced to vast, empty sculptural, and therefore purely ocularcentric, gestures for their own sake. As these forms take precedence over any deliberate relation to the people inhabiting them beyond scaleless visual stimulation, they become separated from and hostile toward their inhabitants; a boundary appears. This thesis calls for a reintroduction of human-centered design beyond Modern functionalism and ergonomics and Postmodern form and metaphor into architecture by exploring ecological psychology (specifically how one becomes attached to objects) and phenomenology (specifically sound) in an attempt to reach a contemporary human scale using the technology of today: the physiological mind. Psychologist Dr. Mihaly Csikszentmihalyi's concept of flow---when one becomes so mentally immersed within the current activity and immediate surroundings that the boundary between inhabitant and environment becomes transparent through a form of trance---is the embodiment of this thesis' goal, but it is limited to only specific moments throughout the day and typically studied without regard to the environment. Physiologically, the area within the brain---the medial prefrontal cortex---stimulated during flow experiences is also stimulated by the synthesis of sound, memory, and emotion. By exploiting sound (a sense not typically focused on within phenomenology) as a form of constant nuance within the everyday productive dissonance, the engagement and complete concentration on one's own interpretation of this sensory input affords flow experiences and, therefore, a blurred boundary with one's environment. This thesis aims to answer the question: How does the built environment embody flow?
The above concept will be illustrated within a ubiquitous building type---the everyday housing tower---in the form of a live-work vertical artist commune in New York City---the antithesis of intimate, human architectural environments---coupled with the design of a sound-sensory experiential walk through the surrounding blurred neighborhood boundaries, in an attempt to exploit and create an environment one becomes absorbed within and feels comfortable enough in to experience flow. To do so, the characteristics of flow lead to the capturing of the senses, interaction, and flexibility. This thesis will explore and exploit how one perceives, interacts with, and becomes attached to a space or artifact when confronted with one; reintroducing humanity into contemporary architecture.

  5. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties encountered on currently available computers is given. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  6. Dual-responsive and Multi-functional Plasmonic Hydrogel Valves and Biomimetic Architectures Formed with Hydrogel and Gold Nanocolloids

    PubMed Central

    Song, Ji Eun; Cho, Eun Chul

    2016-01-01

    We present a straightforward approach with high moldability for producing dual-responsive and multi-functional plasmonic hydrogel valves and biomimetic architectures that reversibly change volumes and colors in response to temperature and ion variations. Heating of a mixture of hybrid colloids (gold nanoparticles assembled on a hydrogel colloid) and hydrogel colloids rapidly induces (within 30 min) the formation of hydrogel architectures resembling mold shapes (cylinder, fish, butterfly). The biomimetic fish and butterfly display reversible changes in volumes and colors with variations of temperature and ionic conditions in aqueous solutions. The cylindrical plasmonic valves installed in flow tubes rapidly control water flow rate in an on-off manner by responding to these stimuli. They also report these changes in terms of their colors. Therefore, the approach presented here might be helpful in developing a new class of biomimetic and flow-control systems where liquid conditions should be visually notified (e.g., glucose or ion concentration changes). PMID:27703195

  7. The Airspace Concepts Evaluation System Architecture and System Plant

    NASA Technical Reports Server (NTRS)

    Windhorst, Robert; Meyn, Larry; Manikonda, Vikram; Carlos, Patrick; Capozzi, Brian

    2006-01-01

    The Airspace Concepts Evaluation System is a simulation of the National Airspace System. It includes models of flights, airports, airspaces, air traffic control, traffic flow management, and airline operation centers operating throughout the United States. It is used to predict system delays in response to future capacity and demand scenarios and to perform benefits assessments of current and future airspace technologies and operational concepts. Facilitation of these studies requires that the simulation architecture support plug and play of different air traffic control, traffic flow management, and airline operation center models and multi-fidelity modeling of flights, airports, and airspaces. The simulation is divided into two parts that are named, borrowing from classical control theory terminology, control and plant. The control consists of air traffic control, traffic flow management, and airline operation center models, and the plant consists of flight, airport, and airspace models. The plant can run open loop, in the absence of the control. However, undesired effects, such as conflicts and overcongestion in the airspaces and airports, can occur. Different controls are applied, "plug and played", to the plant. A particular control is evaluated by analyzing how well it managed conflicts and congestion. Furthermore, the terminal area plants consist of models of airports and terminal airspaces. Each model consists of a set of nodes and links which are connected by the user to form a network. Nodes model runways, fixes, taxi intersections, gates, and/or other points of interest, and links model taxiways, departure paths, and arrival paths. Metering, flow distribution, and sequencing functions can be applied at nodes. Models of differing fidelity for how a flight transits a link can be used. The fidelity of the model can be adjusted by the user by either changing the complexity of the node/link network or the way that the links model how flights transit from one node to the other.
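The node/link plant described above can be sketched roughly as follows. The classes, the metering rate, and the transit times are hypothetical illustrations, not taken from the ACES implementation:

```python
# Minimal sketch of a node/link terminal-area plant: links carry a
# transit-time model, and a metered node imposes minimum spacing
# between successive releases (here, one flight per 2 time units).

class Node:
    def __init__(self, name, meter_rate=None):
        self.name = name
        self.meter_rate = meter_rate   # flights per time unit; None = unmetered

class Link:
    def __init__(self, transit_time):
        self.transit_time = transit_time

def simulate(flights, links):
    """Advance each flight along its route; links add transit time and
    metered nodes queue flights to enforce the release rate."""
    arrivals, next_free = {}, {}
    for name, route, start in flights:
        t = start
        for i in range(len(route) - 1):
            node = route[i]
            if node.meter_rate is not None:
                t = max(t, next_free.get(node.name, t))
                next_free[node.name] = t + 1.0 / node.meter_rate
            t += links[(node.name, route[i + 1].name)].transit_time
        arrivals[name] = t
    return arrivals

runway = Node("runway", meter_rate=0.5)   # one departure per 2 time units
fix = Node("fix")
links = {("runway", "fix"): Link(transit_time=5.0)}
arrivals = simulate([("F1", [runway, fix], 0.0),
                     ("F2", [runway, fix], 0.0)], links)
```

In this toy network the second flight absorbs a 2-unit metering delay at the runway node before its 5-unit transit, which is the kind of aggregate delay behavior the plant is built to expose.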

  8. Variations in eruptive style and depositional processes of Neoproterozoic terrestrial volcano-sedimentary successions in the Hamid area, North Eastern Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Khalaf, Ezz El Din Abdel Hakim

    2013-07-01

    Two contrasting Neoproterozoic volcano-sedimentary successions of ca. 600 m thickness were recognized in the Hamid area, Northeastern Desert, Egypt. A lower Hamid succession consists of alluvial sediments, coherent lava flows, pyroclastic fall and flow deposits. An upper Hamid succession includes deposits from pyroclastic density currents, sills, and dykes. Sedimentological studies at different scales in the Hamid area show a very complex interaction of fluvial, eruptive, and gravitational processes in time and space and thus provide meaningful insights into the evolution of the rift sedimentary environments and the identification of different stages of effusive activity, explosive activity, and relative quiescence, determining syn-eruptive and inter-eruptive rock units. The volcano-sedimentary deposits of the study area can be ascribed to 14 facies and 7 facies associations: (1) basin-border alluvial fan, (2) mixed sandy fluvial braid plain, (3) bed-load-dominated ephemeral lake, (4) lava flows and volcaniclastics, (5) pyroclastic fall deposits, (6) phreatomagmatic volcanic deposits, and (7) pyroclastic density current deposits. These systems are in part coeval and in part succeed each other, forming five phases of basin evolution: (i) an opening phase including alluvial fan and valley flooding together with a lacustrine period, (ii) a phase of effusive and explosive volcanism (pulsatory phase), (iii) a phase of predominantly explosive activity and deposition from base surges (collapsing phase), and (iv) a phase of caldera eruption and ignimbrite-forming processes (climactic phase). The facies architectures record a change in volcanic activity from mainly phreatomagmatic eruptions, producing large volumes of lava flows and pyroclastics (pulsatory and collapsing phase), to highly explosive, pumice-rich plinian-type pyroclastic density current deposits (climactic phase).
The Hamid area is a small-volume volcano; however, its magma compositions, eruption styles, and inter-eruptive breaks suggest that it closely resembles the volcanic architecture commonly associated with large, composite volcanoes.

  9. Architecture for improved mass transport and system performance in redox flow batteries

    NASA Astrophysics Data System (ADS)

    Houser, Jacob; Pezeshki, Alan; Clement, Jason T.; Aaron, Douglas; Mench, Matthew M.

    2017-05-01

    In this work, electrochemical performance and parasitic losses are combined in an overall system-level efficiency metric for a high performance, all-vanadium redox flow battery. It was found that pressure drop and parasitic pumping losses are relatively negligible for high performance cells, i.e., those capable of operating at a high current density while at a low flow rate. Through this finding, the Equal Path Length (EPL) flow field architecture was proposed and evaluated. This design has superior mass transport characteristics in comparison with the standard serpentine and interdigitated designs at the expense of increased pressure drop. An Aspect Ratio (AR) design is discussed and evaluated, which demonstrates decreased pressure drop compared to the EPL design, while maintaining similar electrochemical performance under most conditions. This AR design is capable of leading to improved system energy efficiency for flow batteries of all chemistries.
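The system-level efficiency metric combining electrochemical performance with parasitic pumping losses can be illustrated with a back-of-the-envelope sketch. All numbers below are invented for illustration, not measurements from the paper:

```python
# Sketch of a system-level round-trip efficiency for a flow battery:
# electrochemical round-trip efficiency debited by the parasitic pumping
# energy spent on both the charge and discharge half-cycles.

def system_efficiency(e_charge_wh, e_discharge_wh, pump_w, hours):
    """Net energy out over gross energy in, with pump energy subtracted
    from discharge and added to charge (same pump power assumed on both
    half-cycles, for simplicity)."""
    pump_wh = pump_w * hours
    return (e_discharge_wh - pump_wh) / (e_charge_wh + pump_wh)

# A cell running at low flow rate: pumping is nearly negligible, so the
# system efficiency stays close to the 80% electrochemical figure.
eta = system_efficiency(e_charge_wh=1000.0, e_discharge_wh=800.0,
                        pump_w=2.0, hours=2.0)
```

Raising `pump_w` in this toy model quickly erodes `eta`, which is the trade the EPL and AR flow-field designs navigate: better mass transport versus the pressure drop that drives pumping power.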

  10. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link performance analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also present from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the unit in accordance with the user's needs.
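Deriving a real-time BER estimate from errors in the fixed frame-sync pattern, as the abstract describes, might look like the following sketch. The sync word value and the 32-bit width are arbitrary assumptions for illustration:

```python
# Sketch of derived real-time BER: during a live data flow only the known
# frame-sync pattern is predictable, so bit errors observed in it are used
# as a sample of the link's overall error rate.

def derived_ber(sync_words, expected, word_bits=32):
    """Estimate BER from bit mismatches between received sync words and
    the known pattern; only the sync bits are observable in real time."""
    errors = sum(bin(w ^ expected).count("1") for w in sync_words)
    return errors / (len(sync_words) * word_bits)

# Four received sync words, one with a single flipped bit: 1 error in
# 128 observed bits.
received = [0xFAF32000, 0xFAF32000, 0xFAF32001, 0xFAF32000]
ber = derived_ber(received, expected=0xFAF32000)
```

The estimate is statistical: it assumes errors fall uniformly across the frame, so sync-bit errors are representative of data-bit errors.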

  11. A Student Experiment Method for Learning the Basics of Embedded Software Technologies Including Hardware/Software Co-design

    NASA Astrophysics Data System (ADS)

    Kambe, Hidetoshi; Mitsui, Hiroyasu; Endo, Satoshi; Koizumi, Hisao

    The applications of embedded system technologies have spread widely in various products, such as home appliances, cellular phones, automobiles, industrial machines and so on. Due to intensified competition, embedded software has expanded its role in realizing sophisticated functions, and new development methods like hardware/software (HW/SW) co-design for uniting HW and SW development have been researched. The shortfall of embedded SW engineers in Japan was estimated to be approximately 99,000 in 2006. Embedded SW engineers should understand HW technologies and system architecture design as well as SW technologies. However, few universities offer this kind of education systematically. We propose a student experiment method for learning the basics of embedded system development, which includes a set of experiments for developing embedded SW, developing embedded HW and experiencing HW/SW co-design. The co-design experiment helps students learn about the basics of embedded system architecture design and the flow of designing actual HW and SW modules. We developed these experiments and evaluated them.

  12. Physicochemical and biological technologies for future exploration missions

    NASA Astrophysics Data System (ADS)

    Belz, S.; Buchert, M.; Bretschneider, J.; Nathanson, E.; Fasoulas, S.

    2014-08-01

    Life Support Systems (LSS) are essential for human spaceflight. They are the key element for humans to survive, to live and to work in space. Ambitious goals of human space exploration in the next 40 years, like a permanently crewed surface habitat on the Moon or a manned mission to Mars, require technologies which allow for a reduction of system and resupply mass. Enhancements of existing technologies, new technological developments and synergetic components integration help to close the oxygen, water and carbon loops. In order to design the most efficient LSS architecture for a given mission scenario, it is important to follow a dedicated design process: definition of requirements, selection of candidate technologies, development of possible LSS architectures, characterisation of the architectures by system drivers, and evaluation of the architectures. This paper focuses on the approach of a synergetic integration of Polymer Electrolyte Membrane Fuel Cells (PEFC) and microalgae cultivated in photobioreactors (PBR). LSS architectures and their benefits for selected mission scenarios are demonstrated. Experiments on critical processes and interfaces were conducted and result in engineering models for a PEFC and a PBR system which fulfil the requirements of a synergetic integrative environment. The PEFC system (about 1 kW) can be operated with cabin air enriched by stored or biologically generated oxygen instead of pure oxygen. This offers further advantages with regard to thermal control, as high oxygen concentrations lead to dense heat production. The PBR system consists of an illuminated cultivation chamber (about 5 l), a nutrient supply, and harvesting and analytics units. The chamber in particular enables microgravity-adapted cultivation of microalgae. However, the peripheral units still have to be adapted in order to allow for a continuous and automated cultivation and harvesting.
These automation processes will be tested and evaluated by means of a parabolic flight experiment. Both engineering models are being specified in dimensions, components, mass and energy flows. They will serve as a platform for getting operational experience, reliability data and identifying technical problems before the next step is to be realized: in-orbit verification in a spaceflight experiment.

  13. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1997-08-12

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.
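The tag-memory "fire" logic the patent describes can be mimicked in a short sketch. The class and field names here are hypothetical, chosen only to mirror the four commonly addressed memories and the per-parameter status bits:

```python
# Sketch of one dataflow cell: parameter slots A/B, an opcode, a target
# address, and tag bits ("present" and "reuse") per parameter. The cell
# fires once both parameters are present; non-reused parameters are
# consumed on firing.

class DataflowCell:
    def __init__(self, opcode, target):
        self.opcode, self.target = opcode, target
        self.params = {"A": None, "B": None}
        self.present = {"A": False, "B": False}   # "parameter stored" bit
        self.reuse = {"A": False, "B": False}     # "sticky parameter" bit

    def store(self, slot, value, reuse=False):
        self.params[slot] = value
        self.present[slot] = True
        self.reuse[slot] = reuse
        return self.ready()          # analogous to the R VALID fire signal

    def ready(self):
        return all(self.present.values())

    def fire(self):
        assert self.ready()
        a, b = self.params["A"], self.params["B"]
        result = {"ADD": a + b, "MUL": a * b}[self.opcode]
        for slot in ("A", "B"):      # consume parameters not marked reuse
            if not self.reuse[slot]:
                self.present[slot] = False
        return self.target, result

cell = DataflowCell("ADD", target=0x40)
cell.store("A", 3, reuse=True)       # A arrives first; cell not yet ready
fired = cell.store("B", 4)           # B completes the pair: fire signal
target, result = cell.fire()         # A persists for reuse, B is consumed
```

The point of the design is visible in the last lines: arrival of the second operand, not a program counter, is what triggers execution.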

  14. Data flow machine for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1988-07-22

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.

  15. Data flow machine for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor G.

    1995-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  16. Direct match data flow machine apparatus and process for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor Gerald

    1997-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  17. Direct match data flow memory for data driven computing

    DOEpatents

    Davidson, George S.; Grafe, Victor Gerald

    1997-01-01

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

  18. Direct match data flow memory for data driven computing

    DOEpatents

    Davidson, G.S.; Grafe, V.G.

    1997-10-07

    A data flow computer and method of computing are disclosed which utilize a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a "fire" signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor. 11 figs.

  19. ITS system specification. Appendix C, data flows by function for ITS services

    DOT National Transportation Integrated Search

    1997-01-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. An architecture is a framework that defines how multiple ITS Components interrelate and contribute to the overall I...

  20. Software defined network architecture based research on load balancing strategy

    NASA Astrophysics Data System (ADS)

    You, Xiaoqian; Wu, Yang

    2018-05-01

    As a new type of network architecture, software defined networking is built on the key idea of separating the network's control plane from its transmission plane, so that the network can be managed and controlled centrally; in addition, programmable interfaces are opened between the control layer and the data layer, so as to achieve programmable control of the network. Considering that traditional network data flow transmission calculates only the single shortest route, ignoring the congestion and resource consumption caused by excessive link load, this article proposes a link-load-based QoS guarantee system for streaming media traffic that divides network traffic into ordinary data flows and QoS flows. The controller supervises link load, rapidly calculates a reasonable route, and issues the flow table to the switches to accomplish rapid data transmission. In addition, a simulation platform is established, and the optimized result is obtained through simulation experiments.
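The controller-side idea of routing QoS flows by link load rather than hop count alone can be sketched as a weighted shortest-path computation. The topology, the load values, and the cost formula below are illustrative assumptions, not the authors' algorithm:

```python
# Sketch of load-aware routing in an SDN controller: Dijkstra over a
# per-link cost of 1 + alpha * utilization, so congested links are
# avoided even when they lie on the hop-shortest route.
import heapq

def load_aware_path(graph, loads, src, dst, alpha=10.0):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v in graph[u]:
            cost = 1.0 + alpha * loads.get((u, v), 0.0)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst               # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# The 2-hop route s-a-t is heavily loaded; the 3-hop route s-b-c-t wins.
graph = {"s": ["a", "b"], "a": ["t"], "b": ["c"], "c": ["t"], "t": []}
loads = {("s", "a"): 0.9, ("a", "t"): 0.9}
path = load_aware_path(graph, loads, "s", "t")
```

The resulting flow-table entries would then be pushed to the switches along `path`; ordinary flows could keep the plain shortest path while QoS flows get the load-aware one.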

  1. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
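A minimal place/transition Petri net of the kind the paper applies to data/control flow can be sketched as follows. The two-token example (data items contending for one processor) is an invented illustration of resource contention in a data-driven architecture:

```python
# Minimal Petri-net sketch: a transition is enabled when each input place
# holds enough tokens; firing consumes input tokens and produces output
# tokens. Markings are plain dicts from place name to token count.

def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Two data tokens and one processor token: the "run" transition needs
# both a data item and the processor, and returns the processor when done,
# so tasks execute one at a time.
marking = {"data": 2, "cpu": 1, "done": 0}
t_run = {"in": {"data": 1, "cpu": 1}, "out": {"done": 1, "cpu": 1}}

while enabled(marking, t_run):
    marking = fire(marking, t_run)
```

After the loop both data tokens have moved to "done" and the processor token is free again; with several processor tokens the same net would model the concurrent, data-driven execution the paper analyzes.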

  2. 26 CFR 1.924(a)-1T - Temporary regulations; definition of foreign trading gross receipts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) Engineering and architectural services—(1) In general. Foreign trading gross receipts of a FSC include gross receipts from engineering services (as described in paragraph (e)(5) of this section) or architectural... without the United States. (2) Services included. Engineering and architectural services include...

  3. 26 CFR 1.924(a)-1T - Temporary regulations; definition of foreign trading gross receipts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) Engineering and architectural services—(1) In general. Foreign trading gross receipts of a FSC include gross receipts from engineering services (as described in paragraph (e)(5) of this section) or architectural... without the United States. (2) Services included. Engineering and architectural services include...

  4. 26 CFR 1.924(a)-1T - Temporary regulations; definition of foreign trading gross receipts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) Engineering and architectural services—(1) In general. Foreign trading gross receipts of a FSC include gross receipts from engineering services (as described in paragraph (e)(5) of this section) or architectural... without the United States. (2) Services included. Engineering and architectural services include...

  5. 26 CFR 1.924(a)-1T - Temporary regulations; definition of foreign trading gross receipts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Engineering and architectural services—(1) In general. Foreign trading gross receipts of a FSC include gross receipts from engineering services (as described in paragraph (e)(5) of this section) or architectural... without the United States. (2) Services included. Engineering and architectural services include...

  6. Predefined three tier business intelligence architecture in healthcare enterprise.

    PubMed

    Wang, Meimei

    2013-04-01

Business Intelligence (BI) has attracted extensive attention and widespread use in gathering, processing, and analyzing data and in providing enterprise users a methodology for making decisions. Departing from traditional BI architecture, this paper proposes a new BI architecture, a Top-Down Scalable BI architecture with a defining mechanism for enterprise decision-making solutions, and aims at establishing a rapid, consistent, and scalable BI mechanism that supports multiple applications on multiple platforms. The two opposite information flows in our BI architecture offer the merits of retaining a high-level organizational perspective while making full use of existing resources. We also introduce the avg-bed-waiting-time factor to evaluate hospital care capacity.

  7. How does the connectivity of open-framework conglomerates within multi-scale hierarchical fluvial architecture affect oil-sweep efficiency in waterflooding?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershenzon, Naum I.; Soltanian, Mohamad Reza; Ritzi, Robert W.

Understanding multi-phase fluid flow and transport processes within aquifers, candidate reservoirs for CO2 sequestration, and petroleum reservoirs requires understanding a diverse set of geologic properties of the aquifer or reservoir over a wide range of spatial and temporal scales. We focus on multiphase flow dynamics with wetting (e.g., water) and non-wetting (e.g., gas or oil) fluids, with one invading the other. This problem is of general interest in a number of fields and is illustrated here by considering the sweep efficiency of oil during a waterflood. Using a relatively fine-resolution grid throughout a relatively large domain in these simulations, and probing the results with advanced scientific visualization tools (Reservoir Visualization Analysis [RVA]/ParaView software), promotes a better understanding of how smaller-scale features affect the aggregate behavior at larger scales. We studied the effects on oil-sweep efficiency of the proportion, hierarchical organization, and connectivity of high-permeability open-framework conglomerate (OFC) cross-sets within the multi-scale stratal architecture found in fluvial deposits. We further analyzed oil production rate, water breakthrough time, and the spatial and temporal distribution of residual oil saturation. As expected, the effective permeability of the reservoir exhibits large-scale anisotropy created by the organization of OFC cross-sets within unit bars, and the organization of unit bars within compound bars. As a result, oil-sweep efficiency critically depends on the direction of the pressure gradient. However, contrary to expectations, the total amount of trapped oil due to capillary trapping does not depend on the magnitude of the pressure gradient within the examined range. Hence the pressure difference between production and injection wells does not affect sweep efficiency, although the spatial distribution of oil remaining in the reservoir depends on this value. Whether or not clusters of connected OFC span the domain affects only the absolute rate of oil production, not sweep efficiency.

  8. How does the connectivity of open-framework conglomerates within multi-scale hierarchical fluvial architecture affect oil-sweep efficiency in waterflooding?

    DOE PAGES

    Gershenzon, Naum I.; Soltanian, Mohamad Reza; Ritzi, Robert W.; ...

    2015-10-23

Understanding multi-phase fluid flow and transport processes within aquifers, candidate reservoirs for CO2 sequestration, and petroleum reservoirs requires understanding a diverse set of geologic properties of the aquifer or reservoir over a wide range of spatial and temporal scales. We focus on multiphase flow dynamics with wetting (e.g., water) and non-wetting (e.g., gas or oil) fluids, with one invading the other. This problem is of general interest in a number of fields and is illustrated here by considering the sweep efficiency of oil during a waterflood. Using a relatively fine-resolution grid throughout a relatively large domain in these simulations, and probing the results with advanced scientific visualization tools (Reservoir Visualization Analysis [RVA]/ParaView software), promotes a better understanding of how smaller-scale features affect the aggregate behavior at larger scales. We studied the effects on oil-sweep efficiency of the proportion, hierarchical organization, and connectivity of high-permeability open-framework conglomerate (OFC) cross-sets within the multi-scale stratal architecture found in fluvial deposits. We further analyzed oil production rate, water breakthrough time, and the spatial and temporal distribution of residual oil saturation. As expected, the effective permeability of the reservoir exhibits large-scale anisotropy created by the organization of OFC cross-sets within unit bars, and the organization of unit bars within compound bars. As a result, oil-sweep efficiency critically depends on the direction of the pressure gradient. However, contrary to expectations, the total amount of trapped oil due to capillary trapping does not depend on the magnitude of the pressure gradient within the examined range. Hence the pressure difference between production and injection wells does not affect sweep efficiency, although the spatial distribution of oil remaining in the reservoir depends on this value. Whether or not clusters of connected OFC span the domain affects only the absolute rate of oil production, not sweep efficiency.

  9. IR-drop analysis for validating power grids and standard cell architectures in sub-10nm node designs

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Wang, Chenchen; Zeng, Jia; Kye, Jongwook

    2017-03-01

Since chip performance and power are highly dependent on the operating voltage, a robust power distribution network (PDN) is of utmost importance in designs to provide a reliable supply voltage without voltage (IR) drop. However, the rapid increase of parasitic resistance and capacitance (RC) in interconnects makes IR-drop much worse with technology scaling. This paper presents various IR-drop analyses in sub-10nm designs. The major objectives are to validate standard cell architectures, where different sizes of power/ground and metal tracks are evaluated, and to validate the PDN architecture, where different power hook-up approaches are evaluated with IR-drop calculation. To estimate IR-drops in 10nm-and-below technologies, we first prepare physically routed designs from standard cell libraries: we use an open RISC RTL, synthesize the CPU, and apply placement and routing with process design kits (PDKs). Then, static and dynamic IR-drop flows are set up with commercial tools. Using the IR-drop flow, we compare standard cell architectures and analyze impacts on performance, power, and area (PPA) relative to previous technology-node designs. With this IR-drop flow, we can optimize the PDN structure against IR-drop, as well as the choice of standard cell library.
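The static IR-drop the abstract refers to is, at its core, Ohm's law accumulated along the power rail. A minimal one-dimensional sketch, with illustrative resistances and cell currents (not from any PDK or the paper's flow):

```python
# Static IR-drop along a one-dimensional power rail: N cells, each drawing
# a fixed current, fed from one end through equal segment resistances.
# Current peels off at each cell, so upstream segments carry more current
# and contribute larger drops. Values are illustrative, not from a PDK.

def ir_drop_profile(vdd, r_segment, cell_currents):
    """Return the supply voltage seen at each cell along the rail."""
    voltages = []
    v = vdd
    remaining = sum(cell_currents)  # current through the next segment
    for i in cell_currents:
        v -= remaining * r_segment  # Ohm's law drop across this segment
        voltages.append(v)
        remaining -= i              # this cell's current leaves the rail
    return voltages

# 10 cells drawing 1 mA each, 10 mOhm per rail segment, 0.75 V supply.
profile = ir_drop_profile(0.75, 0.010, [0.001] * 10)
worst_drop = 0.75 - min(profile)
print(round(worst_drop, 6))  # worst-case drop, in volts, at the far cell
```

The far end of the rail sees the largest cumulative drop, which is why power hook-up topology (where the rail is fed from) matters as much as segment resistance.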

  10. Stability and performance tradeoffs in bi-lateral telemanipulation

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake

    1989-01-01

Kinesthetic force feedback provides a measurable increase in remote manipulation system performance. However, intensive computation-time requirements or operation under conditions of time delay can cause serious stability problems in control-system design. Here, a simplified linear analysis of this stability problem is presented for the forward-flow generalized architecture, applying the hybrid two-port representation to express the loop gain of the traditional master-slave architecture, which can be subjected to a similar analysis. The hybrid two-port representation is also used to express the effects on the fidelity of manipulation, or feel, of one design approach used to stabilize the forward-flow architecture. The results suggest that, when local force feedback at the slave side is used to reduce manipulator stability problems, a price is paid in telemanipulation fidelity.

  11. Sedimentary architecture and depositional environment of Kudat Formation, Sabah, Malaysia

    NASA Astrophysics Data System (ADS)

    Ghaheri, Samira; Suhaili, Mohd; Sapari, Nasiman; Momeni, Mohammadsadegh

    2017-12-01

The Kudat Formation originated in a deep-marine environment. Three lithofacies associations of deep-marine turbidite channels were identified in three members of the Kudat Formation in the Kudat Peninsula, Sabah, Malaysia. Turbidite and deep-marine architectural elements were described based on detailed sedimentological studies. Four architectural elements were identified from the facies associations and their lithological properties and character: an inner external levee, formed by turbidity flows spilling out of the confinement of the channel belt; lobe sheets, formed during downslope debris flows associated with the levee; channel fill, whose sediments were deposited from high- to low-density currents with differing sediment concentrations; and an overbank terrace, formed by rapid suspension sedimentation. The depositional environment of the Kudat Formation ranges from shelf to deep-marine fan.

  12. Fracturing And Liquid CONvection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-02-29

FALCON has been developed to enable simulation of the tightly coupled fluid-rock behavior in hydrothermal and engineered geothermal system (EGS) reservoirs, targeting the dynamics of fracture stimulation, fluid flow, rock deformation, and heat transport in a single integrated code, with the ultimate goal of providing a tool that can be used to test the viability of EGS in the United States and worldwide. Reliable reservoir performance predictions of EGS systems require accurate and robust modeling of the coupled thermal-hydrological-mechanical processes. Conventionally, these types of problems are solved using operator-splitting methods, usually by coupling a subsurface flow and heat transport simulator with a solid mechanics simulator via input files. FALCON eliminates the need for operator-splitting methods to simulate these systems, and the scalability of the underlying MOOSE architecture allows for simulating these tightly coupled processes at the reservoir scale, allowing for examination of the system as a whole (something the operator-splitting methodologies generally cannot do).

  13. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

The use of the Discrete Element Method (DEM) for industrial civil-engineering applications is currently limited by the computational demands of simulating large numbers of particles. The graphics processing unit (GPU), with its highly parallelized hardware architecture, shows potential to enable the solution of civil-engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment for simulating industrial-scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations were performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial-scale problems on a single portable computer within a day for 30 seconds of process time.
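The paper's GPU solver is not reproduced here, but the contact model at the heart of most DEM codes, a linear spring-dashpot force applied while particles overlap, can be sketched in one dimension with two particles (all parameters are illustrative; a real silo-discharge simulation runs millions of particles with 3-D contact detection):

```python
# One-dimensional DEM sketch: two particles interact through a linear
# spring-dashpot contact force whenever they overlap. Semi-implicit Euler
# time integration. Illustrative parameters, not the paper's solver.

def dem_step(x, v, radius, mass, k, c, dt):
    """Advance two particles one explicit time step."""
    overlap = 2 * radius - abs(x[1] - x[0])
    f = 0.0
    if overlap > 0:  # particles in contact
        rel_v = v[1] - v[0]          # negative while approaching
        f = k * overlap - c * rel_v  # spring pushes apart, dashpot damps
    a = [-f / mass, f / mass]        # equal and opposite accelerations
    v = [v[i] + a[i] * dt for i in range(2)]
    x = [x[i] + v[i] * dt for i in range(2)]
    return x, v

# Two 1 cm radius, 4 g particles closing at a combined 2 m/s.
x, v = [0.0, 0.021], [1.0, -1.0]
for _ in range(2000):
    x, v = dem_step(x, v, radius=0.01, mass=0.004, k=1e4, c=0.5, dt=1e-5)
print(v[0] < 0 < v[1])  # after the collision the particles rebound apart
```

The time step must resolve the contact stiffness (dt well below the contact oscillation period), which is exactly why particle counts in the millions make DEM so computationally demanding.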

  14. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  15. Topological structure and mechanics of glassy polymer networks.

    PubMed

    Elder, Robert M; Sirk, Timothy W

    2017-11-22

The influence of chain-level network architecture (i.e., topology) on mechanics was explored for unentangled polymer networks using a blend of coarse-grained molecular simulations and graph-theoretic concepts. A simple extension of the Watts-Strogatz model is proposed to control the graph properties of the network such that the corresponding physical properties can be studied with simulations. The architecture of polymer networks assembled with a dynamic curing approach was compared with the extended Watts-Strogatz model and found to agree surprisingly well. The final cured structures of the dynamically assembled networks were nearly intermediate between lattice and random connections due to restrictions imposed by the finite length of the chains. Further, the uniaxial stress response, character of the bond breaking, and non-affine displacements of fully cured glassy networks were analyzed as a function of the degree of disorder in the network architecture. It is shown that the architecture strongly affects the network stability, flow stress, onset of bond breaking, and ultimate stress while leaving the modulus and yield point nearly unchanged. The results show that internal restrictions imposed by the network architecture alter the chain-level response through changes to the crosslink dynamics in the flow regime and through the degree of coordinated chain failure at the ultimate stress. The properties considered here are shown to be sensitive to even incremental changes to the architecture and, therefore, the overall network architecture, beyond simple defects, is predicted to be a meaningful physical parameter in the mechanics of glassy polymer networks.
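The authors' extension is not specified in the abstract, but the base Watts-Strogatz construction it builds on, a ring lattice whose edges are each rewired with probability p, interpolating between a regular lattice (p = 0) and a random graph (p = 1), can be sketched as follows (generic model only, not the paper's extension):

```python
# Standard Watts-Strogatz construction: start from a ring lattice in which
# each node links to its k nearest neighbours, then rewire each edge with
# probability p to a random endpoint. Edge count is preserved. This is the
# base model only; the paper's extension for polymers is not reproduced.
import random

def watts_strogatz(n, k, p, seed=0):
    rng = random.Random(seed)
    edges = set()
    # Ring lattice: connect each node to k // 2 neighbours on each side.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    for u, v in sorted(tuple(sorted(e)) for e in edges):
        if rng.random() < p:
            w = rng.randrange(n)
            # Avoid self-loops and duplicate edges.
            while w == u or frozenset((u, w)) in edges:
                w = rng.randrange(n)
            edges.remove(frozenset((u, v)))
            edges.add(frozenset((u, w)))
    return edges

g = watts_strogatz(n=100, k=4, p=0.1)
print(len(g))  # rewiring preserves the edge count: 100 * 4 / 2 = 200
```

Sweeping p is what lets a disorder parameter of the graph be mapped onto physical properties of the simulated network.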

  16. Designing a low cost bedside workstation for intensive care units.

    PubMed Central

    Michel, A.; Zörb, L.; Dudeck, J.

    1996-01-01

The paper describes the design and implementation of a software architecture for a low-cost bedside workstation for intensive care units. The development is fully integrated into the information infrastructure of the existing hospital information system (HIS) at the University Hospital of Giessen. It provides cost-efficient and reliable access for data entry and review from the HIS database from within patient rooms, even in very space-limited environments. The architecture further supports automatic data input from medical devices. First results from three different intensive care units are reported. PMID:8947771

  17. Standby Power Management Architecture for Deep-Submicron Systems

    DTIC Science & Technology

    2006-05-19

Excerpt (table-of-contents fragments): 5.1 Quark PicoNode System; 5.2 Power Domain Architecture; Quark system protocol stack; Quark system block diagram. ...the implementation of the chip using an industry-standard place-and-route design flow. Lastly, some measurements from the chip are presented.

  18. MYSEA: The Monterey Security Architecture

    DTIC Science & Technology

    2009-01-01

Categories: Security and Protection; Organization and Design. General terms: design, security. Keywords: access controls, authentication, information flow controls... Applicable environments include: military coalitions, agencies and organizations responding to security emergencies, and mandated sharing in business... The network architecture affords users the ability to securely access information across networks at different classifications using standardized...

  19. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process level parallelism, thread level parallelization, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.

  20. Space station needs, attributes and architectural options study

    NASA Technical Reports Server (NTRS)

    1983-01-01

All the candidate Technology Development missions investigated during the space station needs, attributes, and architectural options study are described. All the mission data forms, plus additional information such as cost, drawings, and functional flows generated in support of these missions, are included, together with a computer-generated mission data form.

  1. Managing Parallelism and Resources in Scientific Dataflow Programs

    DTIC Science & Technology

    1990-03-01

1983. [52] K. Hiraki, K. Nishida, S. Sekiguchi, and T. Shimada. Maintenance architecture and its LSI implementation of a dataflow computer with a... Hiraki, and K. Nishida. An architecture of a data flow machine and its evaluation. In Proceedings of CompCon 84, pages 486-490. IEEE, 1984. [84] N

  2. Le cône sous-marin du Nil et son réseau de chenaux profonds : nouveaux résultats (campagne Fanil)The Nile Cone and its channel system: new results after the Fanil cruise

    NASA Astrophysics Data System (ADS)

    Bellaiche, Gilbert; Loncke, Lies; Gaullier, Virginie; Mascle, Jean; Courp, Thierry; Moreau, Alain; Radan, Silviu; Sardou, Olivier

    2001-10-01

The meandering leveed channels of the Nile Cone show clear evidence of avulsions. Their sedimentary architecture is founded on numerous stacked, lens-shaped acoustic units. In the distal areas of the fan, lobe deposits are apparent from multichannel imagery. Huge debris-flow deposits, sometimes associated with pockmarks, are recognized. Mud volcanoes and gas seeps are closely associated with faulting. In the east, a very long north-trending channel, originating from the Egyptian coast, merges with a network of channels very probably originating from the Levantine coast. Both networks discharge into the sedimentary basin located south of Cyprus.

  3. Geohydrologic Framework of the Edwards and Trinity Aquifers, South-Central Texas

    USGS Publications Warehouse

    Blome, Charles D.; Faith, Jason R.; Ozuna, George B.

    2007-01-01

    This five-year USGS project, funded by the National Cooperative Geologic Mapping Program, is using multidisciplinary approaches to reveal the surface and subsurface geologic architecture of two important Texas aquifers: (1) the Edwards aquifer that extends from south of Austin to west of San Antonio and (2) the southern part of the Trinity aquifer in the Texas Hill Country west and south of Austin. The project's principal areas of research include: Geologic Mapping, Geophysical Surveys, Geochronology, Three-dimensional Modeling, and Noble Gas Geochemistry. The Edwards aquifer is one of the most productive carbonate aquifers in the United States. It also has been designated a sole source aquifer by the U.S. Environmental Protection Agency and is the primary source of water for San Antonio, America's eighth largest city. The Trinity aquifer forms the catchment area for the Edwards aquifer and it intercepts some surface flow above the Edwards recharge zone. The Trinity may also contribute to the Edwards water budget by subsurface flow across formation boundaries at considerable depths. Dissolution, karst development, and faulting and fracturing in both aquifers directly control aquifer geometry by compartmentalizing the aquifer and creating unique ground-water flow paths.

  4. Predicting permeability of regular tissue engineering scaffolds: scaling analysis of pore architecture, scaffold length, and fluid flow rate effects.

    PubMed

    Rahbari, A; Montazerian, H; Davoodi, E; Homayoonfar, S

    2017-02-01

The main aim of this research is to numerically obtain the permeability coefficient of cylindrical scaffolds. For this purpose, a mathematical analysis was performed to derive an equation for the desired porosity in terms of morphological parameters. The considered cylindrical geometries were then modeled, and the permeability coefficient was calculated from the velocity and pressure-drop values based on Darcy's law. To validate the accuracy of the present numerical solution, the obtained permeability coefficient was compared with published experimental data, and the model was observed to predict permeability with high accuracy. The effects of geometrical parameters, including porosity, scaffold pore structure, unit cell size, and scaffold length, as well as entrance mass flow rate, on the permeability of porous structures were then studied. Furthermore, a parametric study with scaling-law analysis of sample-length and mass-flow-rate effects on permeability showed good fit to the obtained data. It can be concluded that the sensitivity of permeability is more noticeable at higher porosities. The present approach can be used to characterize and optimize the scaffold microstructure in light of cell growth and transport requirements.
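The permeability recovery the abstract describes follows directly from Darcy's law, Q = k A ΔP / (μ L), rearranged for k. A minimal sketch with illustrative scaffold dimensions and flow values (not the paper's data):

```python
# Darcy's law: Q = k * A * dP / (mu * L), so permeability recovered from a
# simulated (or measured) flow is k = Q * mu * L / (A * dP).
# All numbers below are illustrative, not the paper's scaffold data.
import math

def permeability(q, mu, length, area, dp):
    """Darcy permeability k [m^2] from volumetric flow rate q [m^3/s],
    dynamic viscosity mu [Pa.s], sample length [m], cross-sectional
    area [m^2], and pressure drop dp [Pa]."""
    return q * mu * length / (area * dp)

# Cylindrical scaffold, 5 mm diameter, 10 mm long, water-like fluid.
area = math.pi * 0.0025 ** 2  # cross-sectional area, m^2
k = permeability(q=1e-8, mu=1e-3, length=0.01, area=area, dp=100.0)
print(f"{k:.3e}")  # ~5.09e-11 m^2
```

Because k scales linearly with Q at fixed ΔP, a simulation only needs the steady pressure drop and the integrated outlet flux to report a permeability.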

  5. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code, using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code; they consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
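As a concrete instance of the architecture-independent category the survey describes, constant folding evaluates constant subexpressions at compile time from the program's structure alone, with no knowledge of the target machine. A minimal sketch over a toy expression tree (illustrative only, not from the survey):

```python
# Constant folding, a machine- and architecture-independent optimization:
# any subtree whose operands are all constants is evaluated ahead of time.
# Toy AST: leaves are ints or variable-name strings; internal nodes are
# (op, left, right) tuples.

def fold(node):
    """Recursively fold constant subexpressions in a nested-tuple AST."""
    if not isinstance(node, tuple):
        return node  # leaf: constant or variable
    op, a, b = node
    a, b = fold(a), fold(b)
    if isinstance(a, int) and isinstance(b, int):
        return {"+": a + b, "-": a - b, "*": a * b}[op]  # evaluate now
    return (op, a, b)  # a variable remains, keep the node

# x * (2 + 3)  folds to  x * 5
expr = ("*", "x", ("+", 2, 3))
print(fold(expr))  # ('*', 'x', 5)
```

The same bottom-up traversal pattern underlies many flow-graph-based optimizations; folding is simply the case where the dependency analysis is trivial.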

  6. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit supervises the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object-detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.

  7. Wealth inequality: The physics basis

    NASA Astrophysics Data System (ADS)

    Bejan, A.; Errera, M. R.

    2017-03-01

    "Inequality" is a common observation about us, as members of society. In this article, we unify physics with economics by showing that the distribution of wealth is related proportionally to the movement of all the streams of a live society. The hierarchical distribution of wealth on the earth happens naturally. Hierarchy is unavoidable, with staying power, and difficult to efface. We illustrate this with two architectures, river basins and the movement of freight. The physical flow architecture that emerges is hierarchical on the surface of the earth and in everything that flows inside the live human bodies, the movement of humans and their belongings, and the engines that drive the movement. The nonuniform distribution of wealth becomes more accentuated as the economy becomes more developed, i.e., as its flow architecture becomes more complex for the purpose of covering smaller and smaller interstices of the overall (fixed) territory. It takes a relatively modest complexity for the nonuniformity in the distribution of wealth to be evident. This theory also predicts the Lorenz-type distribution of income inequality, which was adopted empirically for a century.
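The Lorenz-type distribution mentioned at the end of the abstract can be made concrete with a short computation: the Lorenz curve plots cumulative wealth share against cumulative population share, and the Gini coefficient is twice the area between that curve and the line of perfect equality. A sketch with illustrative data (not the paper's):

```python
# Lorenz curve and Gini coefficient for a wealth distribution.
# Perfect equality gives a diagonal Lorenz curve and Gini = 0; maximal
# concentration pushes Gini toward 1. Sample data is illustrative.

def lorenz_curve(wealth):
    """Points (population share, wealth share), poorest first."""
    w = sorted(wealth)
    total = sum(w)
    cum, points = 0.0, [(0.0, 0.0)]
    for i, x in enumerate(w, start=1):
        cum += x
        points.append((i / len(w), cum / total))
    return points

def gini(wealth):
    pts = lorenz_curve(wealth)
    # Trapezoidal area under the Lorenz curve; G = 1 - 2 * area.
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return 1 - 2 * area

print(round(gini([1, 1, 1, 1]), 3))    # perfectly equal -> 0.0
print(round(gini([0, 0, 0, 100]), 3))  # one holder owns everything -> 0.75
```

Plotting the Lorenz curve for progressively more hierarchical distributions is the standard way to visualize the trend the authors describe as economies develop.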

  8. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  9. A heterogeneous hierarchical architecture for real-time computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skroch, D.A.; Fornaro, R.J.

The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems, and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H{sup 2}ART) and system software for program loading and interprocessor communication.

  10. A Communication Architecture for an Advanced Extravehicular Mobile Unit

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This document describes the communication architecture for the Power, Avionics and Software (PAS) 1.0 subsystem for the Advanced Extravehicular Mobility Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS subsystem being developed at Glenn Research Center (GRC).

  11. Architectural Heritage: An Experiment in Montreal's Schools.

    ERIC Educational Resources Information Center

    Leveille, Chantal

    1982-01-01

    A museum program in Montreal encourages elementary and secondary school students to examine their surroundings and neighborhoods. Units focus on stained glass windows, houses, history of Montreal, the neighborhood, and architectural heritage. (KC)

  12. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants

    PubMed Central

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-01-01

Bone augmentation implants are porous to allow cellular growth, bone formation, and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnect sizes. We present a three-dimensional (3D) transient model of cellular growth, based on the Navier-Stokes equations, that simulates body-fluid flow and the stimulation of bone-precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additively manufactured (AM) titanium scaffold architectures. The results demonstrate a complex interaction of flow rate and strut architecture, with partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures at higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. PMID:24664988

  13. Operational Concepts for a Generic Space Exploration Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Vaden, Karl R.; Jones, Robert E.; Roberts, Anthony M.

    2015-01-01

    This document is one of three. It describes the Operational Concept (OpsCon) for a generic space exploration communication architecture. The purpose of this particular document is to identify communication flows and data types. Two other documents accompany this document, a security policy profile and a communication architecture document. The operational concepts should be read first followed by the security policy profile and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes: subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.

  14. Deciphering structural and temporal interplays during the architectural development of mango trees.

    PubMed

    Dambreville, Anaëlle; Lauri, Pierre-Éric; Trottier, Catherine; Guédon, Yann; Normand, Frédéric

    2013-05-01

    Plant architecture is commonly defined by the adjacency of organs within the structure and their properties. Few studies consider the effect of endogenous temporal factors, namely phenological factors, on the establishment of plant architecture. This study hypothesized that, in addition to the effect of environmental factors, the observed plant architecture results from both endogenous structural and temporal components, and their interplays. Mango tree, which is characterized by strong phenological asynchronisms within and between trees and by repeated vegetative and reproductive flushes during a growing cycle, was chosen as a plant model. During two consecutive growing cycles, this study described the vegetative and reproductive development of 20 trees subjected to the same environmental conditions. Four mango cultivars were considered to assess possible cultivar-specific patterns. Integrative vegetative and reproductive development models incorporating generalized linear models as components were built. These models described the occurrence, intensity, and timing of vegetative and reproductive development at the growth unit scale. This study showed significant interplays between structural and temporal components of plant architectural development at two temporal scales. Within a growing cycle, earliness of bud burst was highly and positively related to earliness of vegetative development and flowering. Between growing cycles, flowering growth units delayed vegetative development compared to growth units that did not flower. These interplays explained how vegetative and reproductive phenological asynchronisms within and between trees were generated and maintained. It is suggested that causation networks involving structural and temporal components may give rise to contrasting tree architectures.

  15. Genomics of local adaptation with gene flow.

    PubMed

    Tigano, Anna; Friesen, Vicki L

    2016-05-01

    Gene flow is a fundamental evolutionary force in adaptation that is especially important to understand as humans are rapidly changing both the natural environment and natural levels of gene flow. Theory proposes a multifaceted role for gene flow in adaptation, but it focuses mainly on the disruptive effect that gene flow has on adaptation when selection is not strong enough to prevent the loss of locally adapted alleles. The role of gene flow in adaptation is now better understood due to the recent development of both genomic models of adaptive evolution and genomic techniques, which both point to the importance of genetic architecture in the origin and maintenance of adaptation with gene flow. In this review, we discuss three main topics on the genomics of adaptation with gene flow. First, we investigate selection on migration and gene flow. Second, we discuss the three potential sources of adaptive variation in relation to the role of gene flow in the origin of adaptation. Third, we explain how local adaptation is maintained despite gene flow: we provide a synthesis of recent genomic models of adaptation, discuss the genomic mechanisms and review empirical studies on the genomics of adaptation with gene flow. Despite predictions on the disruptive effect of gene flow in adaptation, an increasing number of studies show that gene flow can promote adaptation, that local adaptations can be maintained despite high gene flow, and that genetic architecture plays a fundamental role in the origin and maintenance of local adaptation with gene flow. © 2016 John Wiley & Sons Ltd.

  16. TrafficGen Architecture Document

    DTIC Science & Technology

    2016-01-01

    sequence diagram … Fig. 5: TrafficGen traffic flows viewed in SDT3D … Scripts contain commands to have the network node listen on specific ports and flows describing the start time, stop time, and specific traffic … arranged vertically and time presented horizontally. Individual traffic flows are represented by horizontal bars indicating the start time, stop time …

  17. A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Hwa-Soo

    1991-01-01

    A large-grain mapping method for numerically oriented applications onto multiprocessor systems is presented. The method is based on a large-grain data flow representation of the input application and assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because this representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be expressed very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best matching' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
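    Step (3) of the strategy, priority allocation of computation blocks onto processors, can be sketched with a generic list-scheduling heuristic. This is a simplified stand-in under an assumed uniform communication cost, not the thesis's exact scheme:

```python
# Illustrative list scheduling of a large-grain data flow graph onto processors.
# A fixed communication cost is charged when a dependency crosses processors.
# Task priority here is simply topological order (an assumption).

def schedule(tasks, deps, costs, n_procs, comm_cost):
    """tasks: topologically ordered ids; deps: {task: [predecessors]};
    costs: {task: compute time}. Returns (assignment, finish_times)."""
    proc_free = [0.0] * n_procs      # earliest time each processor is idle
    assign, finish = {}, {}
    for t in tasks:
        best = None
        for p in range(n_procs):
            ready = proc_free[p]
            for d in deps.get(t, []):
                # data arrives later if the predecessor ran elsewhere
                arrive = finish[d] + (comm_cost if assign[d] != p else 0.0)
                ready = max(ready, arrive)
            end = ready + costs[t]
            if best is None or end < best[0]:
                best = (end, p)
        finish[t], assign[t] = best[0], best[1]
        proc_free[best[1]] = best[0]
    return assign, finish

tasks = ["a", "b", "c", "d"]
deps = {"c": ["a", "b"], "d": ["c"]}
costs = {"a": 2, "b": 3, "c": 1, "d": 2}
assign, finish = schedule(tasks, deps, costs, n_procs=2, comm_cost=1)
print(finish["d"])  # makespan of the mapped graph
```

The tradeoff the thesis targets is visible even here: placing a block near its predecessors avoids the communication charge, at the price of serializing on one processor.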

  18. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform

    PubMed Central

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015
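    The core computation, extracting motion from the activation time lag between nearby detectors, can be caricatured in a few lines. The numbers are illustrative; the actual system performs this with spiking analog hardware, not software:

```python
# Minimal sketch in the spirit of the Barlow-Levick scheme: an edge sweeping
# across two neighboring photoreceptors triggers spikes whose time difference
# encodes speed, and whose sign encodes direction. Values are illustrative.

def motion_from_spikes(t_left, t_right, pixel_pitch_um=10.0):
    """Return (direction, speed in um/s) from spike times of two neighbors."""
    dt = t_right - t_left
    if dt == 0:
        return ("ambiguous", float("inf"))
    direction = "rightward" if dt > 0 else "leftward"
    return (direction, pixel_pitch_um / abs(dt))

# An edge hits the left receptor at t = 0.010 s and the right one at t = 0.012 s:
direction, speed = motion_from_spikes(0.010, 0.012)
print(direction, speed)  # 10 um / 2 ms = 5000 um/s, moving rightward
```

Arrays of such pairwise detectors, each tuned to one axis and sign, yield the spiking direction/amplitude code the abstract describes.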

  20. Spacelab output processing system architectural study

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Two different system architectures are presented. The two architectures are derived from two different data flows within the Spacelab Output Processing System. The major differences between these system architectures are in the position of the decommutation function (the first architecture performs decommutation in the latter half of the system and the second performs it in the front end of the system). For examination, the system was divided into five stand-alone subsystems: Work Assembler, Mass Storage System, Output Processor, Peripheral Pool, and Resource Monitor. The work load of each subsystem was estimated independently of the specific devices to be used. The candidate devices were surveyed from a wide sampling of off-the-shelf devices. Analytical expressions were developed to quantify the projected workload in conjunction with typical devices which would adequately handle the subsystem tasks. All of the study efforts were then directed toward preparing performance and cost curves for each architecture subsystem.

  1. E-health and healthcare enterprise information system leveraging service-oriented architecture.

    PubMed

    Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Cheng, Po-Hsun; Lai, Feipei

    2012-04-01

    To present the successful experiences of an integrated, collaborative, distributed, large-scale enterprise healthcare information system over a wired and wireless infrastructure in National Taiwan University Hospital (NTUH). In order to smoothly and sequentially transfer from the complex relations among the old (legacy) systems to the new-generation enterprise healthcare information system, we adopted the multitier framework based on service-oriented architecture to integrate the heterogeneous systems as well as to interoperate among many other components and multiple databases. We also present mechanisms of a logical layer reusability approach and data (message) exchange flow via Health Level 7 (HL7) middleware, DICOM standard, and the Integrating the Healthcare Enterprise workflow. The architecture and protocols of the NTUH enterprise healthcare information system, especially in the Inpatient Information System (IIS), are discussed in detail. The NTUH Inpatient Healthcare Information System is designed and deployed on service-oriented architecture middleware frameworks. The mechanisms of integration as well as interoperability among the components and the multiple databases apply the HL7 standards for data exchanges, which are embedded in XML formats, and Microsoft .NET Web services to integrate heterogeneous platforms. The preliminary performance of the currently operating IIS is evaluated and analyzed to verify the efficiency and effectiveness of the designed architecture; it shows reliability and robustness in the highly demanding traffic environment of NTUH. The newly developed NTUH IIS provides an open and flexible environment not only to share medical information easily among other branch hospitals, but also to reduce the cost of maintenance. The HL7 message standard is widely adopted to cover all data exchanges in the system. All services are independent modules that enable the system to be deployed and configured to the highest degree of flexibility.
Furthermore, we can conclude that the multitier Inpatient Healthcare Information System has been designed successfully and in a collaborative manner, based on the index of performance evaluations, central processing unit, and memory utilizations.
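    Since all data exchanges in the system ride on HL7 messages, a minimal sketch of the HL7 v2 wire format may help. The message content below is invented for illustration; production systems like NTUH's use validated HL7 middleware rather than hand-rolled parsing:

```python
# Hedged sketch of HL7 v2's pipe-delimited wire format: segments are separated
# by carriage returns, fields by '|'. This ignores component/escape handling
# and the special role of MSH-1/MSH-2; it only illustrates the basic layout.

def parse_hl7(message: str):
    """Split an HL7 v2 message into {segment_id: [field lists]}."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = ("MSH|^~\\&|IIS|NTUH|LAB|NTUH|20120401120000||ADT^A01|MSG001|P|2.5\r"
       "PID|1||12345^^^NTUH||Doe^John||19700101|M")
parsed = parse_hl7(msg)
print(parsed["PID"][0][4])  # PID-5 (patient name): Doe^John
```

Real HL7 stacks additionally split fields into components on `^`, honor the escape character, and validate against the message-type grammar.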

  2. Solid Oxide Fuel Cell APU Feasibility Study for a Long Range Commercial Aircraft Using UTC ITAPS Approach. Volume 1; Aircraft Propulsion and Subsystems Integration Evaluation

    NASA Technical Reports Server (NTRS)

    Srinivasan, Hari; Yamanis, Jean; Welch, Rick; Tulyani, Sonia; Hardin, Larry

    2006-01-01

    The objective of this contract effort was to define the functionality and evaluate the propulsion and power system benefits derived from a Solid Oxide Fuel Cell (SOFC) based Auxiliary Power Unit (APU) for a future long range commercial aircraft, and to define the technology gaps to enable such a system. The study employed technologies commensurate with Entry into Service (EIS) in 2015. United Technologies Corporation (UTC) Integrated Total Aircraft Power System (ITAPS) methodologies were used to evaluate system concepts to a conceptual level of fidelity. The technology benefits were captured as reductions of the mission fuel burn and emissions. The baseline aircraft considered was the Boeing 777-200ER airframe with more electric subsystems, Ultra Efficient Engine Technology (UEET) engines, and an advanced APU with ceramics for increased efficiency. In addition to the baseline architecture, four architectures using an SOFC system to replace the conventional APU were investigated. The mission fuel burn savings for Architecture-A, which has minimal system integration, is 0.16 percent. Architecture-B and Architecture-C employ greater system integration and obtain fuel burn benefits of 0.44 and 0.70 percent, respectively. Architecture-D represents the highest level of integration and obtains a benefit of 0.77 percent.

  3. Field Investigations of the July 2015 Pyroclastic Density Current Deposits of Volcán de Colima, Mexico

    NASA Astrophysics Data System (ADS)

    Atlas, Z. D.; Macorps, E.; Charbonnier, S. J.; Varley, N. R.

    2016-12-01

    Small-volume pyroclastic density currents (PDCs) occur relatively frequently and pose severe threats to surrounding populations and infrastructures at active explosive volcanoes. They are characterized by short duration and complex multiphase flow dynamics due to time and space variability in their properties, which include, amongst others, particle concentration, granulometry, componentry, bulk rheology and velocity. Field investigations of the deposits emplaced by small-volume concentrated PDCs aim to improve our understanding of the transport and depositional processes of these flows: time and space variations in flow dynamics within a PDC moving downslope will reflect on the distribution, grainsize and component characteristics of its deposits. Our study focuses on the recent events of July 10th and 11th, 2015 at Volcán de Colima (Mexico) where the collapse of the recent lava dome complex and a portion of the southern crater rim led to the emplacement of successive pulses of small-volume concentrated PDCs on the southern flank, along the Montegrande and San Antonio ravines. A 3-dimensional field analysis of the PDCs' deposit architecture, total grain size distribution and component properties together with a geomorphic analysis of the affected ravines provide new insights on the lateral and vertical variations of flow dynamics for some of these small-volume concentrated PDCs. Preliminary results reveal three stratigraphic units with massive block, lapilli, ash facies within the valley confined and concentrated overbank deposits with increasing content in fines with distance from the summit, suggesting an increase in fragmentation processes within the PDCs. The middle unit is characterized by a finer grainsize, a higher accidental lithic content and a lower free crystal content.
Moreover, direct correlations are found between rapid changes in channel morphology and generation of overbank (unconfined) flows that escaped valley confines, which could provide the basis for defining hazard zonations of key areas at risk from future eruptions at Colima.

  4. Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures

    PubMed Central

    Bryson, David M.; Ofria, Charles

    2013-01-01

    We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that showed significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution.
Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. PMID:24376669
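    As a loose illustration of the kind of virtual CPU under study, a linear genome can be executed instruction by instruction. The tiny instruction set below is invented for this sketch; the Avida-style architectures evaluated in the paper are far richer (labels, search instructions, I/O):

```python
# Minimal linear-genome virtual CPU sketch: a list of (op, arg) instructions
# executed over two registers, with an unconditional jump and a step budget so
# malformed genomes cannot loop forever. Instruction set is hypothetical.

def run(genome, steps=100):
    """Execute up to `steps` instructions; return final register state."""
    regs = {"AX": 0, "BX": 0}
    ip = 0
    for _ in range(steps):
        if ip >= len(genome):
            break
        op, arg = genome[ip]
        if op == "inc":          # increment the named register
            regs[arg] += 1
        elif op == "add":        # AX <- AX + BX
            regs["AX"] += regs["BX"]
        elif op == "jump":       # jump to absolute genome position
            ip = arg
            continue
        ip += 1
    return regs

genome = [("inc", "BX"), ("inc", "BX"), ("add", None), ("add", None)]
print(run(genome)["AX"])  # 2 + 2 = 4
```

Design questions like those in the abstract, how many registers, how operands are specified, how flow control is expressed, are precisely the degrees of freedom in such an interpreter.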

  5. Considerations on the Use of Custom Accelerators for Big Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Minutoli, Marco

    Accelerators, including Graphic Processing Units (GPUs) for general purpose computation and many-core designs with wide vector units (e.g., Intel Phi), have become a common component of many high performance clusters. The appearance of more stable and reliable tools that can automatically convert code written in high-level specifications with annotations (such as C or C++) to hardware description languages (High-Level Synthesis - HLS) is also setting the stage for a broader use of reconfigurable devices (e.g., Field Programmable Gate Arrays - FPGAs) in high performance systems for the implementation of custom accelerators, helped by the fact that new processors include advanced cache-coherent interconnects for these components. In this chapter, we briefly survey the status of the use of accelerators in high performance systems targeted at big data analytics applications. We argue that, although the progress in the use of accelerators for this class of applications has been significant, differently from scientific simulations there still are gaps to close. This is particularly true for the "irregular" behaviors exhibited by no-SQL, graph databases. We focus our attention on the limits of HLS tools for data analytics and graph methods, and discuss a new architectural template that better fits the requirements of this class of applications. We validate the new architectural template by modifying the Graph Engine for Multithreaded Systems (GEMS) framework to support accelerators generated with such a methodology, and testing with queries coming from the Lehigh University Benchmark (LUBM). The architectural template enables better support for the task and memory level parallelism present in graph methods through a new control model and an enhanced memory interface. We show that our solution allows generating parallel accelerators, providing speedups with respect to conventional HLS flows. We finally draw conclusions and present a perspective on the use of reconfigurable devices and Design Automation tools for data analytics.

  6. Geometry and architecture of faults in a syn-rift normal fault array: The Nukhul half-graben, Suez rift, Egypt

    NASA Astrophysics Data System (ADS)

    Wilson, Paul; Gawthorpe, Rob L.; Hodgetts, David; Rarity, Franklin; Sharp, Ian R.

    2009-08-01

    The geometry and architecture of a well exposed syn-rift normal fault array in the Suez rift is examined. At pre-rift level, the Nukhul fault consists of a single zone of intense deformation up to 10 m wide, with a significant monocline in the hanging wall and much more limited folding in the footwall. At syn-rift level, the fault zone is characterised by a single discrete fault zone less than 2 m wide, with damage zone faults up to approximately 200 m into the hanging wall, and with no significant monocline developed. The evolution of the fault from a buried structure with associated fault-propagation folding, to a surface-breaking structure with associated surface faulting, has led to enhanced bedding-parallel slip at lower levels that is absent at higher levels. Strain is enhanced at breached relay ramps and bends inherited from pre-existing structures that were reactivated during rifting. Damage zone faults observed within the pre-rift show ramp-flat geometries associated with contrast in competency of the layers cut and commonly contain zones of scaly shale or clay smear. Damage zone faults within the syn-rift are commonly very straight, and may be discrete fault planes with no visible fault rock at the scale of observation, or contain relatively thin and simple zones of scaly shale or gouge. The geometric and architectural evolution of the fault array is interpreted to be the result of (i) the evolution from distributed trishear deformation during upward propagation of buried fault tips to surface faulting after faults breach the surface; (ii) differences in deformation response between lithified pre-rift units that display high competence contrasts during deformation, and unlithified syn-rift units that display low competence contrasts during deformation; and (iii) the history of segmentation, growth and linkage of the faults that make up the fault array. This has important implications for fluid flow in fault zones.

  7. Branched hybrid vessel: in vitro loaded hydrodynamic forces influence the tissue architecture.

    PubMed

    Kobashi, T; Matsuda, T

    2000-01-01

    This study was conducted to investigate how a continuous load of hydrodynamic stresses influences the tissue architecture of a branched hybrid vessel in vitro. Tubular hybrid medial tissue of small (3 mm) and large (6 mm) diameters, prepared by thermal gelation of a cold mixed solution of bovine smooth muscle cells (SMCs) and type I collagen in glass molds, was assembled into a branched hybrid medial tissue by end-to-side anastomosis. After a 2-week culture period, bovine endothelial cells (ECs) were seeded onto the luminal surface. The branched hybrid vessel was connected to a mock circulatory loop system and tested for two modes of flow: 1) low flow rate for 24 h, 2) high flow rate for 24 or 72 h. After exposure to a low flow rate for 24 h, cobblestone appearance of the ECs was dominant. After exposure to a high flow rate, EC alignment in the direction of flow was observed in the branch region, except at the region of predicted flow separation where ECs retained their polygonal configuration. Elongation of SMCs with no preferential orientation was observed in the case of vessels exposed to a high flow rate for 24 h, and circumferential orientation was prominent in those exposed to a high flow rate for 72 h. On the other hand, collagen fibrils exhibited no preferential orientation in either case. After injection of Evans blue-albumin conjugate into the circulating medium, the luminal surface of the hybrid vessel exposed to a high flow rate for 24 h was examined by confocal laser scanning microscopy. The fluorescence intensity was low at the high shear zone in the branch region, while at the flow separation region it was very high, indicating the increased albumin permeability at the latter region. These findings reflect region-specific tissue architecture in the branch region, in response to the local flow pattern, and may provide an in vitro atherosclerosis model as well as a fundamental basis for the development of functional branched hybrid grafts.

  8. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  9. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  10. Optical multicast system for data center networks.

    PubMed

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast that is the electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overheads. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
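    The control plane's resource allocation step is described only at a high level above. A deliberately simplified stand-in, not the paper's algorithm, that assigns a limited splitter pool to the largest pending multicast flows and sends the rest over the electronic network might look like:

```python
# Illustrative splitter allocation for a hybrid EPS/OCS data center network:
# the few available passive optical splitters go to the largest multicast
# flows; smaller flows fall back to IP multicast on the packet-switched side.
# The greedy size-based policy is an assumption for this sketch.

def allocate_splitters(flows, n_splitters):
    """flows: {flow_id: size_bytes}. Returns (optical_ids, electronic_ids)."""
    by_size = sorted(flows, key=flows.get, reverse=True)
    optical = by_size[:n_splitters]
    electronic = by_size[n_splitters:]
    return optical, electronic

flows = {"f1": 400e6, "f2": 50e6, "f3": 900e6, "f4": 10e6}
optical, electronic = allocate_splitters(flows, n_splitters=2)
print(optical)  # the two largest flows ride the optical multicast trees
```

This captures the architectural intuition from the measurements quoted above: optical multicast pays off most for bulky flows, where its throughput and energy advantages dominate the circuit-setup overhead.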

  11. From a collage of microplates to stable continental crust - an example from Precambrian Europe

    NASA Astrophysics Data System (ADS)

    Korja, Annakaisa

    2013-04-01

    Svecofennian orogen (2.0-1.7 Ga) comprises the oldest undispersed orogenic belt on Baltica and the Eurasian plate. The Svecofennian orogenic belt evolved from a series of short-lived terrane accretions around Baltica's Archean nucleus during the formation of the Precambrian Nuna supercontinent. Geological and geophysical datasets indicate W-SW growth of Baltica with NE-ward dipping subduction zones. The data suggest a long-lived retreating subduction system in the southwestern parts, whereas in the northern and central parts the northeasterly transport of continental fragments or microplates towards the continental nucleus is also documented. The geotectonic environment resembles that of the early stages of the Alpine-Himalayan or Indonesian orogenic system, in which dispersed continental fragments, arcs and microplates have been attached to the Eurasian plate margin. Thus the Svecofennian orogeny can be viewed as a proxy for the initial stages of an internal orogenic system. The Svecofennian orogeny is a Paleoproterozoic analogue of an evolved orogenic system where terrane accretion is followed by lateral spreading or collapse induced by change in the plate architecture. The exposed parts are composed of granitoid intrusions as well as highly deformed supracrustal units. Supracrustal rocks have been metamorphosed in LP-HT conditions in either paleo-lower-upper crust or paleo-upper-middle crust. Large-scale seismic reflection profiles (BABEL and FIRE) across Baltica image the crust as a collage of terranes, suggesting that the bedrock has been formed and thickened in sequential accretions. The profiles also image threefold layering of the thickened crust (>55 km) transecting old terrane boundaries, suggesting that the over-thickened bedrock structures have been rearranged in post-collisional spreading and/or collapse processes.
The middle crust displays typical large scale flow structures: herringbone and anticlinal ramps, rooted onto large scale listric surfaces also suggestive of spreading. Close to the original ocean-continent plate boundary, in the core of the Svecofennian orogen, the thickened accretionary crust carries pervasive stretching lineations at surface and seismic vp-velocity anisotropy in the crust. The direction of spreading and crustal flow seems to be diverted by shapes of the pre-existing boundaries. It is concluded that lateral spreading and midcrustal flow not only rearrange the bedrock architecture but also stabilize the young accreted continental crust in emerging internal orogenic systems. Pre-existing microplate/terrane boundaries will affect the final architecture of the orogenic belt.

  12. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization, and the variable node time and conditional node models for the enhanced ATAMM model of a real-time data flow architecture, are presented in this research. The Algorithm To Architecture Mapping Model (ATAMM) is a Petri-net-based graph-theoretic model developed at Old Dominion University, capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirements when an algorithm executes under node time variation, is useful for extending the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of a resource-limited mode and subsequently preventing it. Graphs with conditional nodes are shown to reduce to equivalent graphs with time-varying nodes, which may then be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs to illustrate the applicability of the analytical theories.
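    The resource-envelope idea described above can be sketched with a simple event sweep. This is an illustrative reconstruction, not code from the ATAMM papers; the function name, the event-sweep method, and the sample intervals are all invented for the example.

```python
# Hypothetical sketch: given node firing intervals (start, duration) for one
# graph iteration, count how many nodes execute concurrently at each instant.
# The peak of this envelope is the maximum number of resources required.

def resource_envelope(intervals):
    """intervals: list of (start, duration); returns list of (time, active)."""
    events = []
    for start, dur in intervals:
        events.append((start, +1))        # a node begins executing
        events.append((start + dur, -1))  # a node finishes
    events.sort()
    envelope, active = [], 0
    for t, delta in events:
        active += delta
        envelope.append((t, active))
    return envelope

# Three overlapping node executions in one iteration period:
env = resource_envelope([(0, 4), (2, 3), (3, 5)])
peak = max(active for _, active in env)
print(peak)  # 3: at most three nodes are simultaneously busy
```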

  13. Architecture Controls on Reservoir Performance of the Zubair Formation, Rumaila and West Qurna Oilfields in Southern Iraq

    NASA Astrophysics Data System (ADS)

    Al-Ziayyir, Haitham; Hodgetts, David

    2015-04-01

    The main reservoir in the Rumaila/West Qurna oilfields is the Zubair Formation, of Hauterivian and Barremian age. This siliciclastic formation extends over central and southern Iraq. This study attempts to improve the understanding of the architectural elements and their control on fluid flow paths within the Zubair Formation. A significant source of uncertainty in the Zubair Formation is the control on hydrodynamic pressure distribution; the reasons for pressure variation in the Zubair are not well understood. This work aims to reduce this uncertainty by providing more detailed knowledge of the reservoir architecture, the distribution of barriers and baffles, and reservoir compartmentalization. To characterize the stratigraphic architecture of the Zubair Formation, high-resolution reservoir models that incorporate dynamic and static data were built. Facies modelling is accomplished by means of stochastic modelling techniques. The work is based on a large data set collected from the Rumaila oilfields. These data, comprising conventional logs of varying vintages, NMR logs, cores from six wells, and pressure data, were used to perform geological and petrophysical analyses. Flow simulation studies have also been applied to examine the impact of architecture on recovery. Understanding of the geology and reservoir performance can be greatly improved by such an efficient, quick and viable integrated analysis, interpretation, and modelling workflow.

  14. Progress in a novel architecture for high performance processing

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    High performance processing (HPP) is an innovative architecture that targets high-performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications such as supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET Compact (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
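    The two figures quoted above jointly imply the chip's peak power draw; a quick back-of-the-envelope check (assuming both numbers are peak values, which the abstract does not state explicitly):

```python
# Peak power implied by the quoted figures: 4.3 TFLOPS at 89.5 GFLOPS/W
# means roughly 4300 / 89.5 ~ 48 W at peak.
peak_tflops = 4.3
efficiency_gflops_per_watt = 89.5
power_watts = peak_tflops * 1000 / efficiency_gflops_per_watt
print(round(power_watts, 1))  # 48.0
```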

  15. 3D architecture of cyclic-step and antidune deposits in glacigenic subaqueous fan and delta settings: Integrating outcrop and ground-penetrating radar data

    NASA Astrophysics Data System (ADS)

    Lang, Jörg; Sievers, Julian; Loewer, Markus; Igel, Jan; Winsemann, Jutta

    2017-12-01

    Bedforms related to supercritical flows are increasingly recognised as important constituents of many depositional environments, but outcrop studies are commonly hampered by long bedform wavelengths and complex three-dimensional geometries. We combined outcrop-based facies analysis with ground-penetrating radar (GPR) surveys to analyse the 3D facies architecture of subaqueous ice-contact fan and glacifluvial delta deposits. The studied sedimentary systems were deposited at the margins of the Middle Pleistocene Scandinavian ice sheets in Northern Germany. Glacifluvial Gilbert-type deltas are characterised by steeply dipping foreset beds, comprising cyclic-step deposits, which alternate with antidune deposits. Deposits of cyclic steps consist of lenticular scours infilled by backset cross-stratified pebbly sand and gravel. The GPR sections show that the scour fills form trains along the delta foresets, which can locally be traced for up to 15 m. Perpendicular and oblique to palaeoflow direction, these deposits appear as troughs with concentric or low-angle cross-stratified infills. Downflow transitions from scour fills into sheet-like low-angle cross-stratified or sinusoidally stratified pebbly sand, deposited by antidunes, are common. Cyclic steps and antidunes were deposited by sustained and surge-type supercritical density flows, which were related to hyperpycnal flows, triggered by major meltwater discharge or slope-failure events. Subaqueous ice-contact fan deposits include deposits of progradational scour fills, isolated hydraulic jumps, antidunes and (humpback) dunes. The gravel-rich fan succession consists of vertical stacks of laterally amalgamated pseudo-sheets, indicating deposition by pulses of waning supercritical flows under high aggradation rates. 
The GPR sections reveal the large-scale architecture of the sand-rich fan succession, which is characterised by lobe elements with basal erosional surfaces associated with scours filled with backsets related to hydraulic jumps, passing upwards and downflow into deposits of antidunes and (humpback) dunes. The recurrent facies architecture of the lobe elements and their prograding and retrograding stacking pattern are interpreted as related to autogenic flow morphodynamics.

  16. Biologically inspired dynamic material systems.

    PubMed

    Studart, André R

    2015-03-09

    Numerous examples of material systems that dynamically interact with and adapt to the surrounding environment are found in nature, from hair-based mechanoreceptors in animals to self-shaping seed dispersal units in plants to remodeling bone in vertebrates. Inspired by such fascinating biological structures, a wide range of synthetic material systems have been created to replicate the design concepts of dynamic natural architectures. Examples of biological structures and their man-made counterparts are herein revisited to illustrate how dynamic and adaptive responses emerge from the intimate microscale combination of building blocks with intrinsic nanoscale properties. By using top-down photolithographic methods and bottom-up assembly approaches, biologically inspired dynamic material systems have been created 1) to sense liquid flow with hair-inspired microelectromechanical systems, 2) to autonomously change shape by utilizing plantlike heterogeneous architectures, 3) to homeostatically influence the surrounding environment through self-regulating adaptive surfaces, and 4) to spatially concentrate chemical species by using synthetic microcompartments. The ever-increasing complexity and remarkable functionalities of such synthetic systems offer an encouraging perspective on the rich set of dynamic and adaptive properties that can potentially be implemented in future man-made material systems. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  18. Integrating the Web and continuous media through distributed objects

    NASA Astrophysics Data System (ADS)

    Labajo, Saul P.; Garcia, Narciso N.

    1998-09-01

    The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows on the Internet, and in associated applications such as multimedia on demand, is also growing. Integrating both kinds of systems should allow building true hypermedia systems in which any media object can be linked from any other, taking into account temporal and spatial synchronization. One way to achieve this integration is to use the CORBA architecture, a standard for open distributed systems; there are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for the distribution of data flows with timing restrictions. To integrate it with the Web, we use, on one side, Java applets that can access the CORBA architecture and are embedded in HTML pages; on the other side, we also benefit from the efforts to integrate CORBA and the Web.

  19. The genetic architecture of local adaptation and reproductive isolation in sympatry within the Mimulus guttatus species complex.

    PubMed

    Ferris, Kathleen G; Barnett, Laryssa L; Blackman, Benjamin K; Willis, John H

    2017-01-01

    The genetic architecture of local adaptation has been of central interest to evolutionary biologists since the modern synthesis. In addition to classic theory on the effect size of adaptive mutations by Fisher, Kimura and Orr, recent theory addresses the genetic architecture of local adaptation in the face of ongoing gene flow. This theory predicts that with substantial gene flow between populations local adaptation should proceed primarily through mutations of large effect or tightly linked clusters of smaller effect loci. In this study, we investigate the genetic architecture of divergence in flowering time, mating system-related traits, and leaf shape between Mimulus laciniatus and a sympatric population of its close relative M. guttatus. These three traits are probably involved in M. laciniatus' adaptation to a dry, exposed granite outcrop environment. Flowering time and mating system differences are also reproductive isolating barriers making them 'magic traits'. Phenotypic hybrids in this population provide evidence of recent gene flow. Using next-generation sequencing, we generate dense SNP markers across the genome and map quantitative trait loci (QTLs) involved in flowering time, flower size and leaf shape. We find that interspecific divergence in all three traits is due to few QTL of large effect including a highly pleiotropic QTL on chromosome 8. This QTL region contains the pleiotropic candidate gene TCP4 and is involved in ecologically important phenotypes in other Mimulus species. Our results are consistent with theory, indicating that local adaptation and reproductive isolation with gene flow should be due to few loci with large and pleiotropic effects. © 2016 John Wiley & Sons Ltd.

  20. Systematic design of 3D auxetic lattice materials with programmable Poisson's ratio for finite strains

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen

    2018-05-01

    This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [ - 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
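    The secant Poisson's ratio defined above is simply the negative ratio of the lateral to the longitudinal engineering strain. A minimal sketch, with illustrative strain values chosen to land on the most negative ratio in the paper's stated design interval (the values themselves are not from the paper):

```python
# Secant Poisson's ratio: negative ratio of lateral to longitudinal
# engineering strain, evaluated over a finite strain interval.
def secant_poissons_ratio(eps_lateral, eps_longitudinal):
    return -eps_lateral / eps_longitudinal

# An auxetic cell stretched 20% longitudinally that also widens 15.6%
# laterally has a secant Poisson's ratio of -0.78, the lower bound of the
# paper's design interval [-0.78, 0.00].
nu = secant_poissons_ratio(0.156, 0.20)
print(round(nu, 2))  # -0.78
```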

  1. The Role of Flow Experience and CAD Tools in Facilitating Creative Behaviours for Architecture Design Students

    ERIC Educational Resources Information Center

    Dawoud, Husameddin M.; Al-Samarraie, Hosam; Zaqout, Fahed

    2015-01-01

    This study examined the role of flow experience in intellectual activity with an emphasis on the relationship between flow experience and creative behaviour in design using CAD. The study used confluence and psychometric approaches because of their unique abilities to depict a clear image of creative behaviour. A cross-sectional study…

  2. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare the resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points, or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
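    The grid-size and speedup figures quoted above are internally consistent, as a quick check shows (the 6.6x figure is an inference from the two quoted speedups, not a number the abstract reports directly):

```python
# Reproducing the grid arithmetic: 13 blocks of 1033x1033 points each,
# and the relation between the 8-GPU-vs-8-CPU speedup (6.0) and the
# 8-GPU-vs-1-CPU speedup (39.5).
points_per_block = 1033 * 1033
total_points = 13 * points_per_block
print(points_per_block)      # 1067089, about 1.07 million per block
print(total_points)          # 13872157, about 13.87 million in total
print(round(39.5 / 6.0, 1))  # 6.6: implied 8-CPU vs 1-CPU parallel speedup
```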

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The regional suitability of underground construction as a climate control technique is discussed with reference to (1) a bioclimatic analysis of long-term weather data for 29 locations in the United States to determine appropriate above ground climate control techniques, (2) a data base of synthesized ground temperatures for the coterminous United States, and (3) monthly dew point ground temperature comparisons for identifying the relative likelihood of condensation from one region to another. It is concluded that the suitability of earth tempering as a practice and of specific earth-sheltered design stereotypes varies geographically; while the subsurface almost always provides a thermal advantage on its own terms when compared to above ground climatic data, it can, nonetheless, compromise the effectiveness of other, regionally more important climate control techniques. Also contained in the report are reviews of above and below ground climate mapping schemes related to human comfort and architectural design, and a detailed description of a theoretical model of ground temperature, heat flow, and heat storage in the ground. Strategies of passive climate control are presented in a discussion of the building bioclimatic analysis procedure, which has been applied in a computer analysis of 30 years of weather data for each of 29 locations in the United States.

  4. Proposed hardware architectures of particle filter for object tracking

    NASA Astrophysics Data System (ADS)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, in which particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function; this decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture, targeting a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource-reduction and speed-up advantages of our architectures.
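    The weight-computation simplification described above can be illustrated in software: a Gaussian-likelihood SIRF needs exp(-x) for each particle weight, and a piecewise-linear lookup approximates it cheaply in hardware. This is a hedged sketch of the general idea only; the breakpoint scheme and segment count below are invented, not taken from the paper's architecture.

```python
import math

def exp_neg_pwl(x, segments=8, x_max=4.0):
    """Piecewise-linear approximation of exp(-x) for x >= 0,
    using equally spaced breakpoints on [0, x_max]."""
    if x >= x_max:
        return math.exp(-x_max)  # clamp the tail to a constant
    step = x_max / segments
    i = int(x / step)
    x0, x1 = i * step, (i + 1) * step
    y0, y1 = math.exp(-x0), math.exp(-x1)  # precomputed in hardware
    return y0 + (y1 - y0) * (x - x0) / step  # linear interpolation

# Compare against the exact exponential at a few sample points:
for x in (0.3, 1.0, 2.5):
    print(round(exp_neg_pwl(x), 3), round(math.exp(-x), 3))
```

In a hardware realization the breakpoint values would sit in a small lookup table, replacing the exponential with one multiply and one add per weight.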

  5. Explicit parametric solutions of lattice structures with proper generalized decomposition (PGD) - Applications to the design of 3D-printed architectured materials

    NASA Astrophysics Data System (ADS)

    Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro

    2018-01-01

    Architectured materials (or metamaterials) are constituted by a unit cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit cell are selected in order to produce the desired bulk characteristics. This is especially pertinent given the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition (PGD) provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem becomes a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.

  6. SynchroPhasor Measurements: System Architecture and Performance Evaluation in Supporting Wide-Area Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Dagle, Jeffery E.

    2008-07-31

    The infrastructure of phasor measurements has evolved over the last two decades from isolated measurement units to networked measurement systems with footprints beyond individual utility companies. This is, to a great extent, a bottom-up, self-evolving process, except for some local systems built by design. Given that the number of phasor measurement units (PMUs) in the system is small (currently 70 each in the western and eastern interconnections), the current phasor network architecture works adequately. However, the architecture will become a bottleneck when large numbers of PMUs are installed (e.g. >1000-10000). The need for phasor architecture design has yet to be addressed. This paper reviews the current phasor networks and investigates future architectures, as related to the efforts undertaken by the North America SynchroPhasor Initiative (NASPI). It then presents staged system tests to evaluate the performance of phasor networks, which is a common practice in the Western Electricity Coordinating Council (WECC) system. This is followed by field measurement evaluation and the implications of phasor quality issues for phasor applications.

  7. An AMS study of different silicic units from the southern Paraná-Etendeka Magmatic Province in Brazil: Implications for the identification of flow directions and local sources

    NASA Astrophysics Data System (ADS)

    Guimarães, L. F.; Raposo, M. I. B.; Janasi, V. A.; Cañón-Tapia, E.; Polo, L. A.

    2018-04-01

    In the Southern portion of the Paraná-Etendeka Magmatic Province in Brazil, extensive silicic (dacite-rhyolite) deposits occur at the top of a sequence of low-Ti pahoehoe to rubbly basalts. The internal architecture of the silicic deposits and their eruptive style, as well as the location of their sources are still unsatisfactorily known. In an attempt to provide independent evidence for flow directions in deposits previously characterized as effusive, and test the hypothesis of local sources, we carried out anisotropy of magnetic susceptibility (AMS) studies on the two main silicic units (Caxias do Sul dacites and Santa Maria Rhyolites) with the best exposures in an area previously mapped in detail. Magnetic anisotropies were determined on oriented cylindrical specimens from a total of 28 sites. Rock magnetism properties indicate that "pseudo-single-domain" magnetite carries the fabrics and the remanence. Magnetic fabrics were determined by applying anisotropy of low-field magnetic susceptibility (AMS) and anisotropy of anhysteretic remanent magnetization (AARM). Both AMS and AARM tensors are coaxial, indicating that the AMS fabric is not affected by the effect of magnetite single-domain grains. Magnetic data from several dacitic coulées (Caxias do Sul unit) indicate flows from SE to NW. The location and spatial distribution of these lavas support the hypothesis of local sources, aligned along a NE-SW trend. These data are in agreement with the alignments of structures (dome-shaped hills) observed in field work and DEM images. On the other hand, magnetic data obtained in Santa Maria rhyolites indicate that flow directions in two different areas are distinct (towards NW/NE and W), suggesting that they derived from different emission centers. 
Thus, regarding the silicic volcanism in the studied region, our data do not support the model that classifies the entire silicic volcanism of the province as extensive rheomorphic pyroclastic deposits released from a central conduit. Instead, we propose the occurrence of local volcanic events, implying the existence of different sources, possibly characterized by local emission centers.

  8. Progress in Unsteady Turbopump Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Chan, William; Kwak, Dochan; Williams, Robert

    2002-01-01

    This viewgraph presentation discusses unsteady flow simulations for a turbopump intended for a reusable launch vehicle (RLV). The simulation process makes use of computational grids and parallel processing. The architecture of the parallel computers used is discussed, as is the scripting of turbopump simulations.

  9. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    PubMed

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.

  10. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  11. Utilizing Expert Knowledge in Estimating Future STS Costs

    NASA Technical Reports Server (NTRS)

    Fortner, David B.; Ruiz-Torres, Alex J.

    2004-01-01

    A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.
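    The bottom-up ABC idea described above reduces to summing, over process-flow activities, the product of an activity driver count and a cost rate per driver unit. A minimal sketch; the activity names, driver counts, and rates below are invented for illustration and are not from the described prototype.

```python
# Bottom-up activity-based cost: total = sum over activities of
# (driver units) x (cost per driver unit). All figures are illustrative.
activities = {
    # activity: (driver_units, cost_per_unit_usd)
    "vehicle integration": (120, 5_000),
    "propellant loading":  (40, 2_500),
    "pad operations":      (60, 4_000),
}
total_cost = sum(units * rate for units, rate in activities.values())
print(total_cost)  # 940000
```

Expert judgment enters by supplying the driver counts and rates for activities that have no historical analogue.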

  12. A reference architecture for the component factory

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni

    1992-01-01

    Software reuse can be achieved through an organization that focuses on the utilization of life-cycle products from previous developments. The component factory is both an example of the more general concepts of the experience and domain factory and an organizational unit worth considering independently. The critical features of such an organization are flexibility and continuous improvement. In order to achieve these features, we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation- and organization-independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.

  13. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants.

    PubMed

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-11-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier-Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. © 2014 Wiley Periodicals, Inc.

  14. A Modular Approach to Arithmetic and Logic Unit Design on a Reconfigurable Hardware Platform for Educational Purpose

    NASA Astrophysics Data System (ADS)

    Oztekin, Halit; Temurtas, Feyzullah; Gulbag, Ali

    The Arithmetic and Logic Unit (ALU) is one of the important topics in Computer Architecture and Organization courses in Computer and Electrical Engineering departments. Existing ALU designs used as educational tools are non-modular in nature. As programmable logic technology has developed rapidly, it has become feasible to implement ALU designs based on Field Programmable Gate Arrays (FPGAs) in this course. In this paper, we adopt a modular approach to FPGA-based ALU design. All modules in the ALU design are realized as schematics on Altera's Cyclone II development board. Under this model, the ALU is divided into four distinct modules: an arithmetic unit (excluding multiplication and division), a logic unit, a multiplication unit and a division unit. Because of its modular nature, the approach lets users design an ALU of any size. The approach was then applied to the microcomputer architecture BZK.SAU.FPGA10.0, replacing its existing ALU.
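
    The modular decomposition described above can be illustrated in software. The sketch below is a Python analogue, not the authors' schematic FPGA design; the opcode names and 8-bit width are invented for illustration.

```python
# Each function stands in for one hardware module; a thin dispatcher
# plays the role of the ALU's operation decoder.
WIDTH = 8
MASK = (1 << WIDTH) - 1  # wrap results to the data-path width

def arithmetic_unit(op, a, b):
    """Add/subtract module (multiplication and division are separate)."""
    return (a + b) & MASK if op == "ADD" else (a - b) & MASK

def logic_unit(op, a, b):
    """Bitwise logic module."""
    return {"AND": a & b, "OR": a | b, "XOR": a ^ b, "NOT": ~a & MASK}[op]

def multiply_unit(a, b):
    return (a * b) & MASK

def divide_unit(a, b):
    return (a // b) & MASK if b else 0  # hardware would raise a flag

def alu(op, a, b=0):
    if op in ("ADD", "SUB"):
        return arithmetic_unit(op, a, b)
    if op in ("AND", "OR", "XOR", "NOT"):
        return logic_unit(op, a, b)
    if op == "MUL":
        return multiply_unit(a, b)
    if op == "DIV":
        return divide_unit(a, b)
    raise ValueError("unknown opcode: " + op)
```

    Widening the ALU is a matter of changing WIDTH, which mirrors the paper's claim that the modular structure lets users build an ALU of any size.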

  15. Functional Performance of an Enabling Atmosphere Revitalization Subsystem Architecture for Deep Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Abney, Morgan B.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Newton, Robert L.; Parrish, Keith J.; Roman, Monsi C.; Takada, Kevin C.; Miller, Lee A.; hide

    2013-01-01

    A subsystem architecture derived from the International Space Station's (ISS) Atmosphere Revitalization Subsystem (ARS) has been functionally demonstrated. This ISS-derived architecture features re-arranged unit operations for trace contaminant control and carbon dioxide removal functions, a methane purification component as a precursor to enhance resource recovery over ISS capability, operational modifications to a water electrolysis-based oxygen generation assembly, and an alternative major atmospheric constituent monitoring concept. Results from this functional demonstration are summarized and compared to the performance observed during ground-based testing conducted on an ISS-like subsystem architecture. Considerations for further subsystem architecture and process technology development are discussed.

  16. Mechanics of blood supply to the heart: wave reflection effects in a right coronary artery.

    PubMed Central

    Zamir, M

    1998-01-01

    Mechanics of blood flow in the coronary circulation have in the past been based largely on models in which the detailed architecture of the coronary network is not included because of lack of data: properties of individual vessels do not appear individually in the model but are represented collectively by the elements of a single electric circuit. Recent data from the human heart make it possible, for the first time, to examine the dynamics of flow in the coronary network based on detailed, measured vascular architecture. In particular, admittance values along the full course of the right coronary artery are computed based on actual lengths and diameters of the many thousands of branches which make up the distribution system of this vessel. The results indicate that effects of wave reflections on this flow are far more significant than those generally suspected to occur in coronary blood flow and that they are actually the reverse of the well known wave reflection effects in the aorta. PMID:9523440
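
    The admittance analysis summarized above rests on standard 1-D transmission-line relations. As a sketch (illustrative values, not the paper's measured coronary data), the following computes the reflection coefficient at a bifurcation and the input admittance of a uniform lossless vessel segment.

```python
import cmath
import math

def reflection_coefficient(y_parent, y_daughters):
    """R = (Y0 - sum(Yd)) / (Y0 + sum(Yd)) at a junction.
    R = 0 means a perfectly matched junction (no reflected wave)."""
    y_total = sum(y_daughters)
    return (y_parent - y_total) / (y_parent + y_total)

def input_admittance(y0, y_load, freq_hz, length_m, wave_speed):
    """Admittance seen at the inlet of a lossless tube of characteristic
    admittance y0 terminated by y_load (transmission-line formula)."""
    k = 2.0 * math.pi * freq_hz / wave_speed   # wavenumber
    t = 1j * cmath.tan(k * length_m)
    return y0 * (y_load + y0 * t) / (y0 + y_load * t)
```

    Applied recursively from the terminal branches back to the root, the second function yields the admittance along the full course of a vessel, which is the bookkeeping the paper performs over the measured network.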

  17. Hierarchical tailoring of strut architecture to control permeability of additive manufactured titanium implants.

    PubMed

    Zhang, Z; Jones, D; Yue, S; Lee, P D; Jones, J R; Sutcliffe, C J; Jones, E

    2013-10-01

    Porous titanium implants are a common choice for bone augmentation. Implants for spinal fusion and repair of non-union fractures must encourage blood flow after implantation so that there is sufficient cell migration, nutrient and growth factor transport to stimulate bone ingrowth. Additive manufacturing techniques allow a large number of pore network designs. This study investigates how the design factors offered by the selective laser melting technique can be used to alter the implant architecture on multiple length scales to control and even tailor the flow. Permeability is a convenient parameter that characterises flow, correlating with structure openness (interconnectivity and pore window size), tortuosity and hence flow shear rates. Using experimentally validated computational simulations, we demonstrate how additive manufacturing can be used to tailor implant properties by controlling surface roughness at a microstructural level (microns), and by altering the strut ordering and density at a mesoscopic level (millimetres). Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
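
    Permeability as used above is typically recovered from simulated or measured flow via Darcy's law. A minimal sketch (SI units assumed; this is the standard reduction, not the authors' validated simulation pipeline):

```python
def darcy_permeability(flow_m3s, viscosity_pa_s, length_m, area_m2, dp_pa):
    """Rearranged Darcy's law, k = Q*mu*L / (A*dP): the single scalar
    that summarizes how easily fluid passes through the porous sample."""
    return flow_m3s * viscosity_pa_s * length_m / (area_m2 * dp_pa)
```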

  18. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
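
    The Jacobi method mentioned above is a natural fit for data-flow execution because each sweep reads only the previous iterate. A minimal sequential sketch for a 2-D Laplace problem (illustrative only; the paper's tagged-token graph implementation is not reproduced):

```python
def jacobi_step(u):
    """One Jacobi sweep: every interior cell becomes the average of its
    four neighbours from the PREVIOUS iterate, so all updates in a sweep
    are independent -- the property that maps onto a data-flow graph."""
    n, m = len(u), len(u[0])
    new = [row[:] for row in u]           # boundaries are kept fixed
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return new

def solve_laplace(u, tol=1e-6, max_iter=10000):
    """Iterate until the largest cell change falls below tol."""
    for _ in range(max_iter):
        new = jacobi_step(u)
        diff = max(abs(new[i][j] - u[i][j])
                   for i in range(len(u)) for j in range(len(u[0])))
        u = new
        if diff < tol:
            break
    return u
```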

  19. HESS Opinions: Functional units: a novel framework to explore the link between spatial organization and hydrological functioning of intermediate scale catchments

    NASA Astrophysics Data System (ADS)

    Zehe, E.; Ehret, U.; Pfister, L.; Blume, T.; Schröder, B.; Westhoff, M.; Jackisch, C.; Schymanski, S. J.; Weiler, M.; Schulz, K.; Allroggen, N.; Tronicke, J.; Dietrich, P.; Scherer, U.; Eccard, J.; Wulfmeyer, V.; Kleidon, A.

    2014-03-01

    This opinion paper proposes a novel framework for exploring how spatial organization, alongside spatial heterogeneity, controls the functioning of intermediate-scale catchments of organized complexity. The key idea is that spatial organization in landscapes implies that the functioning of intermediate-scale catchments is controlled by a hierarchy of functional units: hillslope-scale lead topologies and embedded elementary functional units (EFUs). We argue that similar soils and vegetation communities, and thus also soil structures, "co-developed" within EFUs in an adaptive, self-organizing manner as they were exposed to similar flows of energy, water and nutrients from the past to the present. Members of the same EFU class are thus deemed to belong to the same ensemble with respect to controls on the energy balance and the related vertical flows of capillary-bound soil water and heat. Members of superordinate lead topologies are characterized by the same spatially organized arrangement of EFUs along the gradient driving lateral flows of free water, as well as a similar surface and bedrock topography. We hence postulate that they belong to the same ensemble with respect to controls on rainfall-runoff transformation and the related vertical and lateral fluxes of free water. We expect members of these functional units to share a distinct way in which their architecture controls the interplay of state dynamics and integral flows, typical for all members of one class but dissimilar among classes. This implies that we might infer the typical dynamic behavior of the most important classes of EFUs and lead topologies in a catchment by thoroughly characterizing a few members of each class. A major asset of the proposed framework, which steps beyond the concept of hydrological response units, is that it can be tested experimentally. In this respect, we reflect on suitable strategies based on stratified observations drawing on process hydrology, soil physics, geophysics, ecology and remote sensing, currently conducted in replicates of candidate functional units in the Attert basin (Luxembourg), to search for typical and similar functional and structural characteristics. A second asset of this framework is that it blueprints a way towards a structurally more adequate model concept for water and energy cycles in intermediate-scale catchments, one that balances necessary complexity with falsifiability. This is because EFUs and lead topologies are deemed to mark a hierarchy of "scale breaks" where simplicity with respect to the energy balance and streamflow generation emerges from spatially organized process-structure interactions. This offers the opportunity for simplified descriptions of these processes that are nevertheless physically and thermodynamically consistent. In this respect we reflect on a candidate model structure that (a) may accommodate distributed observations of states, and especially terrestrial controls on driving gradients, to constrain the space of feasible model structures, and (b) allows testing the possible added value of organizing principles for understanding the role of spatial organization from an optimality perspective.

  20. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data-flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock-freeness in data-flow graphs are derived. The data-flow graph is used as a model to represent asynchronous concurrent computer architectures, including data-flow computers.
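
    One classical liveness condition for marked graphs of this kind is that every directed cycle must carry at least one token; equivalently, the subgraph of token-free edges must be acyclic. The sketch below checks that condition (a simplified stand-in for the paper's derived conditions, not a reproduction of them):

```python
def is_live(nodes, edges):
    """edges maps (src, dst) -> initial token count.  A marked graph is
    live iff every directed cycle carries at least one token, i.e. iff
    the subgraph of token-free edges is acyclic."""
    adj = {n: [] for n in nodes}
    for (u, v), tokens in edges.items():
        if tokens == 0:
            adj[u].append(v)          # keep only token-free edges

    WHITE, GREY, BLACK = 0, 1, 2      # DFS colouring for cycle detection
    color = {n: WHITE for n in nodes}

    def acyclic_from(n):
        color[n] = GREY
        for m in adj[n]:
            if color[m] == GREY:      # back edge: token-free cycle
                return False
            if color[m] == WHITE and not acyclic_from(m):
                return False
        color[n] = BLACK
        return True

    return all(color[n] != WHITE or acyclic_from(n) for n in nodes)
```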

  1. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  2. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  3. Proceedings of the International Conference on Parallel Architectures and Compilation Techniques Held 24-26 August 1994 in Montreal, Canada

    DTIC Science & Technology

    1994-08-26

    an Integrated Circuit Global Router. In Proc. of PPEARS 88, pages 138-145, 1988. [7] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An...Computer Architecture, 1992. [5] S. Sakai, Y. Yamaguchi, K. Hiraki, Y. Kodama, and T. Yuba. An architecture of a data-flow single chip processor. In Int...EM-4 and sparing time for technical discussions. We also thank Prof. Kei Hiraki at the Univ. of Tokyo for his helpful comments. Hidehiko Masuhara's

  4. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. 
The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
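
    For intuition, the flow-accumulation step itself can be sketched sequentially. The version below uses a single-flow-direction rule and a height-ordered sweep over a tiny DEM; the paper's contribution, the GPU-parallel MFD algorithm with a graph-theory-based strategy, is not reproduced here.

```python
def flow_accumulation(dem):
    """Single-flow-direction accumulation on a (preprocessed) DEM grid.
    Each cell contributes one unit of flow, routed to its steepest
    downslope neighbour; cells are processed from highest to lowest so
    every upstream contribution is complete before it is passed on."""
    rows, cols = len(dem), len(dem[0])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

    def downstream(i, j):
        best, cell = 0.0, None
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                drop = (dem[i][j] - dem[ni][nj]) / (di * di + dj * dj) ** 0.5
                if drop > best:       # strictly downslope only
                    best, cell = drop, (ni, nj)
        return cell

    acc = [[1] * cols for _ in range(rows)]
    order = sorted(((dem[i][j], i, j)
                    for i in range(rows) for j in range(cols)), reverse=True)
    for _, i, j in order:
        d = downstream(i, j)
        if d is not None:
            acc[d[0]][d[1]] += acc[i][j]
    return acc
```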

  5. Study of active noise control system for a commercial HVAC unit

    NASA Astrophysics Data System (ADS)

    Devineni, Naga

    Acoustic noise is a common problem in everyday life, and it becomes serious when appliances in work and living areas generate it. One such appliance is the heating, ventilation and air-conditioning (HVAC) system, in which the blower fan and compressor units are housed together. Operation of an HVAC system creates two kinds of noise: noise due to the air flow and noise from the compressor. The two exhibit different signal properties and require different control strategies. There have been previous efforts to design noise control systems for HVAC systems. These include passive methods, which use sound-absorbing materials to attenuate noise, and active methods, which cancel noise by generating anti-noise. Passive methods are effective in limiting high-frequency noise but are inefficient at controlling the low-frequency noise from the compressor. Compressor noise is a strong low-frequency component that propagates through walls, so there is a need for active signal processing methods that take the signal properties into account to cancel the noise acoustically. The quasi-periodic nature of the compressor noise is exploited in noise modeling, which aids in implementing an adaptive linear prediction filter to estimate the anti-noise [12]. In this thesis, a multi-channel architecture has been studied for a specific HVAC system in order to improve noise cancellation by creating a larger quiet zone. In addition to the multi-channel architecture, a real-time narrowband Active Noise Control (ANC) system was employed to cancel noise under practical conditions.
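
    The adaptive linear prediction at the heart of narrowband ANC can be sketched with a plain LMS predictor: a quasi-periodic signal is predicted from its own past samples, and the prediction error is what would remain after cancellation. The filter order and step size below are illustrative, not the thesis' tuned values.

```python
import math

def lms_predictor(signal, order=8, mu=0.01):
    """LMS adaptive linear prediction: predict signal[n] from the
    previous `order` samples; the error e drives the weight update."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]            # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x)) # predicted anti-noise
        e = signal[n] - y                        # residual after cancellation
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
        errors.append(e)
    return w, errors

# A pure tone stands in for the quasi-periodic compressor component.
tone = [math.sin(2.0 * math.pi * 0.05 * n) for n in range(2000)]
w, errors = lms_predictor(tone)
```

    For a stationary narrowband signal the predictor converges quickly, so late residuals are far smaller than early ones, which is exactly the quiet-zone effect the ANC system exploits.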

  6. Capability-Based Modeling Methodology: A Fleet-First Approach to Architecture

    DTIC Science & Technology

    2014-02-01

    reconnaissance (ISR) aircraft, or unmanned systems. Accordingly, a mission architecture used to model SAG operations for a given Fleet unit should include all...would use an ISR aircraft to increase fidelity of a targeting solution; another mission thread to show how unmanned systems can augment targeting...unmanned systems. Therefore, an architect can generate, from a comprehensive SAG mission architecture, individual mission threads that model how a SAG

  7. Inert gas clearance from tissue by co-currently and counter-currently arranged microvessels

    PubMed Central

    Lu, Y.; Michel, C. C.

    2012-01-01

    To elucidate the clearance of dissolved inert gas from tissues, we have developed numerical models of gas transport in a cylindrical block of tissue supplied by one or two capillaries. With two capillaries, attention is given to the effects of co-current and counter-current flow on tissue gas clearance. Clearance by counter-current flow is compared with clearance by a single capillary or by two co-currently arranged capillaries. Effects of the blood velocity, solubility, and diffusivity of the gas in the tissue are investigated using parameters with physiological values. It is found that under the conditions investigated, almost identical clearances are achieved by a single capillary as by a co-current pair when the total flow per tissue volume in each unit is the same (i.e., flow velocity in the single capillary is twice that in each co-current vessel). For both co-current and counter-current arrangements, approximate linear relations exist between the tissue gas clearance rate and tissue blood perfusion rate. However, the counter-current arrangement of capillaries results in less-efficient clearance of the inert gas from tissues. Furthermore, this difference in efficiency increases at higher blood flow rates. At a given blood flow, the simple conduction-capacitance model, which has been used to estimate tissue blood perfusion rate from inert gas clearance, underestimates gas clearance rates predicted by the numerical models for single vessel or for two vessels with co-current flow. This difference is accounted for in discussion, which also considers the choice of parameters and possible effects of microvascular architecture on the interpretation of tissue inert gas clearance. PMID:22604885
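
    The simple conduction-capacitance model referred to above treats the tissue as a well-mixed compartment cleared mono-exponentially, at a rate set by perfusion and the tissue/blood partition coefficient. A minimal sketch of that reference model (parameter values illustrative; the paper's numerical capillary models are not reproduced):

```python
import math

def clearance_curve(c0, perfusion_ml_per_ml_min, partition_coeff, t_min):
    """Kety-type washout: C(t) = C0 * exp(-(f / lambda) * t), where f is
    the blood perfusion rate and lambda the tissue/blood partition
    coefficient of the inert gas."""
    k = perfusion_ml_per_ml_min / partition_coeff
    return c0 * math.exp(-k * t_min)
```

    Fitting this curve to measured washout and inverting for f is how perfusion is estimated from inert gas clearance, the procedure the paper shows can underestimate the clearance predicted by the detailed models.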

  8. Modeling and Improving Information Flows in the Development of Large Business Applications

    NASA Astrophysics Data System (ADS)

    Schneider, Kurt; Lübke, Daniel

    Designing a good architecture for an application is a wicked problem. Therefore, experience and knowledge are considered crucial for informing work in software architecture. However, many organizations do not pay sufficient attention to experience exploitation and architectural learning. Many users of information systems are not aware of the options and the needs to report problems and requirements. They often do not have time to describe a problem encountered in sufficient detail for developers to remove it. And there may be a lengthy process for providing feedback. Hence, the knowledge about problems and potential solutions is not shared effectively. Architectural knowledge needs to include evaluative feedback as well as decisions and their reasons (rationale).

  9. Fault tolerant architectures for integrated aircraft electronics systems, task 2

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.

    1984-01-01

    The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems is described and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

  10. Architecture for multi-technology real-time location systems.

    PubMed

    Rodas, Javier; Barral, Valentín; Escudero, Carlos J

    2013-02-07

    The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system where we can merge information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers and user applications. The paper presents the architecture design, work-flow, case study examples and some results to show how different technologies can be exploited to obtain a good estimation of a target position.
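
    The paper does not prescribe a particular fusion estimator; one common illustrative choice for merging position fixes from heterogeneous technologies is inverse-variance weighting, sketched below (the architecture's actual algorithms are supplied by the algorithm-developer role).

```python
def fuse_positions(estimates):
    """estimates: list of ((x, y), variance) pairs, one per technology.
    Each fix is weighted by 1/variance, so precise sensors dominate."""
    wsum = sum(1.0 / var for _, var in estimates)
    x = sum(p[0] / var for p, var in estimates) / wsum
    y = sum(p[1] / var for p, var in estimates) / wsum
    return x, y
```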

  11. [Architecture and movement].

    PubMed

    Rivallan, Armel

    2012-01-01

    Leading an architectural project means accompanying the movement which it induces within the teams. Between questioning, uncertainty and fear, the organisational changes inherent to the new facility must be subject to constructive and ongoing exchanges. Ethics, safety and training are revised and the unit projects are sometimes modified.

  12. Fault architecture and deformation processes within poorly lithified rift sediments, Central Greece

    NASA Astrophysics Data System (ADS)

    Loveless, Sian; Bense, Victor; Turner, Jenni

    2011-11-01

    Deformation mechanisms and resultant fault architecture are primary controls on the permeability of faults in poorly lithified sediments. We characterise fault architecture using outcrop studies, hand samples, thin sections and grain-size data from a minor (1-10 m displacement) normal-fault array exposed within Gulf of Corinth rift sediments, Central Greece. These faults are dominated by mixed zones with poorly developed fault cores and damage zones. In poorly lithified sediment deformation is distributed across the mixed zone as beds are entrained and smeared. We find particulate flow aided by limited distributed cataclasis to be the primary deformation mechanism. Deformation may be localised in more competent sediments. Stratigraphic variations in sediment competency, and the subsequent alternating distributed and localised strain causes complexities within the mixed zone such as undeformed blocks or lenses of cohesive sediment, or asperities at the mixed zone/protolith boundary. Fault tip bifurcation and asperity removal are important processes in the evolution of these fault zones. Our results indicate that fault zone architecture and thus permeability is controlled by a range of factors including lithology, stratigraphy, cementation history and fault evolution, and that minor faults in poorly lithified sediment may significantly impact subsurface fluid flow.

  13. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.

  14. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    PubMed

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMAX), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also covers the design and analysis of the check-node and variable-node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator, incorporated between the check-node and variable-node architecture, is used to reduce the error rate of the proposed LDPC architecture. The decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures.
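
    For readers unfamiliar with the check-node/variable-node interplay, a toy hard-decision bit-flipping decoder over a small parity-check matrix conveys the idea. The (7,4) Hamming matrix below is only a stand-in (it is not low-density, and the paper's Euclidean orthogonal generator is not reproduced).

```python
# Parity-check matrix H: each row is a check node, each column a
# variable node (a received bit).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    """One parity result per check node; all-zero means a valid codeword."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def bit_flip_decode(bits, max_iter=10):
    """Repeatedly flip the bit involved in the most failed checks."""
    bits = list(bits)
    for _ in range(max_iter):
        s = syndrome(bits)
        if not any(s):
            return bits
        votes = [sum(s[i] for i in range(len(H)) if H[i][j])
                 for j in range(len(bits))]
        bits[votes.index(max(votes))] ^= 1
    return bits
```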

  15. 26 CFR 1.993-1 - Definition of qualified export receipts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... this section. (h) Engineering and architectural services—(1) In general. Qualified export receipts of a DISC include gross receipts from engineering services (as described in subparagraph (5) of this... within or without the United States. (2) Services included. Engineering and architectural services...

  16. 26 CFR 1.993-1 - Definition of qualified export receipts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... this section. (h) Engineering and architectural services—(1) In general. Qualified export receipts of a DISC include gross receipts from engineering services (as described in subparagraph (5) of this... within or without the United States. (2) Services included. Engineering and architectural services...

  17. 26 CFR 1.993-1 - Definition of qualified export receipts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... this section. (h) Engineering and architectural services—(1) In general. Qualified export receipts of a DISC include gross receipts from engineering services (as described in subparagraph (5) of this... within or without the United States. (2) Services included. Engineering and architectural services...

  18. 26 CFR 1.993-1 - Definition of qualified export receipts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... this section. (h) Engineering and architectural services—(1) In general. Qualified export receipts of a DISC include gross receipts from engineering services (as described in subparagraph (5) of this... within or without the United States. (2) Services included. Engineering and architectural services...

  19. Statewide ITS Architecture Development : A Case Study. Arizona's Rural Statewide ITS Architecture : Building a Framework for Statewide ITS Integration.

    DOT National Transportation Integrated Search

    1999-09-01

    We have scanned the country and brought together the collective wisdom and expertise of transportation professionals implementing Intelligent Transportation Systems (ITS) projects across the United States. This information will prove helpful as you s...

  20. Design of Architectures and Materials in In-Plane Micro-supercapacitors: Current Status and Future Challenges.

    PubMed

    Qi, Dianpeng; Liu, Yan; Liu, Zhiyuan; Zhang, Li; Chen, Xiaodong

    2017-02-01

    The rapid development of integrated electronics and the boom in miniaturized and portable devices have increased the demand for miniaturized and on-chip energy storage units. Currently thin-film batteries or microsized batteries are commercially available for miniaturized devices. However, they still suffer from several limitations, such as short lifetime, low power density, and complex architecture, which limit their integration. Supercapacitors can surmount all these limitations. Particularly for micro-supercapacitors with planar architectures, due to their unique design of the in-plane electrode finger arrays, they possess the merits of easy fabrication and integration into on-chip miniaturized electronics. Here, the focus is on the different strategies to design electrode finger arrays and the material engineering of in-plane micro-supercapacitors. It is expected that the advances in micro-supercapacitors with in-plane architectures will offer new opportunities for the miniaturization and integration of energy-storage units for portable devices and on-chip electronics. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Paleolatitude Records of the Western Pacific as Determined From DSDP/ODP Basaltic Cores

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Zhao, X.; Yan, M.; Riisager, P.; Lo, C.

    2008-12-01

    We report here new paleomagnetic, rock magnetic, and Ar-Ar geochronologic results from our recently completed project, which aims to determine the Cretaceous paleomagnetic paleolatitude record and the architecture of the volcanic basins in the western Pacific Ocean. The new results, in concert with our paleomagnetic research on ODP rocks recovered from the Ontong Java Plateau (OJP), suggest that various plateaus and basins in the western Pacific had plate-tectonic settings (paleolatitudes) and ages similar to those of the OJP at the time of emplacement (~120 Ma). Basalts sampled from Deep Sea Drilling Project (DSDP) and Ocean Drilling Program (ODP) sites of the greater OJP, as well as from obducted sections on the Solomon Islands of Malaita and Santa Isabel, are strikingly uniform in petrologic and geochemical characteristics. Many of these cores, especially those from DSDP sites, have not been well studied paleomagnetically and are hence underutilized for tectonic study. We carefully re-sampled and systematically demagnetized and analyzed 925 basaltic cores from 15 sites drilled by 10 DSDP/ODP legs in the western and central Pacific, which represents a unique opportunity for averaging out secular variation to obtain a well-defined paleolatitude estimate. The most important findings from this study include: (1) most basins formed during the Cretaceous long normal magnetic period, with Ar-Ar ages similar to the OJP's; (2) East Mariana, Pigafetta, the upper flow unit in the Nauru basin and the Mid-Pacific guyots all yielded paleolatitudes similar to those for the OJP, suggesting that the volcanic eruptions of flows in these basins are likely related to the emplacement of the OJP; and (3) the lower flow unit in the Nauru basin yields a paleolatitude ~10° further south and an age more than 10 m.y. older than those of the OJP.

  2. The exhumation of the (U)HP rocks of the Central and Western Penninic Alps: comparison study between thermo-mechanical models and field data

    NASA Astrophysics Data System (ADS)

    Schenker, Filippo Luca; Schmalholz, Stefan M.; Baumgartner, Lukas P.; Pleuger, Jan

    2015-04-01

    The Central and Western Penninic (CWP) Alps form an orogenic wedge of imbricate tectonic nappes. Orogenic wedges typically form at depths < 60 km. Nevertheless, a few nappes and massifs (i.e. Adula/Cima Lunga, Dora-Maira, Monte Rosa, Gran Paradiso, Zermatt-Saas) exhibit High- and Ultra-High-Pressure (U)HP metamorphic rocks, suggesting that they were buried by subduction to depths > 60 km and subsequently exhumed into the accretionary wedge. Mechanically, the exhumation of the (U)HP rocks from mantle depths can be explained by two contrasting buoyancy-driven models: (1) overall return flow of rocks in a subduction channel and (2) upward flow of individual, lighter rock units within a heavier material (Stokes flow). In this study we compare published numerical exhumation models of (1) and (2) with structural and metamorphic data of the CWP Alps. Model (1) predicts the exhumation of large volumes of (U)HP rocks within a viscous channel (1100-500 km2 in a 2D cross-section through the subduction zone). The moderate volume (e.g. ~7 km2 in a geological cross-section of the UHP unit of the Dora-Maira) and the coherent architecture of the (U)HP nappes suggest that exhumation through (1) is unlikely for the (U)HP nappes of the CWP Alps. Model (2) predicts the exhumation of appropriate volumes of (U)HP rocks, but generally the (U)HP rocks exhume vertically into the overriding plate and are not incorporated into the orogenic wedge. Nevertheless, exhumation through (2) is feasible with either a vertical or an extremely viscous and dense subduction channel. Whether these characteristics are applicable to the CWP UHP nappes will be discussed in light of field observations.
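    The Stokes-flow mechanism in model (2) can be quantified with the classical rise velocity of a buoyant sphere in a viscous medium, v = 2 Δρ g r² / (9 μ). The parameter values below are illustrative assumptions, not numbers from the paper:

```python
def stokes_velocity(d_rho, radius_m, mu, g=9.81):
    """Stokes rise velocity (m/s) of a buoyant sphere:
    v = 2 * d_rho * g * r**2 / (9 * mu)."""
    return 2.0 * d_rho * g * radius_m**2 / (9.0 * mu)

# Assumed values: 300 kg/m^3 density deficit, 5 km radius body,
# 1e19 Pa s effective viscosity of the surrounding material.
v = stokes_velocity(d_rho=300.0, radius_m=5e3, mu=1e19)  # m/s
v_per_yr = v * 3.156e7  # convert to m/yr; ~5 cm/yr for these assumptions
```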

  3. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi; Hixon, Duane

    1993-01-01

    The work done under this project was documented in detail as the Ph.D. dissertation of Dr. Duane Hixon. The objectives of the research project were evaluation of the generalized minimum residual method (GMRES) as a tool for accelerating 2-D and 3-D unsteady flow computations, and evaluation of the suitability of the GMRES algorithm for unsteady flows computed on parallel computer architectures.
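    For reference, GMRES builds a Krylov basis by Arnoldi iteration and solves a small least-squares problem for the residual-minimizing solution. The following is a minimal textbook sketch, not the implementation developed in the dissertation:

```python
import numpy as np

def gmres(A, b, m=30, tol=1e-10):
    """Minimal non-restarted GMRES: Arnoldi iteration plus a small
    least-squares solve for min ||beta*e1 - H y||."""
    n = len(b)
    x0 = np.zeros(n)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:         # happy breakdown: exact solution found
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y) < tol:
            return x0 + Q[:, :j + 1] @ y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y
```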

  4. Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.

    PubMed

    Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein

    2015-12-01

    Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted attention for medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity. This is due to the fact that without safety considerations, small integrated hardware will endanger patients' lives. Therefore, some principles must be proposed for constructing wearable systems in healthcare so that the existing concerns are addressed. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high achievable fault tolerance due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.

  5. Architecture and Politics in Central Europe

    DTIC Science & Technology

    2004-12-01

    the architectural style should be used and which one is appropriate for democracy. Then the argument will be made that rhetoric is the key to this...arts in a democracy. He argued that in the United States, democracy has in fact forced the decline of high art. His argument went that a decrease in...16 There are many arguments, though, that despite these stylistic limitations and concerns a style is essential in democratic architecture. What is

  6. Architecture of Vagal Motor Units Controlling Striated Muscle of Esophagus: Peripheral Elements Patterning Peristalsis?

    PubMed Central

    Powley, Terry L.; Mittal, Ravinder K.; Baronowsky, Elizabeth A.; Hudson, Cherie N.; Martin, Felecia N.; McAdams, Jennifer L.; Mason, Jacqueline K.; Phillips, Robert J.

    2013-01-01

    Little is known about the architecture of the vagal motor units that control esophageal striated muscle, in spite of the fact that these units are necessary, and responsible, for peristalsis. The present experiment was designed to characterize the motor neuron projection fields and terminal arbors forming esophageal motor units. Nucleus ambiguus compact formation neurons of the rat were labeled by bilateral intracranial injections of the anterograde tracer dextran biotin. After tracer transport, thoracic and abdominal esophagi were removed and prepared as whole mounts of muscle wall without mucosa or submucosa. Labeled terminal arbors of individual vagal motor neurons (n = 78) in the esophageal wall were inventoried, digitized and analyzed morphometrically. The size of individual vagal motor units innervating striated muscle, throughout thoracic and abdominal esophagus, averaged 52 endplates per motor neuron, a value indicative of fine motor control. A majority (77%) of the motor terminal arbors also issued one or more collateral branches that contacted neurons, including nitric oxide synthase-positive neurons, of local myenteric ganglia. Individual motor neuron terminal arbors co-innervated, or supplied endplates in tandem to, both longitudinal and circular muscle fibers in roughly similar proportions (i.e., two endplates to longitudinal for every three endplates to circular fibers). Both the observation that vagal motor unit collaterals project to myenteric ganglia and the fact that individual motor units co-innervate longitudinal and circular muscle layers are consistent with the hypothesis that elements contributing to peristaltic programming inhere, or are “hardwired,” in the peripheral architecture of esophageal motor units. PMID:24044976

  7. Architecture of vagal motor units controlling striated muscle of esophagus: peripheral elements patterning peristalsis?

    PubMed

    Powley, Terry L; Mittal, Ravinder K; Baronowsky, Elizabeth A; Hudson, Cherie N; Martin, Felecia N; McAdams, Jennifer L; Mason, Jacqueline K; Phillips, Robert J

    2013-12-01

    Little is known about the architecture of the vagal motor units that control esophageal striated muscle, in spite of the fact that these units are necessary, and responsible, for peristalsis. The present experiment was designed to characterize the motor neuron projection fields and terminal arbors forming esophageal motor units. Nucleus ambiguus compact formation neurons of the rat were labeled by bilateral intracranial injections of the anterograde tracer dextran biotin. After tracer transport, thoracic and abdominal esophagi were removed and prepared as whole mounts of muscle wall without mucosa or submucosa. Labeled terminal arbors of individual vagal motor neurons (n=78) in the esophageal wall were inventoried, digitized and analyzed morphometrically. The size of individual vagal motor units innervating striated muscle, throughout thoracic and abdominal esophagus, averaged 52 endplates per motor neuron, a value indicative of fine motor control. A majority (77%) of the motor terminal arbors also issued one or more collateral branches that contacted neurons, including nitric oxide synthase-positive neurons, of local myenteric ganglia. Individual motor neuron terminal arbors co-innervated, or supplied endplates in tandem to, both longitudinal and circular muscle fibers in roughly similar proportions (i.e., two endplates to longitudinal for every three endplates to circular fibers). Both the observation that vagal motor unit collaterals project to myenteric ganglia and the fact that individual motor units co-innervate longitudinal and circular muscle layers are consistent with the hypothesis that elements contributing to peristaltic programming inhere, or are "hardwired," in the peripheral architecture of esophageal motor units. © 2013.

  8. A Network Scheduling Model for Distributed Control Simulation

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot

    2016-01-01

    Distributed engine control is a hardware technology that radically alters the architecture for aircraft engine control systems. In and of itself, it does not change the function of control; rather, it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward, it will be transmitted digitally in serial fashion over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore, it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
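    A common way to model asynchronous network data seen by a fixed-rate controller is a zero-order hold: at each loop tick the controller uses the most recently received value. The sketch below illustrates that general idea; it is an assumption about the modeling approach, not the paper's methodology:

```python
def schedule(messages, loop_dt, n_steps):
    """Simulate what a fixed-rate controller sees when sensor packets
    arrive asynchronously over a network.

    messages: list of (arrival_time, value), sorted by arrival time.
    Returns the value visible at each loop tick (zero-order hold);
    None until the first packet arrives."""
    held = None
    out = []
    i = 0
    for k in range(n_steps):
        t = k * loop_dt
        # consume every packet that has arrived by this tick
        while i < len(messages) and messages[i][0] <= t:
            held = messages[i][1]
            i += 1
        out.append(held)
    return out
```

A packet arriving just after a tick is not seen until the next loop interval, which is the scheduling latency the abstract refers to.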

  9. Development and Flight Testing of an Adaptable Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.; Taylor, B. Douglas; Brett, Rube R.

    2003-01-01

    Development and testing of an adaptable wireless health-monitoring architecture for a vehicle fleet are presented. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained adaptable expert system. The remote data acquisition unit has an eight-channel programmable digital interface that allows the user discretion in choosing the type of sensors, the number of sensors, the sensor sampling rate, and the sampling duration for each sensor. The architecture provides a framework for tributary analysis. All measurements at the lowest operational level are reduced to provide the analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In the framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the main landing gear of the NASA Langley B757.
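    The tributary pattern, in which each level forwards only reduced analysis results rather than raw measurements, can be sketched as a recursive reduction over a tree of units. This is a hypothetical illustration of the pattern, not the system's actual software:

```python
def summarize(samples):
    """Reduce raw measurements at the lowest level to a compact result."""
    return {"mean": sum(samples) / len(samples), "peak": max(samples)}

def tributary(node):
    """node is either a leaf {'samples': [...]} or an internal
    {'children': [...]}. Only summaries flow upward, never raw data,
    mimicking the reduced-telemetry design."""
    if "samples" in node:
        return summarize(node["samples"])
    results = [tributary(child) for child in node["children"]]
    return {"mean": sum(r["mean"] for r in results) / len(results),
            "peak": max(r["peak"] for r in results)}
```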

  10. Functional units and lead topologies: a hierarchical framework for observing and modeling the interplay of structures, storage dynamics and integral mass and energy flows in lower mesoscale catchments

    NASA Astrophysics Data System (ADS)

    Zehe, Erwin; Jackisch, Conrad; Blume, Theresa; Haßler, Sibylle; Allroggen, Niklas; Tronicke, Jens

    2013-04-01

    The CAOS Research Unit recently proposed a hierarchical classification scheme that subdivides a catchment into what we tentatively name classes of functional entities. The scheme puts the gradients driving mass and energy flows, and their controls, at the top of the hierarchy, and the arrangement of landscape attributes controlling flow resistances along these driving gradients (for instance soil types and apparent preferential pathways) at the second level. We name these functional entities lead topology classes, to highlight that they are characterized by a spatially ordered arrangement of landscape elements along a superordinate driving gradient. Our idea is that these lead topology classes have a distinct way in which their structural and textural architecture controls the interplay of storage dynamics and integral response behavior, one that is typical for all members of a class but dissimilar between different classes. This implies that we might gain exemplary understanding of the typical dynamic behavior of a class by thoroughly studying a few class members. We propose that the main integral catchment functions - mass export and drainage, mass redistribution and storage, energy exchange with the atmosphere, as well as energy redistribution and storage - result from spatially organized interactions of processes within lead topologies that operate at different scale levels and partly dominate under different conditions. We distinguish: 1) Lead topologies controlling the land surface energy balance during radiation-driven conditions at the plot/pedon scale level. In this case energy fluxes dominate and deplete a vertical temperature gradient that is built up by depleting a gradient in radiation fluxes. Water is a facilitator in this concert due to its high specific heat of vaporization. Slow vertical water fluxes in soil dominate, driven by vertical gradients in atmospheric water potential, chemical potential in the plant, and soil hydraulic potentials. 
2) Lead topologies controlling fast drainage and the generation of stream flow during rainfall events at the hillslope scale level: fast vertical and lateral mass fluxes dominate. They are driven by vertical and lateral gradients in pressure heads, which build up by depleting the kinetic energy/velocity gradient of rainfall when it hits the ground, or of vertical subsurface flows that "hit" a layer of low permeability. 3) Lead topologies controlling slow drainage and its supply, and thus creating memory, at the catchment scale level: these are the groundwater system and the stream, including the riparian zone. Permanent lateral water flows dominate, driven by permanently active lateral gradients in pressure heads. Event-scale stream flow generation and energy exchange with the atmospheric boundary layer are organized by the first two types of lead topologies, and their dominance changes with the prevailing type of boundary conditions. We furthermore propose that lead topologies at the plot and hillslope scale levels can be further subdivided into least functional entities that we name classes of elementary functional units. These classes of elementary functional units co-evolved while being exposed to similar superordinate vertical gradients in a self-reinforcing manner. Being located either at the hilltop (sediment source area), midslope (sediment transport area), or hillfoot/riparian zone (sediment deposit area), they experienced similar weathering processes (past water, energy, and nutrient flows), causing the formation of similar soil textures in different horizons. This implies, depending on hillslope position and aspect, the formation of distinct niches (with respect to water, nutrient, and sunlight availability) and thus "similar filters" that select distinct natural communities of animal and vegetation species. 
This in turn implies similarity with respect to the formation of biotic flow networks (ant, worm, mole, and vole burrow systems, as well as root systems), which feeds back on vertical and lateral water/mass and thermal energy flows, and so on. The idea is that members of EFU classes interact within lead topologies along a hierarchy of driving potential gradients, and that these interactions are mediated by a hierarchy of connected flow networks such as macropores, root networks, or lateral pipe systems. We hypothesize that members of a functional unit class are similar with respect to the time-invariant controls of the vertical gradients (soil hydraulic potentials, soil temperature, plant water potential) and the flow resistances in the vertical direction (plant and soil albedo, soil hydraulic and thermal conductivity, vertical macropore networks). This implies that members of an EFU class behave functionally similarly, at least with respect to vertical flows of water and heat: we may gain exemplary understanding of the typical dynamic behavior of the class by thoroughly studying a few class members. In the following we thus use the terms "elementary functional units, EFUs" and "elementary functional unit class, EFU class" as synonyms. We propose that a thorough understanding of the behavior of a few representatives of the most important EFU classes, and of their interactions within a hierarchy of lead topology classes, is sufficient for understanding and distributed modeling of event-scale stream flow production under rainfall-driven conditions and of energy exchange with the atmosphere under radiation-driven conditions. Good and not surprising news is that the lead topologies controlling stream flow contribution are an interconnected, ordered arrangement of the lead topologies that control energy exchange. 
We suggest that combining the related model approaches, which use simplified but physically based approaches to simulate dynamics in the saturated zone, the riparian zone, and the river network, results in a structurally more adequate model framework for catchments of organized complexity. The feasibility of this concept is currently being tested in the Attert catchment by setting up pseudo-replicas of field experiments and a distributed monitoring network in several members of first-guess EFUs and superordinate lead topology classes. We combine geophysical and soil physical surveys, artificial tracer tests, analysis of stable isotopes, and ecological surveys with distributed sensor clusters that permanently monitor meteorological variables, soil moisture and matric potential, piezometric heads, etc. Within the proposed study we will present first results, especially from the sensor clusters and the geophysical survey. Using geostatistical methods, we will work out to which extent members within a candidate EFU class are similar with respect to subsurface structures such as depth to bedrock and soil properties, as well as with respect to soil moisture/storage dynamics. Secondly, we will work out whether structurally similar hillslopes produce a similar event-scale stream flow contribution, which of course depends on the degree of similarity of a) the rainfall forcing they receive and b) their wetness state. To this end we will perform virtual experiments with the physically based model CATFLOW by perturbing behavioral model structures. These have been shown to portray system behavior and its architecture in the sense that they reproduce distributed observations of soil moisture and subsurface storm flow and represent the observed structural and textural signatures of soils, flow networks, and vegetation.

  11. Evaluation of an Atmosphere Revitalization Subsystem for Deep Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Abney, Morgan B.; Conrad, Ruth E.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Knox, James C.; Newton, Robert L.; Parrish, Keith J.; Takada, Kevin C.; hide

    2015-01-01

    An Atmosphere Revitalization Subsystem (ARS) suitable for deployment aboard deep space exploration mission vehicles has been developed and functionally demonstrated. This modified ARS process design architecture was derived from the International Space Station's (ISS) basic ARS. Primary functions considered in the architecture include trace contaminant control, carbon dioxide removal, carbon dioxide reduction, and oxygen generation. Candidate environmental monitoring instruments were also evaluated. The process architecture rearranges unit operations and employs equipment operational changes to reduce mass, simplify, and improve the functional performance for trace contaminant control, carbon dioxide removal, and oxygen generation. Results from the integrated functional demonstration are summarized and compared to the performance observed during previous testing conducted on an ISS-like subsystem architecture and a similarly evolved process architecture. Considerations for further subsystem architecture and process technology development are discussed.

  12. The architecture and artistic features of high-rise buildings in USSR and the United States of America during the first half of the twentieth century

    NASA Astrophysics Data System (ADS)

    Golovina, Svetlana; Oblasov, Yurii

    2018-03-01

    The skyscraper is a significant architectural structure in the world's largest cities. The appearance of a skyscraper in a city's architectural composition enhances its status, introduces dynamics into the shape of the city, and modernizes the existing environment. Its architectural structure can have both expressive, triumphal forms and ascetic ones. A deep understanding of the architecture of high-rise buildings requires consideration of several criteria. Various approaches can be found in the competitive development of high-rise buildings in Moscow and in US cities in the middle of the twentieth century. In this article we consider how, and on what basis, the architectural decisions of high-rise buildings were formed.

  13. Energy Efficiency for Architectural Drafting Instructors.

    ERIC Educational Resources Information Center

    Scharmann, Larry, Ed.

    Intended primarily but not solely for use at the postsecondary level, this curriculum guide contains five units on energy efficiency that were designed to be incorporated into an existing program in architectural drafting. The following topics are examined: energy conservation awareness (residential energy use and audit procedures); residential…

  14. The Information-Seeking Habits of Architecture Faculty

    ERIC Educational Resources Information Center

    Campbell, Lucy

    2017-01-01

    This study examines results from a survey of architecture faculty across the United States investigating information-seeking behavior and perceptions of library services. Faculty were asked to rank information sources they used for research, teaching, and creativity within their discipline. Sources were ranked similarly across these activities,…

  15. Our Town

    ERIC Educational Resources Information Center

    McClure, Connie

    2010-01-01

    This article describes how the author teaches a fourth- and fifth-grade unit on architecture called the Art and Science of Planning Buildings. Rockville, Indiana has fine examples of architecture, including log cabins, classic Greek columns, Victorian houses, a mission-style theater, and Frank Lloyd Wright prairie-style homes. After reading…

  16. Low Band Gap Thiophene-Perylene Diimide Systems with Tunable Charge Transport Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, Ganapathy; Kale, Tejaswini S.; Keerthi, Ashok

    2011-01-07

    Perylenediimide-pentathiophene systems with varied architectures of thiophene units were synthesized. The photophysical, electrochemical, and charge transport behavior of the synthesized compounds was studied. Both molecules showed a low band gap of ~1.4 eV. Surprisingly, the molecule with pentathiophene attached via the β-position to the PDI unit showed, upon annealing, a predominant hole mobility of 1 × 10^-4 cm^2 V^-1 s^-1, whereas the compound with branched pentathiophene attached via the β-position showed an electron mobility of 9.8 × 10^-7 cm^2 V^-1 s^-1. This suggests that charge transport properties can be tuned by simply varying the architecture of the pentathiophene units.

  17. Architecture for Multi-Technology Real-Time Location Systems

    PubMed Central

    Rodas, Javier; Barral, Valentín; Escudero, Carlos J.

    2013-01-01

    The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system that merges information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers, and user applications. The paper presents the architecture design, workflow, case study examples, and some results to show how different technologies can be exploited to obtain a good estimation of a target position. PMID:23435050
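    A common baseline for fusing position estimates from heterogeneous technologies is inverse-variance weighting, where more reliable sensors contribute more to the combined estimate. This is a hypothetical illustration of the general idea, not the paper's algorithm:

```python
def fuse(estimates):
    """Fuse independent 2-D position estimates by inverse-variance weighting.

    estimates: list of (x, y, variance) tuples, one per sensor technology.
    Returns (x, y, variance) of the combined estimate; the fused variance
    is always no larger than the best individual one."""
    weights = [1.0 / var for _, _, var in estimates]
    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return x, y, 1.0 / total
```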

  18. Electrical anisotropy of gas hydrate-bearing sand reservoirs in the Gulf of Mexico

    USGS Publications Warehouse

    Cook, Anne E.; Anderson, Barbara I.; Rasmus, John; Sun, Keli; Li, Qiming; Collett, Timothy S.; Goldberg, David S.

    2012-01-01

    We present new results and interpretations of the electrical anisotropy and reservoir architecture in gas hydrate-bearing sands using logging data collected during the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II. We focus specifically on sand reservoirs in Hole Alaminos Canyon 21 A (AC21-A), Hole Green Canyon 955 H (GC955-H), and Hole Walker Ridge 313 H (WR313-H). Using a new logging-while-drilling directional resistivity tool and a one-dimensional inversion developed by Schlumberger, we resolve the resistivity to current flowing parallel to the bedding, R∥, and the resistivity to current flowing perpendicular to the bedding, R⊥. We find the sand reservoir in Hole AC21-A to be relatively isotropic, with R∥ and R⊥ values close to 2 Ω m. In contrast, the gas hydrate-bearing sand reservoirs in Holes GC955-H and WR313-H are highly anisotropic. In these reservoirs, R∥ is between 2 and 30 Ω m, and R⊥ is generally an order of magnitude higher. Using Schlumberger's WebMI models, we were able to replicate multiple resistivity measurements and determine the formation resistivity of the gas hydrate-bearing sand reservoir in Hole WR313-H. The results showed that gas hydrate saturations within a single reservoir unit are highly variable. For example, the sand units in Hole WR313-H contain thin layers (on the order of 10-100 cm) with gas hydrate saturations varying between 15 and 95%. Our combined modeling results clearly indicate that the gas hydrate-bearing sand reservoirs in Holes GC955-H and WR313-H are highly anisotropic due to varying saturations of gas hydrate forming in thin layers within larger sand units.
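    The parallel/perpendicular contrast follows from basic layered-medium averaging: current parallel to bedding sees the layers as resistors in parallel (harmonic mean), while current perpendicular to bedding sees them in series (thickness-weighted arithmetic mean). The layer values below are illustrative assumptions, not logged data:

```python
def layered_resistivity(layers):
    """Effective resistivities of a stack of thin beds.

    layers: list of (thickness_fraction, resistivity_ohm_m); fractions sum to 1.
    Current parallel to bedding -> layers in parallel -> harmonic mean.
    Current perpendicular to bedding -> layers in series -> arithmetic mean."""
    r_par = 1.0 / sum(f / r for f, r in layers)
    r_perp = sum(f * r for f, r in layers)
    return r_par, r_perp

# Assumed example: half the section at 2 ohm-m (low hydrate saturation),
# half at 200 ohm-m (high saturation). R_perp comes out roughly an order
# of magnitude (or more) above R_par, as in the anisotropic reservoirs.
r_par, r_perp = layered_resistivity([(0.5, 2.0), (0.5, 200.0)])
```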

  19. Privacy impact assessment in the design of transnational public health information systems: the BIRO project.

    PubMed

    Di Iorio, C T; Carinci, F; Azzopardi, J; Baglioni, V; Beck, P; Cunningham, S; Evripidou, A; Leese, G; Loevaas, K F; Olympios, G; Federici, M Orsini; Pruna, S; Palladino, P; Skeie, S; Taverner, P; Traynor, V; Benedetti, M Massi

    2009-12-01

    To foster the development of a privacy-protective, sustainable cross-border information system in the framework of a European public health project, a targeted privacy impact assessment was implemented to identify the best architecture for a European information system for diabetes directly tapping into clinical registries. Four steps were used to provide input to software designers and developers: a structured literature search, analysis of data flow scenarios or options, creation of an ad hoc questionnaire, and the conduct of a Delphi procedure. The literature search identified a core set of relevant papers on privacy (n = 11). Technicians envisaged three candidate system architectures, with associated data flows, to source an information flow questionnaire that was submitted to the Delphi panel for the selection of the best architecture. A detailed scheme envisaging an "aggregation by group of patients" was finally chosen, based upon the exchange of finely tuned summary tables. Public health information systems should be carefully engineered only after a clear strategy for privacy protection has been planned, to avoid breaching current regulations and future concerns and to optimise the development of statistical routines. The BIRO (Best Information Through Regional Outcomes) project delivers a specific method of privacy impact assessment that can be conveniently used in similar situations across Europe.

  20. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
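    A bit-serial arithmetic unit processes one bit per clock cycle through a single full adder and a carry register, trading latency for a drastically smaller circuit, which is what makes it attractive at low device integration levels. A software sketch of the principle (illustrative, not SPEC's design):

```python
def bit_serial_add(a_bits, b_bits):
    """Add two equal-length little-endian (LSB-first) bit streams the way a
    bit-serial adder would: one full adder reused every clock, with the
    carry held in a one-bit register between cycles."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                       # sum bit of the full adder
        carry = (a & b) | (carry & (a ^ b))     # carry into the next cycle
        out.append(s)
    out.append(carry)                           # final carry-out bit
    return out
```

For example, 5 + 3 = 8: the streams [1, 0, 1] and [1, 1, 0] (LSB first) produce [0, 0, 0, 1].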

  1. The research of service provision based on service-oriented architecture for NGN

    NASA Astrophysics Data System (ADS)

    Jie, Yin; Nian, Zhou; Qian, Mao

    2007-11-01

    Service convergence is an important characteristic of NGN (Next Generation Networking), raising the question of how to integrate the service capabilities of the telecommunication network and the Internet. First, this article puts forward the concepts and characteristics of SOA (Service-Oriented Architecture) and Web Services, then discusses the relationship between them. Secondly, combined with five kinds of service provision in NGN, a service platform architecture design for NGN and a service development mode based on SOA are proposed. Finally, a specific example is analyzed with BPEL (Business Process Execution Language) in order to describe the service development flow based on SOA for NGN.

  2. Constellation's Command, Control, Communications and Information (C3I) Architecture

    NASA Technical Reports Server (NTRS)

    Breidenthal, Julian C.

    2007-01-01

    Operations concepts are highly effective for: 1) developing consensus; 2) discovering stakeholder needs, goals, and objectives; 3) defining the behavior of system components (especially emergent behaviors). An interoperability standard can provide an excellent lever for defining the capabilities needed for system evolution. Two categories of architectures are needed in a program of this size: 1) generic, needed for planning, design, and construction standards; 2) specific, needed for detailed requirement allocations and interface specs. A wide variety of architectural views are needed to address stakeholder concerns, including: 1) physical; 2) information (structure, flow, evolution); 3) processes (design, manufacturing, operations); 4) performance; 5) risk.

  3. Harnessing the Risk-Related Data Supply Chain: An Information Architecture Approach to Enriching Human System Research and Operations Knowledge

    NASA Technical Reports Server (NTRS)

    Buquo, Lynn E.; Johnson-Throop, Kathy A.

    2011-01-01

    An Information Architecture facilitates the understanding, and hence the harnessing, of the human system risk-related data supply chain, which in turn enhances the ability to securely collect, integrate, and share data assets that improve human system research and operations. By mapping the risk-related data flow from raw data to usable information and knowledge (think of it as a data supply chain), the Human Research Program (HRP) and Space Life Science Directorate (SLSD) are building an information architecture plan to leverage their existing, and often shared, IT infrastructure.

  4. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    PubMed Central

    Revathy, M.; Saravanan, R.

    2015-01-01

    Low-density parity-check (LDPC) codes have been adopted in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low power applications. This study also covers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator, which can be incorporated between the check and variable node architecture, is used to reduce the error rate of the proposed LDPC architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with conventional architectures. PMID:26065017
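    The check node/variable node split mentioned above is the standard message-passing structure of LDPC decoding. The paper's Euclidean orthogonal generator is specific to that design, so the sketch below instead shows a generic hard-decision bit-flipping decoder, which makes the two node roles concrete.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping LDPC decoding (generic sketch).

    H : (m, n) parity-check matrix over GF(2)
    r : length-n received hard bits
    Check nodes compute the syndrome; each variable node flips when a
    strict majority of its attached checks are unsatisfied.
    """
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2               # check-node stage
        if not syndrome.any():
            break                          # valid codeword found
        unsat = H.T @ syndrome             # unsatisfied checks per variable
        n_checks = H.sum(axis=0)           # checks attached to each variable
        flip = unsat * 2 > n_checks        # flip on strict majority
        if not flip.any():
            break                          # decoder stalled
        x = (x + flip) % 2                 # variable-node stage
    return x
```

    Hardware decoders pipeline exactly these two stages between dedicated check-node and variable-node units; soft-decision (min-sum or sum-product) variants follow the same graph structure.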

  5. Upper flow regime sheets, lenses and scour fills: Extending the range of architectural elements for fluvial sediment bodies

    NASA Astrophysics Data System (ADS)

    Fielding, Christopher R.

    2006-08-01

    Fluvial strata dominated internally by sedimentary structures of interpreted upper flow regime origin are moderately common in the rock record, yet their abundance is under-appreciated and many examples may go unnoticed. A spectrum of sedimentary structures is recognised, all of which occur over a wide range of scales: 1. cross-bedding with humpback, sigmoidal and ultimately low-angle cross-sectional foreset geometries (interpreted as recording the transition from dune to upper plane bed bedform stability field), 2. planar/flat lamination with parting lineation, characteristic of the upper plane bed phase, 3. flat and low-angle lamination with minor convex-upward elements, characteristic of the transition from upper plane bed to antidune stability fields, 4. convex-upward bedforms, down- and up-palaeocurrent-dipping, low-angle cross-bedding and symmetrical drapes, interpreted as the product of antidunes, and 5. backsets terminating updip against an upstream-dipping erosion surface, interpreted as recording chute and pool conditions. In some fluvial successions, the entirety or substantial portions of channel sandstone bodies may be made up of such structures. These Upper Flow Regime Sheets, Lenses and Scour Fills (UFR) are defined herein as an extension of Miall's [Miall, A.D., 1985. Architectural-element analysis: a new method of facies analysis applied to fluvial deposits. Earth Sci. Rev. 22: 261-308.] Laminated Sand Sheets architectural element. Given the conditions that favour preservation of upper flow regime structures (rapid changes in flow strength), it is suggested that the presence of UFR elements in ancient fluvial successions may indicate sediment accumulation under the influence of a strongly seasonal palaeoclimate that involves a pronounced seasonal peak in precipitation and runoff.

  6. Evaluation of hardware costs of implementing PSK signal detection circuit based on "system on chip"

    NASA Astrophysics Data System (ADS)

    Sokolovskiy, A. V.; Dmitriev, D. D.; Veisov, E. A.; Gladyshev, A. B.

    2018-05-01

    The article deals with the choice of architecture for the digital signal processing units implementing a PSK signal detection scheme. The required number of shift registers and computational processes when implementing the design as a "system on a chip" is used as the measure of architectural efficiency. A statistical estimate of the normalized code sequence offset in the signal synchronization scheme is used to compare the various hardware block architectures.
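    As context for the hardware cost comparison: the core operation of a PSK detection circuit is correlating incoming samples against a known code sequence, which is what the shift-register chains realize in hardware. A minimal software model of that operation (illustrative only; the article's actual block structure is not reproduced here):

```python
import numpy as np

def detect_bpsk(samples, code, threshold):
    """Sliding correlation of received samples against a reference code.

    Returns the offset of the correlation peak and whether it exceeds
    the detection threshold -- a software stand-in for the
    shift-register correlator blocks compared in the article.
    """
    n = len(code)
    corrs = np.array([np.dot(samples[i:i + n], code)
                      for i in range(len(samples) - n + 1)])
    peak = int(np.argmax(np.abs(corrs)))
    return peak, abs(corrs[peak]) >= threshold
```

    In an FPGA or SoC realization, each lag of this correlation maps onto one stage of a shift register plus a multiply-accumulate, which is why register count is a natural cost metric.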

  7. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  8. Flow/Damage Surfaces for Fiber-Reinforced Metals Having Different Periodic Microstructures

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Arnold, Steven M.; Iyer, Saiganesh K.

    1998-01-01

    Flow/damage surfaces can be defined in terms of stress, inelastic strain rate, and internal variables using a thermodynamics framework. A macroscale definition relevant to thermodynamics and usable in an experimental program is employed to map out surfaces of constant inelastic power in various stress planes. The inelastic flow of a model silicon carbide/ titanium composite system having rectangular, hexagonal, and square diagonal fiber packing arrays subjected to biaxial stresses is quantified by flow/damage surfaces that are determined numerically from micromechanics, using both finite element analysis and the generalized method of cells. Residual stresses from processing are explicitly included and damage in the form of fiber-matrix debonding under transverse tensile and/or shear loading is represented by a simple interface model. The influence of microstructural architecture is largest whenever fiber-matrix debonding is not an issue; for example in the presence of transverse compressive stresses. Additionally, as the fiber volume fraction increases, so does the effect of microstructural architecture. With regard to the micromechanics analysis, the overall inelastic flow predicted by the generalized method of cells is in excellent agreement with that predicted using a large number of displacement-based finite elements.

  9. Flow/Damage Surfaces for Fiber-Reinforced Metals having Different Periodic Microstructures

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Arnold, Steven M.; Iyer, Saiganesh K.

    1998-01-01

    Flow/damage surfaces can be defined in terms of stress, inelastic strain rate, and internal variables using a thermodynamics framework. A macroscale definition relevant to thermodynamics and usable in an experimental program is employed to map out surfaces of constant inelastic power in various stress planes. The inelastic flow of a model silicon carbide/titanium composite system having rectangular, hexagonal, and square diagonal fiber packing arrays subjected to biaxial stresses is quantified by flow/damage surfaces that are determined numerically from micromechanics, using both finite element analysis and the generalized method of cells. Residual stresses from processing are explicitly included and damage in the form of fiber-matrix debonding under transverse tensile and/or shear loading is represented by a simple interface model. The influence of microstructural architecture is largest whenever fiber-matrix debonding is not an issue; for example in the presence of transverse compressive stresses. Additionally, as the fiber volume fraction increases, so does the effect of microstructural architecture. With regard to the micromechanics analysis, the overall inelastic flow predicted by the generalized method of cells is in excellent agreement with that predicted using a large number of displacement-based finite elements.

  10. Silicon Nanophotonics for Many-Core On-Chip Networks

    NASA Astrophysics Data System (ADS)

    Mohamed, Moustafa

    The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency, a trend that has become problematic for performance in many-core architectures. Moreover, power consumption is increasing with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits on the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switched flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower than that of their electrical counterparts at intermediate and long link lengths. Finally, through wavelength division multiplexing, we can maintain the high-bandwidth trend without sacrificing throughput. This thesis focuses on realizing nanophotonics for communication in many-core architectures at different design levels, considering the reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other. The design involves the choice of devices, the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability issues not only degrade performance but can block communication entirely. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow. In the proposed flow, reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance; in fact, it can scale to a thousand cores with low overhead.

  11. Architecture and sedimentary processes on the mid-Norwegian continental slope: A 2.7 Myr record from extensive seismic evidence

    NASA Astrophysics Data System (ADS)

    Montelli, A.; Dowdeswell, J. A.; Ottesen, D.; Johansen, S. E.

    2018-07-01

    Quaternary architectural evolution and sedimentary processes on the mid-Norwegian continental slope are investigated using margin-wide three- and two-dimensional seismic datasets. Of ∼100,000 km3 of sediment delivered to the mid-Norwegian shelf and slope over the Quaternary, ∼75,000 km3 comprise the slope succession. The structural high of the Vøring Plateau, characterised by initially low (∼1-2°) slope gradients and reduced accommodation space, exerted a strong control over the long-term architectural evolution of the margin. Slope sediment fluxes were higher on the Vøring Plateau area, increasing up to ∼32 km3 ka-1 during the middle Pleistocene, when fast-flowing ice streams advanced to the palaeo-shelf edge. These rates of sediment delivery, which resulted in more rapid slope progradation on the Vøring Plateau, are high compared with the maximum of ∼7 km3 ka-1 in the adjacent sectors of the slope, which are characterised by steeper gradients (∼3-5°), more available accommodation space and smaller or no palaeo-ice streams on the adjacent shelves. In addition to the broad-scale architectural evolution, identification of more than 300 buried slope landforms provides an unprecedented level of detailed, process-based palaeoenvironmental reconstruction. Channels dominate the Early Pleistocene record (∼2.7-0.8 Ma), during which glacimarine sedimentation on the slope was influenced by dense bottom-water flow and turbidity currents. The morphologic signature of glacigenic debris-flows appears within the Middle-Late Pleistocene (∼0.8-0 Ma) succession. Their abundance increases towards the Late Pleistocene, marking a decreasing role for channelized turbidity currents and dense water flows. This broad-scale palaeo-environmental shift coincides with the intensification of Northern Hemisphere glaciations, highlighting first-order climate control on sedimentary processes on high-latitude continental slopes.

  12. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system that supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see; the decision as to which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.
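    The key architectural idea, module results managed per time step and per alternative scenario ("world"), can be sketched as a toy data-flow node. This is an illustrative reconstruction, not the authors' implementation:

```python
class DataFlowNode:
    """Minimal multiverse-style data-flow node (illustrative sketch).

    Results are cached per (world, time) key, so alternative scenarios
    and time steps coexist in one graph -- a toy version of the
    per-scenario data management the architecture above automates.
    """
    def __init__(self, func, *inputs):
        self.func = func        # func(world, t, *upstream_values)
        self.inputs = inputs    # upstream DataFlowNode objects
        self.cache = {}

    def value(self, world, t):
        key = (world, t)
        if key not in self.cache:
            # pull upstream values for the same world and time step
            args = [n.value(world, t) for n in self.inputs]
            self.cache[key] = self.func(world, t, *args)
        return self.cache[key]
```

    A usage example: a source node producing per-world data and a downstream node doubling it; requesting `value(world, t)` recomputes only what that world/time pair has not yet cached.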

  13. Investigation of the flow structure in thin polymer films using 3D µPTV enhanced by GPU

    NASA Astrophysics Data System (ADS)

    Cavadini, Philipp; Weinhold, Hannes; Tönsmann, Max; Chilingaryan, Suren; Kopmann, Andreas; Lewkowicz, Alexander; Miao, Chuan; Scharfer, Philip; Schabel, Wilhelm

    2018-04-01

    To understand the effects of inhomogeneous drying on the quality of polymer coatings, an experimental setup to resolve the occurring flow field throughout the drying film has been developed. Deconvolution microscopy is used to analyze the flow field in 3D and time. Since the dimension of the spatial component in the direction of the line-of-sight is limited compared to the lateral components, a multi-focal approach is used. Here, the beam of light is equally distributed on up to five cameras using cubic beam splitters. Adding a meniscus lens between each pair of camera and beam splitter and setting different distances between each camera and its meniscus lens creates multi-focality and allows one to increase the depth of the observed volume. Resolving the spatial component in the line-of-sight direction is based on analyzing the point spread function. The analysis of the PSF is computationally expensive and introduces a high complexity compared to traditional particle image velocimetry approaches. A new algorithm tailored to the parallel computing architecture of recent graphics processing units has been developed. The algorithm is able to process typical images in less than a second and has further potential to realize online analysis in the future. As a proof of principle, the flow fields occurring in thin polymer solutions drying at ambient conditions and at boundary conditions that force inhomogeneous drying are presented.

  14. Architecture, persistence and dissolution of a 20 to 45 year old trichloroethene DNAPL source zone.

    PubMed

    Rivett, Michael O; Dearden, Rachel A; Wealthall, Gary P

    2014-12-01

    A detailed field-scale investigation of processes controlling the architecture, persistence and dissolution of a 20 to 45 year old trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source zone located within a heterogeneous sand/gravel aquifer at a UK industrial site is presented. The source zone was partially enclosed by a 3-sided cell that allowed detailed longitudinal/fence transect monitoring along/across a controlled streamtube of flow induced by an extraction well positioned at the cell closed end. Integrated analysis of high-resolution DNAPL saturation (Sn) (from cores), dissolved-phase plume concentration (from multilevel samplers), tracer test and permeability datasets was undertaken. DNAPL architecture was determined from soil concentration data using partitioning calculations. DNAPL threshold soil concentrations and low Sn values calculated were sensitive to sorption assumptions. An outcome of this was the uncertainty in demarcation of secondary source zone diffused and sorbed mass that is distinct from trace amounts of low Sn DNAPL mass. The majority of source mass occurred within discrete lenses or pools of DNAPL associated with low permeability geological units. High residual saturation (Sn > 10-20%) and pools (Sn > 20%) together accounted for almost 40% of the DNAPL mass, but only 3% of the sampled source volume. High-saturation DNAPL lenses/pools were supported by lower permeability layers, but with DNAPL still primarily present within slightly more permeable overlying units. These lenses/pools exhibited approximately linearly declining Sn profiles with increasing elevation ascribed to preferential dissolution of the uppermost DNAPL. Bi-component partitioning calculations on soil samples confirmed that the dechlorination product cDCE (cis-dichloroethene) was accumulating in the TCE DNAPL. 
    Estimated cDCE mole fractions in the DNAPL increased towards the DNAPL interface with the uppermost mole fraction of 0.04 comparable to literature laboratory data. DNAPL dissolution yielded heterogeneous dissolved-phase plumes of TCE and its dechlorination products that exhibited orders of magnitude local concentration variation. TCE solubility concentrations were relatively localised, but coincident with high saturation DNAPL lens source areas. Biotic dechlorination in the source zone area, however, caused cDCE to be the dominant dissolved-phase plume. The conservative tracer test usefully confirmed the continuity of a permeable gravel unit at depth through the source zone. Although this unit offered significant opportunity for DNAPL bypassing and decreased timeframes for dechlorination, it still transmitted a significant proportion of the contaminant flux. This was attributed to dissolution of DNAPL-mudstone aquitard associated sources at the base of the continuous gravel as well as contaminated groundwater from surrounding less permeable sand and gravel horizons draining into this permeable conduit. The cell extraction well provided an integrated metric of source zone dissolution yielding a mean concentration of around 45% TCE solubility (taking into account dechlorination) that was equivalent to a DNAPL mass removal rate of 0.4 tonnes per annum over a 16 m2 cell cross sectional area of flow. This is a significant flux considering the source age and observed occurrence of much of the source mass within discrete lenses/pools. We advocate the need for further detailed field-scale studies on old DNAPL source zones that better resolve persistent pool/lens features and are of prolonged duration to assess the ageing of source zones. Such studies would further underpin the application of more surgical remediation technologies.
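    The partitioning calculations referred to above derive a threshold soil concentration beyond which DNAPL presence is inferred, by summing what the dissolved, sorbed and (above the water table) vapour phases can hold at equilibrium. The sketch below follows the standard equilibrium-partitioning form; the study's exact parameterization is not given here, and all parameter values are illustrative assumptions.

```python
def napl_threshold(c_sat, kd, rho_b, theta_w, theta_a=0.0, henry=0.0):
    """Soil concentration (mg/kg) above which NAPL presence is inferred.

    Equilibrium partitioning: at aqueous saturation the sorbed,
    dissolved and vapour phases together hold at most this much
    contaminant per kg of soil; any excess implies NAPL.

    c_sat   aqueous solubility (mg/L)
    kd      sorption coefficient (L/kg) -- the sensitive assumption
    rho_b   dry bulk density (kg/L)
    theta_w water-filled porosity (-)
    theta_a air-filled porosity (-), zero below the water table
    henry   dimensionless Henry's law constant
    """
    return c_sat * (kd * rho_b + theta_w + henry * theta_a) / rho_b
```

    With illustrative TCE-like inputs (solubility ~1100 mg/L, Kd 0.1 L/kg, bulk density 1.8 kg/L, water-filled porosity 0.3) the threshold is roughly 293 mg/kg; doubling the assumed Kd raises it by more than a third, which mirrors the sensitivity to sorption assumptions the study reports.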

  15. Architecture, persistence and dissolution of a 20 to 45 year old trichloroethene DNAPL source zone

    NASA Astrophysics Data System (ADS)

    Rivett, Michael O.; Dearden, Rachel A.; Wealthall, Gary P.

    2014-12-01

    A detailed field-scale investigation of processes controlling the architecture, persistence and dissolution of a 20 to 45 year old trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source zone located within a heterogeneous sand/gravel aquifer at a UK industrial site is presented. The source zone was partially enclosed by a 3-sided cell that allowed detailed longitudinal/fence transect monitoring along/across a controlled streamtube of flow induced by an extraction well positioned at the cell closed end. Integrated analysis of high-resolution DNAPL saturation (Sn) (from cores), dissolved-phase plume concentration (from multilevel samplers), tracer test and permeability datasets was undertaken. DNAPL architecture was determined from soil concentration data using partitioning calculations. DNAPL threshold soil concentrations and low Sn values calculated were sensitive to sorption assumptions. An outcome of this was the uncertainty in demarcation of secondary source zone diffused and sorbed mass that is distinct from trace amounts of low Sn DNAPL mass. The majority of source mass occurred within discrete lenses or pools of DNAPL associated with low permeability geological units. High residual saturation (Sn > 10-20%) and pools (Sn > 20%) together accounted for almost 40% of the DNAPL mass, but only 3% of the sampled source volume. High-saturation DNAPL lenses/pools were supported by lower permeability layers, but with DNAPL still primarily present within slightly more permeable overlying units. These lenses/pools exhibited approximately linearly declining Sn profiles with increasing elevation ascribed to preferential dissolution of the uppermost DNAPL. Bi-component partitioning calculations on soil samples confirmed that the dechlorination product cDCE (cis-dichloroethene) was accumulating in the TCE DNAPL. 
Estimated cDCE mole fractions in the DNAPL increased towards the DNAPL interface with the uppermost mole fraction of 0.04 comparable to literature laboratory data. DNAPL dissolution yielded heterogeneous dissolved-phase plumes of TCE and its dechlorination products that exhibited orders of magnitude local concentration variation. TCE solubility concentrations were relatively localised, but coincident with high saturation DNAPL lens source areas. Biotic dechlorination in the source zone area, however, caused cDCE to be the dominant dissolved-phase plume. The conservative tracer test usefully confirmed the continuity of a permeable gravel unit at depth through the source zone. Although this unit offered significant opportunity for DNAPL bypassing and decreased timeframes for dechlorination, it still transmitted a significant proportion of the contaminant flux. This was attributed to dissolution of DNAPL-mudstone aquitard associated sources at the base of the continuous gravel as well as contaminated groundwater from surrounding less permeable sand and gravel horizons draining into this permeable conduit. The cell extraction well provided an integrated metric of source zone dissolution yielding a mean concentration of around 45% TCE solubility (taking into account dechlorination) that was equivalent to a DNAPL mass removal rate of 0.4 tonnes per annum over a 16 m2 cell cross sectional area of flow. This is a significant flux considering the source age and observed occurrence of much of the source mass within discrete lenses/pools. We advocate the need for further detailed field-scale studies on old DNAPL source zones that better resolve persistent pool/lens features and are of prolonged duration to assess the ageing of source zones. Such studies would further underpin the application of more surgical remediation technologies.

  16. The Amazing Labyrinth: An Ancient-Modern Humanities Unit

    ERIC Educational Resources Information Center

    Ladensack, Carl

    1973-01-01

    The image of the labyrinth from mythology can find modern day parallelisms in architecture, art, music, and literature--all of which contributes to a humanities unit combining the old with the new. (MM)

  17. Influence of macromolecular architecture on necking in polymer extrusion film casting process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pol, Harshawardhan; Banik, Sourya; Azad, Lal Busher

    2015-05-22

    Extrusion film casting (EFC) is an important polymer processing technique that is used to produce several thousand tons of polymer films/coatings on an industrial scale. In this research, we are interested in understanding quantitatively how macromolecular chain architecture (for example, long chain branching (LCB) or molecular weight distribution (MWD or PDI)) influences the necking and thickness distribution of extrusion cast films. We have used different polymer resins of linear and branched molecular architecture to produce extrusion cast films under controlled experimental conditions. The necking profiles of the films were imaged and the velocity profiles during EFC were monitored using the particle tracking velocimetry (PTV) technique. Additionally, the temperature profiles were captured using IR thermography and thickness profiles were calculated. The experimental results are compared with predictions of the one-dimensional flow model of Silagy et al. [1], wherein the polymer resin rheology is modeled using molecular constitutive equations such as the Rolie-Poly (RP) and extended Pom-Pom (XPP) models. We demonstrate that the 1-D flow model containing the molecular constitutive equations provides new insights into the role of macromolecular chain architecture in film necking. [1] D. Silagy, Y. Demay, and J.-F. Agassant, Polym. Eng. Sci., 36, 2614 (1996).

  18. Ground Penetrating Radar Imaging of Ancient Clastic Deposits: A Tool for Three-Dimensional Outcrop Studies

    NASA Astrophysics Data System (ADS)

    Akinpelu, Oluwatosin Caleb

    The growing need for better definition of flow units and depositional heterogeneities in petroleum reservoirs and aquifers has stimulated a renewed interest in outcrop studies as reservoir analogues in the last two decades. Despite this surge in interest, outcrop studies remain largely two-dimensional; a major limitation to direct application of outcrop knowledge to the three dimensional heterogeneous world of subsurface reservoirs. Behind-outcrop Ground Penetrating Radar (GPR) imaging provides high-resolution geophysical data, which when combined with two dimensional architectural outcrop observation, becomes a powerful interpretation tool. Due to the high resolution, non-destructive and non-invasive nature of the GPR signal, as well as its reflection-amplitude sensitivity to shaly lithologies, three-dimensional outcrop studies combining two dimensional architectural element data and behind-outcrop GPR imaging hold significant promise with the potential to revolutionize outcrop studies the way seismic imaging changed basin analysis. Earlier attempts at GPR imaging on ancient clastic deposits were fraught with difficulties resulting from inappropriate field techniques and subsequent poorly-informed data processing steps. This project documents advances in GPR field methodology, recommends appropriate data collection and processing procedures and validates the value of integrating outcrop-based architectural-element mapping with GPR imaging to obtain three dimensional architectural data from outcrops. Case studies from a variety of clastic deposits: Whirlpool Formation (Niagara Escarpment), Navajo Sandstone (Moab, Utah), Dunvegan Formation (Pink Mountain, British Columbia), Chinle Formation (Southern Utah) and St. 
Mary River Formation (Alberta) demonstrate the usefulness of this approach for better interpretation of outcrop scale ancient depositional processes and ultimately as a tool for refining existing facies models, as well as a predictive tool for subsurface reservoir modelling. While this approach is quite promising for detailed three-dimensional outcrop studies, it is not an all-purpose panacea; thick overburden, poor antenna-ground coupling in rough terrains typical of outcrops, low penetration and rapid signal attenuation in mudstone and diagenetic clay- rich deposits often limit the prospects of this novel technique.

  19. Construction of hybrid photosynthetic units using peripheral and core antennae from two different species of photosynthetic bacteria: detection of the energy transfer from bacteriochlorophyll a in LH2 to bacteriochlorophyll b in LH1.

    PubMed

    Fujii, Ritsuko; Shimonaka, Shozo; Uchida, Naoko; Gardiner, Alastair T; Cogdell, Richard J; Sugisaki, Mitsuru; Hashimoto, Hideki

    2008-01-01

    Typical purple bacterial photosynthetic units consist of supra-molecular arrays of peripheral (LH2) and core (LH1-RC) antenna complexes. Recent atomic force microscopy pictures of photosynthetic units in intact membranes have revealed that the architecture of these units is variable (Scheuring et al. (2005) Biochim Biophys Acta 1712:109-127). In this study, we describe methods for the construction of heterologous photosynthetic units in lipid-bilayers from mixtures of purified LH2 (from Rhodopseudomonas acidophila) and LH1-RC (from Rhodopseudomonas viridis) core complexes. The architecture of these reconstituted photosynthetic units can be varied by controlling the ratio of added LH2 to core complexes. The arrangement of the complexes was visualized by electron microscopy in combination with Fourier analysis. The regular trigonal array of the core complexes seen in the native photosynthetic membrane could be regenerated in the reconstituted membranes by temperature cycling. In the presence of added LH2 complexes, this trigonal symmetry was replaced with orthorhombic symmetry. The small lattice lengths for the latter suggest that the constituent unit of the orthorhombic lattice is the LH2. Fluorescence and fluorescence-excitation spectroscopy were applied to the set of reconstituted membranes prepared with various proportions of LH2 to core complexes. Remarkably, even though the LH2 complexes contain bacteriochlorophyll a, and the core complexes contain bacteriochlorophyll b, it was possible to demonstrate energy transfer from LH2 to the core complexes. These experiments provide a first step along the path toward investigating how changing the architecture of purple bacterial photosynthetic units affects the overall efficiency of light-harvesting.

  20. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
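    The inverse differential kinematic equation solves J(q) q̇ = v for the joint rates given a desired end-effector velocity. A one-line NumPy reference version of that computation (the pseudo-inverse form below is a generalization for redundant or singular configurations, not the paper's exact systolic algorithm):

```python
import numpy as np

def joint_rates(jacobian, ee_velocity):
    """Inverse differential kinematics: joint rates from end-effector velocity.

    Solves J(q) * dq = v for dq. For a 6-DOF arm such as the PUMA the
    Jacobian is square, and this reduces to a linear solve; the
    pseudo-inverse also covers redundant and singular cases.
    """
    return np.linalg.pinv(jacobian) @ ee_velocity
```

    The systolic-array implementation described above pipelines exactly this matrix inversion and matrix-vector product across its 27 processing cells.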

  1. An integrated autonomous rendezvous and docking system architecture using Centaur modern avionics

    NASA Technical Reports Server (NTRS)

    Nelson, Kurt

    1991-01-01

    The avionics system for the Centaur upper stage is in the process of being modernized with the current state-of-the-art in strapdown inertial guidance equipment. This equipment includes an integrated flight control processor with a ring laser gyro based inertial guidance system. This inertial navigation unit (INU) uses two MIL-STD-1750A processors and communicates over the MIL-STD-1553B data bus. Commands are translated into load activation through a Remote Control Unit (RCU) which incorporates the use of solid state relays. Also, a programmable data acquisition system replaces separate multiplexer and signal conditioning units. This modern avionics suite is currently being enhanced through independent research and development programs to provide autonomous rendezvous and docking capability using advanced cruise missile image processing technology and integrated GPS navigational aids. A system concept was developed to combine these technologies in order to achieve a fully autonomous rendezvous, docking, and autoland capability. The current system architecture and the evolution of this architecture using advanced modular avionics concepts being pursued for the National Launch System are discussed.

  2. Learning to classify in large committee machines

    NASA Astrophysics Data System (ADS)

    O'kane, Dominic; Winther, Ole

    1994-10-01

    The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞, where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.

  3. Unit cell-based computer-aided manufacturing system for tissue engineering.

    PubMed

    Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo

    2012-03-01

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernest A. Mancini

    The University of Alabama, in cooperation with Texas A&M University, McGill University, Longleaf Energy Group, Strago Petroleum Corporation, and Paramount Petroleum Company, has undertaken an integrated, interdisciplinary geoscientific and engineering research project. The project is designed to characterize and model reservoir architecture, pore systems and rock-fluid interactions at the pore to field scale in Upper Jurassic Smackover reef and carbonate shoal reservoirs associated with varying degrees of relief on pre-Mesozoic basement paleohighs in the northeastern Gulf of Mexico. The project effort includes the prediction of fluid flow in carbonate reservoirs through reservoir simulation modeling which utilizes geologic reservoir characterization and modeling, and the prediction of carbonate reservoir architecture, heterogeneity and quality through seismic imaging. The primary goal of the project is to increase the profitability, producibility and efficiency of recovery of oil from existing and undiscovered Upper Jurassic fields characterized by reef and carbonate shoals associated with pre-Mesozoic basement paleohighs. Geoscientific reservoir property, geophysical seismic attribute, petrophysical property, and engineering property characterization has shown that reef (thrombolite) and shoal reservoir lithofacies developed on the flanks of high-relief crystalline basement paleohighs (Vocation Field example) and on the crest and flanks of low-relief crystalline basement paleohighs (Appleton Field example). The reef thrombolite lithofacies have higher reservoir quality than the shoal lithofacies due to overall higher permeabilities and greater interconnectivity. Thrombolite dolostone flow units, which are dominated by dolomite intercrystalline and vuggy pores, are characterized by a pore system comprised of a higher percentage of large-sized pores and larger pore throats.
Rock-fluid interactions (diagenesis) studies have shown that although the primary control on reservoir architecture and geographic distribution of Smackover reservoirs is the fabric and texture of the depositional lithofacies, diagenesis (chiefly dolomitization) is a significant factor that preserves and enhances reservoir quality. The evaporative pumping mechanism is favored to explain the dolomitization of the thrombolite doloboundstone and dolostone reservoir flow units at Appleton and Vocation Fields. Geologic modeling, reservoir simulation, and the testing and application of the resulting integrated geologic-engineering models have shown that little oil remains to be recovered at Appleton Field and a significant amount of oil remains to be recovered at Vocation Field through a strategic infill drilling program. The drive mechanisms for primary production in Appleton and Vocation Fields remain effective; therefore, the initiation of a pressure maintenance program or enhanced recovery project is not required at this time. The integrated geologic-engineering model developed for a low-relief paleohigh (Appleton Field) was tested for three scenarios involving the variables of present-day structural elevation and the presence/absence of potential reef thrombolite lithofacies. In each case, the predictions based upon the model were correct. From this modeling, the characteristics of the ideal prospect in the basement ridge play include a low-relief paleohigh associated with dendroidal/chaotic thrombolite doloboundstone and dolostone that has sufficient present-day structural relief so that these carbonates rest above the oil-water contact. Such a prospect was identified from the modeling, and it is located northwest of well Permit No. 3854B (Appleton Field) and south of well Permit No. 11030B (Northwest Appleton Field).

  5. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    PubMed Central

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2012-01-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
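The SAD full-grid-search criterion described above can be sketched on the CPU; this is a minimal NumPy illustration of the block-matching step, not the authors' CUDA implementation (which parallelizes the candidate loop across GPU threads and adds hardware interpolation for non-integer grids):

```python
import numpy as np

def sad_full_search(ref_block, target, search_radius):
    """Full-grid-search block matching with the summed-absolute-difference
    (SAD) criterion. `target` is the search window: the reference block's
    neighborhood, padded by `search_radius` pixels on every side. Returns
    the displacement (dy, dx) minimizing SAD, plus the SAD value."""
    bh, bw = ref_block.shape
    best_sad, best = np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y0, x0 = search_radius + dy, search_radius + dx
            cand = target[y0:y0 + bh, x0:x0 + bw]
            sad = int(np.abs(ref_block.astype(np.int64)
                             - cand.astype(np.int64)).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

A GPU version assigns each (dy, dx) candidate (or each block) to its own thread; the data-splitting question the paper studies is how to distribute blocks across multiple cards.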

  6. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses summed absolute difference (SAD) error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a non-integer search grid. The additional speedup for non-integer search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition we compared execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and Simplified Unsymmetrical multi-Hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  7. A model for the electronic support of practice-based research networks.

    PubMed

    Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D

    2012-01-01

    The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. 
By describing the network architecture in a language familiar to that used by software developers, the model provides an important foundation for the development of electronic support for essential PBRN research activities.

  8. Access control mechanism of wireless gateway based on open flow

    NASA Astrophysics Data System (ADS)

    Peng, Rong; Ding, Lei

    2017-08-01

    In order to realize access control at the wireless gateway and improve the access control of wireless gateway devices, an access control mechanism for an SDN architecture based on Open vSwitch is proposed. The mechanism exploits two features of the controller: centralized control and programmability. The controller installs access-control flow tables derived from the business logic, and Open vSwitch enforces the specific access control strategy described by those flow tables.
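The match-then-act lookup that a controller-installed flow table performs can be sketched as follows; this is illustrative pseudologic only, not the OpenFlow or Open vSwitch API, and the field names and actions are assumptions:

```python
# Toy flow table: first matching entry wins, wildcard entry gives default deny.
flow_table = [
    {"match": {"eth_src": "aa:bb:cc:dd:ee:ff"}, "action": "allow"},
    {"match": {}, "action": "drop"},  # wildcard: matches every packet
]

def lookup(packet, table):
    """Return the action of the first entry whose match fields all
    equal the packet's fields (an empty match is a wildcard)."""
    for entry in table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "drop"  # table-miss: default deny
```

In the proposed mechanism the table contents are not static as above: the controller pushes allow/drop entries to the gateway switch as its access policy changes.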

  9. Using Curriculum Architecture in Workplace Learning

    ERIC Educational Resources Information Center

    Kaufmann, Ken

    2005-01-01

    While learning is often designed and executed as if it stood alone, it rarely exists in isolation. If more than one learning event is offered, a relationship between units exists and should be defined. This relationship calls for an architecture of curriculum that defines audience, content, and delivery within a context of performance. Curriculum…

  10. [Architecture, budget and dignity].

    PubMed

    Morel, Etienne

    2012-01-01

    Drawing on its dynamic strengths, a psychiatric unit develops various projects and care techniques. In this framework, the institute director must make a number of choices with regard to architecture. Why renovate the psychiatry building? What financial investments are required? What criteria should be followed? What if the major argument was based on the respect of the patient's dignity?

  11. Microchannel cross load array with dense parallel input

    DOEpatents

    Swierkowski, Stefan P.

    2004-04-06

    An architecture or layout for microchannel arrays using T or Cross (+) loading for electrophoresis or other injection and separation chemistry that are performed in microfluidic configurations. This architecture enables a very dense layout of arrays of functionally identical shaped channels and it also solves the problem of simultaneously enabling efficient parallel shapes and biasing of the input wells, waste wells, and bias wells at the input end of the separation columns. One T load architecture uses circular holes with common rows, but not columns, which allows the flow paths for each channel to be identical in shape, using multiple mirror image pieces. Another T load architecture enables the access hole array to be formed on a biaxial, collinear grid suitable for EDM micromachining (square holes), with common rows and columns.

  12. Optical systolic solutions of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Neuman, C. P.; Casasent, D.

    1984-01-01

    The philosophy and data encoding possible in the systolic array optical processor (SAOP) were reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error-source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.

  13. Bricklaying Curriculum: Advanced Bricklaying Techniques. Instructional Materials. Revised.

    ERIC Educational Resources Information Center

    Turcotte, Raymond J.; Hendrix, Laborn J.

    This curriculum guide is designed to assist bricklaying instructors in providing performance-based instruction in advanced bricklaying. Included in the first section of the guide are units on customized or architectural masonry units; glass block; sills, lintels, and copings; and control (expansion) joints. The next two units deal with cut,…

  14. 22 CFR 1103.150 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MEXICO, UNITED STATES SECTION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY INTERNATIONAL BOUNDARY AND WATER COMMISSION, UNITED STATES AND MEXICO, UNITED STATES... extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any...

  15. 22 CFR 1103.150 - Program accessibility: Existing facilities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... MEXICO, UNITED STATES SECTION ENFORCEMENT OF NONDISCRIMINATION ON THE BASIS OF HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY INTERNATIONAL BOUNDARY AND WATER COMMISSION, UNITED STATES AND MEXICO, UNITED STATES... extent compelled by the Architectural Barriers Act of 1968, as amended (42 U.S.C. 4151-4157), and any...

  16. Subsurface architecture of Las Bombas volcano circular structure (Southern Mendoza, Argentina) from geophysical studies

    NASA Astrophysics Data System (ADS)

    Prezzi, Claudia; Risso, Corina; Orgeira, María Julia; Nullo, Francisco; Sigismondi, Mario E.; Margonari, Liliana

    2017-08-01

    The Plio-Pleistocene Llancanelo volcanic field is located in the south-eastern region of the province of Mendoza, Argentina. This wide back-arc lava plateau, with hundreds of monogenetic pyroclastic cones, covers a large area behind the active Andean volcanic arc. Here we focus on the northern Llancanelo volcanic field, particularly in Las Bombas volcano. Las Bombas volcano is an eroded, but still recognizable, scoria cone located in a circular depression surrounded by a basaltic lava flow, suggesting that Las Bombas volcano was there when the lava flow field formed and, therefore, the lava flow engulfed it completely. While this explanation seems reasonable, the common presence of similar landforms in this part of the field justifies the need to establish correctly the stratigraphic relationship between lava flow fields and these circular depressions. The main purpose of this research is to investigate Las Bombas volcano 3D subsurface architecture by means of geophysical methods. We carried out a paleomagnetic study and detailed topographic, magnetic and gravimetric land surveys. Magnetic anomalies of normal and reverse polarity and paleomagnetic results point to the occurrence of two different volcanic episodes. A circular low Bouguer anomaly was detected beneath Las Bombas scoria cone indicating the existence of a mass deficit. A 3D forward gravity model was constructed, which suggests that the mass deficit would be related to the presence of fracture zones below Las Bombas volcano cone, due to sudden degassing of younger magma beneath it, or to a single phreatomagmatic explosion. Our results provide new and detailed information about Las Bombas volcano subsurface architecture.

  17. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  18. Mapping Flows onto Networks to Optimize Organizational Processes

    DTIC Science & Technology

    2005-01-01

    And G. Porter, “Assessments of Simulated Performance of Alternative Architectures for Command and Control: The Role of Coordination”, Proceedings of...the 1999 Command & Control Research & Technology Symposium, NWC, Newport, RI, June 1999, pp. 123-143. [Iverson95] M. Iverson, F. Ozguner, G. Follen...Technology Symposium, NPS, Monterrey, CA, June, 2002. [Wu88] Min-You Wu, D. Gajski. “A Programming Aid for Hypercube Architectures.” The Journal of Supercomputing, 2 (1988), pp. 349-372.

  19. Interrelationships of petiole air canal architecture, water depth and convective air flow in Nymphaea odorata (Nymphaeaceae)

    USDA-ARS?s Scientific Manuscript database

    Premise of the study--Nymphaea odorata grows in water up to 2 m deep, producing fewer, larger leaves in deeper water. This species has a convective flow system that moves gases from younger leaves through submerged parts to older leaves, aerating submerged parts. Petiole air canals are in the conv...

  20. La Hispanidad en los Estados Unidos (Spanish Influence in the United States)

    ERIC Educational Resources Information Center

    Da Silva, Zenia Sacks

    1975-01-01

    This paper recounts a brief history of Spanish exploration in the territory of the United States and surveys Spanish influence in industry, agriculture, foods, architecture and vocabulary. (Text is in Spanish.) (CK)

  1. Unsupervised segmentation with dynamical units.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R

    2008-01-01

    In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
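Segmentation by phase similarity, as described above, can be illustrated with a generic coupled-phase (Kuramoto-style) model; this is a simplified stand-in for the paper's amplitude-phase units, with the coupling strength, step size, and initial phases chosen arbitrarily:

```python
import numpy as np

def kuramoto_step(theta, K, omega, dt=0.01):
    """One Euler step of Kuramoto phase dynamics:
    dtheta_i/dt = omega_i + (1/n) * sum_j K_ij * sin(theta_j - theta_i).
    A generic model of phase synchronization, not the paper's network."""
    n = len(theta)
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
    return np.mod(theta + dt * (omega + coupling), 2 * np.pi)

# Two strongly coupled units starting out of phase lock together,
# the way units belonging to one object would share a phase label.
theta = np.array([0.0, 2.5])
K = np.full((2, 2), 8.0)       # assumed uniform coupling strength
omega = np.zeros(2)            # identical natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, K, omega)
```

Units coding for different objects would carry weak or inhibitory coupling and hence settle into distinct phase clusters, which is the essence of phase-based segmentation.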

  2. Multistage WDM access architecture employing cascaded AWGs

    NASA Astrophysics Data System (ADS)

    El-Nahal, F. I.; Mears, R. J.

    2009-03-01

    Here we propose passive/active arrayed waveguide gratings (AWGs) with enhanced performance for system applications, mainly in novel access architectures employing cascaded AWG technology. Two technologies were considered to achieve space-wavelength switching in these networks: first, a passive AWG with a semiconductor optical amplifier array, and second, an active AWG. An active AWG carries an array of phase modulators on its arrayed-waveguide section, across which a programmable linear phase profile or a phase hologram is applied; this results in a wavelength shift at the output section of the AWG. These architectures can address up to 6912 customers employing only 24 wavelengths, coarsely separated by 1.6 nm. Simulation results obtained here demonstrate that cascaded-AWG access architectures have great potential in future local area networks. Furthermore, they indicate for the first time that active AWG architectures are more efficient than passive AWG architectures in routing signals to the destination optical network units.

  3. Reliability analysis of multicellular system architectures for low-cost satellites

    NASA Astrophysics Data System (ADS)

    Erlank, A. O.; Bridges, C. P.

    2018-06-01

    Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
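The lifetime model above rests on k-out-of-n reliability: the system survives while at least k of its n cells do. For independent, identical cells this is a binomial tail sum; a minimal sketch (the cell survival probability below is an assumed example value, not a figure from the paper):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent cells survive,
    each cell being reliable with probability p (binomial tail sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 2-out-of-3 cell arrangement beats a single unit with p = 0.9:
r_multicell = k_out_of_n_reliability(2, 3, 0.9)  # 0.972
r_single = k_out_of_n_reliability(1, 1, 0.9)     # 0.9
```

The paper's point is that this gain must be weighed against the power, mass, and volume overhead of the extra cells, which is where multicellular architectures either win or lose against plain redundancy.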

  4. Hierarchical micro-architectures of electrodes for energy storage

    NASA Astrophysics Data System (ADS)

    Yue, Yuan; Liang, Hong

    2015-06-01

    The design of electrodes for electrochemical energy storage devices, particularly lithium-ion batteries (LIBs) and supercapacitors (SCs), is of extraordinary importance in optimizing electrochemical performance. Regardless of the materials used, the architecture of electrodes is crucial for charge transport efficiency and electrochemical interactions. This report provides a critical review of prototype architectural designs and micro- and nano-scale material properties of electrodes for LIBs and SCs. An alternative classification criterion is proposed that divides reported hierarchical architectures into two categories: aligned and unaligned structures. The structures were evaluated, and the aligned architectures were found superior to the unaligned in the following characteristics: 1) highly organized charge pathways, 2) tunable interspaces between architecture units, and 3) current collectors in good electrical contact, prepared along with the electrodes. Based on these findings, challenges and potential routes to resolve them are provided for future development.

  5. Numerical and experimental characterization of a novel modular passive micromixer.

    PubMed

    Pennella, Francesco; Rossi, Massimiliano; Ripandelli, Simone; Rasponi, Marco; Mastrangelo, Francesco; Deriu, Marco A; Ridolfi, Luca; Kähler, Christian J; Morbiducci, Umberto

    2012-10-01

    This paper reports a new low-cost passive microfluidic mixer design, based on a replication of identical mixing units composed of microchannels with variable-curvature (clothoid) geometry. The micromixer presents a compact and modular architecture that can be easily fabricated using a simple and reliable fabrication process. The particular clothoid-based geometry enhances the mixing by inducing transversal secondary flows and recirculation effects. The roles of the relevant fluid mechanics mechanisms promoting mixing in this geometry were analysed using computational fluid dynamics (CFD) for Reynolds numbers ranging from 1 to 110. Mixing potency was quantitatively evaluated by calculating mixing efficiency, while particle dispersion was assessed through the lacunarity index. The results show that the secondary flow arrangement and recirculation effects are able to provide a mixing efficiency of 80% at Reynolds numbers above 70. In addition, the analysis of particle distributions establishes lacunarity as a powerful tool to quantify the dispersion of fluid particles and, in turn, the overall mixing. The microscopic laser-induced fluorescence (μLIF) technique was applied to fabricated micromixer prototypes to characterize mixing. The experimental results confirmed the mixing potency of the microdevice.
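Mixing efficiency of the kind quantified above is commonly computed from the standard deviation of the concentration field across a channel cross-section; a minimal sketch of one common definition (the paper's exact metric may differ):

```python
import numpy as np

def mixing_efficiency(c):
    """Mixing efficiency of a concentration field c with values in [0, 1]:
    1 - sigma / sigma_max, where sigma is the std of c and sigma_max is the
    std of a fully segregated field with the same mean. Returns 1 for a
    perfectly mixed field and 0 for a fully segregated one. Assumes the
    mean concentration is strictly between 0 and 1."""
    mean = np.mean(c)
    sigma_max = np.sqrt(mean * (1.0 - mean))
    return 1.0 - np.std(c) / sigma_max

segregated = np.concatenate([np.zeros(50), np.ones(50)])  # two unmixed streams
mixed = np.full(100, 0.5)                                 # uniform concentration
```

In a CFD workflow, c would be sampled on a cross-sectional grid downstream of each clothoid mixing unit, so efficiency can be tracked along the channel.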

  6. Modeling of Kidney Hemodynamics: Probability-Based Topology of an Arterial Network.

    PubMed

    Postnov, Dmitry D; Marsh, Donald J; Postnov, Dmitry E; Braunstein, Thomas H; Holstein-Rathlou, Niels-Henrik; Martens, Erik A; Sosnovtseva, Olga

    2016-07-01

    Through regulation of the extracellular fluid volume, the kidneys provide important long-term regulation of blood pressure. At the level of the individual functional unit (the nephron), pressure and flow control involves two different mechanisms that both produce oscillations. The nephrons are arranged in a complex branching structure that delivers blood to each nephron and, at the same time, provides a basis for an interaction between adjacent nephrons. The functional consequences of this interaction are not understood, and at present it is not possible to address this question experimentally. We provide experimental data and a new modeling approach to clarify this problem. To resolve details of microvascular structure, we collected 3D data from more than 150 afferent arterioles in an optically cleared rat kidney. Using these results together with published micro-computed tomography (μCT) data we develop an algorithm for generating the renal arterial network. We then introduce a mathematical model describing blood flow dynamics and nephron to nephron interaction in the network. The model includes an implementation of electrical signal propagation along a vascular wall. Simulation results show that the renal arterial architecture plays an important role in maintaining adequate pressure levels and the self-sustained dynamics of nephrons.

  7. Small organic molecule based flow battery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huskinson, Brian; Marshak, Michael; Aziz, Michael J.

    The invention provides an electrochemical cell based on a new chemistry for a flow battery for large-scale, e.g., grid-scale, electrical energy storage. Electrical energy is stored chemically at an electrochemical electrode by the protonation of small organic molecules called quinones to hydroquinones. The proton is provided by a complementary electrochemical reaction at the other electrode. These reactions are reversed to deliver electrical energy. A flow battery based on this concept can operate as a closed system. The flow battery architecture has scaling advantages over solid-electrode batteries for large-scale energy storage.

  8. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  9. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment, which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
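
    The core of such a simulator is firing graph nodes as their input tokens arrive, subject to the availability of functional units of the right type. The following toy sketch (names and timings are invented; this is not the ATAMM tool) schedules a small dataflow DAG on a heterogeneous set of units and reports each node's finish time.

```python
# Toy scheduler (invented names/timings; not the ATAMM tool): run a
# dataflow DAG on a heterogeneous pool of functional units, where each
# node may execute only on units of its required type.
GRAPH = {            # node: (unit_type, duration, predecessors)
    "read": ("io",  1, []),
    "fft":  ("dsp", 4, ["read"]),
    "gain": ("cpu", 2, ["read"]),
    "mix":  ("cpu", 3, ["fft", "gain"]),
}
UNITS = {"io": 1, "dsp": 1, "cpu": 1}   # available units per type

def simulate(graph, units):
    free = {t: [0.0] * n for t, n in units.items()}  # per-unit free times
    finish = {}
    remaining = dict(graph)
    while remaining:
        # fire any node whose predecessors have all produced their tokens
        name = next(n for n, (_, _, preds) in remaining.items()
                    if all(p in finish for p in preds))
        utype, dur, preds = remaining.pop(name)
        ready = max((finish[p] for p in preds), default=0.0)
        pool = free[utype]
        i = min(range(len(pool)), key=pool.__getitem__)  # earliest-free unit
        start = max(ready, pool[i])
        pool[i] = finish[name] = start + dur
    return finish

times = simulate(GRAPH, UNITS)
print(times)   # critical path read -> fft -> mix finishes at t = 8
```

    The homogeneous case falls out as the special configuration in which every node shares one unit type, mirroring the relationship the abstract describes.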

  10. Leadership Stability in Army Reserve Component Units

    DTIC Science & Technology

    2013-01-01

    or recognized, RC units could have more time because they may appear late in the force flow, particularly if AC units go earlier or the flow is...8, or deployment minus eight months), many new arrivals flowed into the unit, including many who would eventually deploy with the unit. Almost all...mobilization. Thus, those assigned are treated as 100 percent. To the right, we display the percentage (out of those assigned) who flowed into various

  11. Microfluidic routing of aqueous and organic flows at high pressures: fabrication and characterization of integrated polymer microvalve elements.

    PubMed

    Kirby, Brian J; Reichmuth, David S; Renzi, Ronald F; Shepodd, Timothy J; Wiedenman, Boyd J

    2005-02-01

    This paper presents the first systematic engineering study of the impact of chemical formulation and surface functionalization on the performance of free-standing microfluidic polymer elements used for high-pressure fluid control in glass microsystems. System design, chemical wet-etch processes, and laser-induced polymerization techniques are described, and parametric studies illustrate the effects of polymer formulation, glass surface modification, and geometric constraints on system performance parameters. In particular, this study shows that highly crosslinked and fluorinated polymers can overcome deficiencies in previously reported microvalve architectures, particularly limited solvent compatibility. Substrate surface modification is shown to be effective in reducing the friction of the polymer-glass interface and thereby facilitating valve actuation. A microchip one-way valve constructed using this architecture shows a 2 × 10^8 ratio of forward and backward flow rates at 7 MPa. This valve architecture is integrated on chip with minimal dead volumes (70 pl), and should be applicable to systems (including chromatography and chemical synthesis devices) requiring high pressures and solvents of varying polarity.

  12. Designing area optimized application-specific network-on-chip architectures while providing hard QoS guarantees.

    PubMed

    Khawaja, Sajid Gul; Mushtaq, Mian Hamza; Khan, Shoab A; Akram, M Usman; Jamal, Habib Ullah

    2015-01-01

    With the increasing density of transistors, the popularity of System on Chip (SoC) designs has grown exponentially, and the Network on Chip (NoC) framework has been adopted as their communication backbone. In this paper, we propose a methodology for designing area-optimized application-specific NoCs while providing hard Quality of Service (QoS) guarantees for real-time flows. The novelty of the proposed system lies in the derivation of a Mixed Integer Linear Programming (MILP) model, which is used to generate a resource-optimal NoC topology and architecture under traffic and QoS requirements. We also present the micro-architectural design features used for enabling traffic and latency guarantees, and discuss how the solution adapts to dynamic variations in the application traffic. The paper highlights the effectiveness of the proposed method by generating resource-efficient NoC solutions for both industrial and benchmark applications. The area-optimized results are generated in a few seconds by the proposed technique, without resorting to heuristics, even for an application with 48 traffic flows.
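
    The optimization the paper formulates as an MILP can be caricatured by exhaustive search at toy scale. The sketch below (all flows, costs, and the restriction to mesh topologies are invented for illustration) picks the smallest mesh whose hop latency meets every flow's deadline.

```python
from itertools import product

# Toy stand-in for the paper's MILP (not reproduced here): exhaustively
# search small mesh NoCs for the cheapest one whose worst-case hop
# latency meets every flow's deadline. All numbers are invented.
FLOWS = [(0, 3, 12), (1, 2, 8), (2, 0, 10)]   # (source, dest, deadline in cycles)
CYCLES_PER_HOP = 3
AREA_PER_ROUTER = 5.0

def hops(src, dst, cols):
    # Manhattan distance in a mesh, assuming XY routing.
    sr, sc = divmod(src, cols)
    dr, dc = divmod(dst, cols)
    return abs(sr - dr) + abs(sc - dc)

def cheapest_mesh(flows, max_dim=4):
    best = None
    for rows, cols in product(range(1, max_dim + 1), repeat=2):
        n = rows * cols
        if n <= max(max(s, d) for s, d, _ in flows):
            continue   # mesh too small to place all endpoints
        if all(hops(s, d, cols) * CYCLES_PER_HOP <= deadline
               for s, d, deadline in flows):
            area = n * AREA_PER_ROUTER
            if best is None or area < best[0]:
                best = (area, rows, cols)
    return best

best = cheapest_mesh(FLOWS)
print(best)   # smallest feasible mesh and its router area
```

    An MILP solver reaches the same kind of optimum without enumeration, which is what makes the approach scale to the 48-flow applications mentioned above.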

  14. Information Architecture for Quality Management Support in Hospitals.

    PubMed

    Rocha, Álvaro; Freixo, Jorge

    2015-10-01

    Quality Management occupies a strategic role in organizations, and the adoption of computer tools within an aligned information architecture facilitates the challenge of doing more with less, promoting the development of a competitive edge and sustainability. A formal Information Architecture (IA) lends organizations enhanced knowledge but, above all, favours management. It simplifies the reinvention of processes, the reformulation of procedures, bridging, and cooperation amongst the multiple actors of an organization. In the present investigation we planned the IA for the Quality Management System (QMS) of a hospital, which allowed us to develop and implement QUALITUS, a computer application developed to support Quality Management in a Hospital Unit. This solution translated into significant gains for the Hospital Unit under study, accelerating the quality management process and reducing the number of tasks, the number of documents, the information to be filled in, and information errors, amongst others.

  15. Three-dimensional facies architecture of the Salem Limestone (middle Mississippian), Eastern Margin of Illinois basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadeem, A.; Keith, B.D.; Thompson, T.A.

    Mapping of sedimentary surfaces in the Middle Mississippian Salem Limestone exposed on sawed quarry walls in south-central Indiana has revealed a hierarchy of depositional units representative of the extremely dynamic hydrographic regime of the upper shoreface zone. The depositional units on the scale of microform and mesoform are represented by the microfacies and the facies, respectively. Based on their hierarchy, genetically related depositional units and associated bounding surfaces were grouped together to construct four architectural packages (APs) at the scale of mesoforms. AP-I is dominantly an echinoderm- and bryozoan-rich grainstone and consists of bedforms ranging from small ripples bounded by first-order surfaces to two- and three-dimensional megaripples bounded by second-order surfaces. It formed as part of a giant ramp (asymmetric wavefield) within the intrashoal channel setting. AP-II, also a skeletal grainstone, is a complex of giant sandwaves that moved into the area under the influence of a storm and partly filled the basal channel form of AP-I. Large avalanche foresets with tangential toesets prevail. AP-III is a dark-gray, spatially discontinuous skeletal grainstone to packstone that laterally grades into a skeletal packstone to wackestone. It locally developed overhangs, rip-ups, and hardground on its upper surface. AP-IV is a skeletal and oolitic grainstone formed of tabular two-dimensional megaripples (planar cross-beds) and three-dimensional oscillatory megaripples (trough cross-beds). These architectural packages, based on the bedform architecture and micro- and mesoscale compositional changes, can be used to characterize micro-, meso-, and macroscale heterogeneities. Models of facies architecture from this and similar outcrop studies can be applied to the subsurface Salem reservoirs in the Illinois Basin using cores.

  16. Vectorization of a particle code used in the simulation of rarefied hypersonic flow

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1990-01-01

    A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
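
    A typical first step in such a reformulation is reorganizing the particle data so that per-cell work touches contiguous memory, which vector units can stream through. The counting-sort sketch below illustrates the idea in general terms; it is not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): a counting sort that
# reorders particles by mesh cell, so per-cell work (e.g. collision
# sampling) runs over contiguous slices that a vector unit can stream.
def sort_by_cell(positions, ncells, cell_size):
    cell_of = [int(x / cell_size) for x in positions]
    counts = [0] * ncells
    for c in cell_of:
        counts[c] += 1
    starts = [0] * ncells              # prefix sum: first slot per cell
    for c in range(1, ncells):
        starts[c] = starts[c - 1] + counts[c - 1]
    ordered = [0.0] * len(positions)
    slot = list(starts)
    for x, c in zip(positions, cell_of):
        ordered[slot[c]] = x
        slot[c] += 1
    return ordered, starts, counts

pos, starts, counts = sort_by_cell([0.9, 0.1, 1.7, 0.3, 1.1],
                                   ncells=2, cell_size=1.0)
print(pos, counts)   # cell-0 particles first, then cell-1 particles
```

    After the sort, the particles of cell `c` occupy the slice `ordered[starts[c] : starts[c] + counts[c]]`, so per-cell loops become batched operations over contiguous data.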

  17. A Scalable Architecture for Improving the Timeliness and Relevance of Cyber Incident Notifications

    DTIC Science & Technology

    2011-04-01

    the flow of communications is reasonably straightforward, but information often flows at the speed of human receipt and processing. Alberts & Hayes...these flows to the missions and people consuming them. Camus [30] can perform this through comparing logs to Lightweight Directory Access Protocol...publishing.af.mil/shared/media/epubs/AFI33-138.pdf. [18] Alberts, D.S. and Hayes, R.E. (2003) “Power to the Edge: Command… Control… in the

  18. Reliability, Maintenance and Risk Assessment in Naval Architecture and Marine Engineering Education in the US.

    ERIC Educational Resources Information Center

    Inozu, Bahadir; Ayyub, Bilal A.

    1999-01-01

    Examines the current status of existing curricula, accreditation requirements, and new developments in Naval Architecture and Marine Engineering education in the United States. Discusses the emerging needs of the maritime industry in light of advances in information technology and movement toward risk-based, reliability-centered rule making in the…

  19. Intelligent Control and Health Monitoring. Chapter 3

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Kumar, Aditya; Mathews, H. Kirk; Rosenfeld, Taylor; Rybarik, Pavol; Viassolo, Daniel E.

    2009-01-01

    Advanced model-based control architecture overcomes the limitations of state-of-the-art engine control and provides the potential of virtual sensors, for example for thrust and stall margin. "Tracking filters" are used to adapt the control parameters to actual conditions and to individual engines. For health monitoring, standalone monitoring units will be used for on-board analysis to determine the general engine health and to detect and isolate sudden faults. Adaptive models open up the possibility of adapting the control logic to maintain desired performance in the presence of engine degradation, or to accommodate any faults. Improved and new sensors are required to allow sensing at stations within the engine gas path that are currently not instrumented, due in part to the harsh conditions including high operating temperatures, and to allow additional monitoring of vibration, mass flows and energy properties, exhaust gas composition, and gas path debris. The environmental and performance requirements for these sensors are summarized.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, Dheepak

    This paper is an overview of the Power System Simulation Toolbox (psst), an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC), and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs), and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that makes it useful for researchers to customize, expand, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for deployment on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks it against standard IEEE test cases to show the advanced features and performance of the toolbox.
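
    The DCOPF building block that psst solves can be illustrated at toy scale: solve the DC power-flow equations B'θ = P for bus angles, then recover line flows. The sketch below uses a hypothetical 3-bus network with a hand-rolled 2x2 solve; psst's own API is not shown.

```python
# Minimal DC power flow sketch (illustrative; this is not psst's API).
# Solve B'·theta = P for bus angles on a hypothetical 3-bus network,
# then recover each line flow as (theta_i - theta_j) / x_ij.
LINES = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]   # (from, to, reactance)
P = {1: -1.0, 2: -0.5}                            # injections; bus 0 is slack

b = {(i, j): 1.0 / x for i, j, x in LINES}        # line susceptances
# Reduced susceptance matrix over the non-slack buses 1 and 2.
B = [[b[(0, 1)] + b[(1, 2)], -b[(1, 2)]],
     [-b[(1, 2)], b[(1, 2)] + b[(0, 2)]]]

# Solve the 2x2 system B·theta = P by Cramer's rule.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta = {0: 0.0,
         1: (P[1] * B[1][1] - B[0][1] * P[2]) / det,
         2: (B[0][0] * P[2] - P[1] * B[1][0]) / det}

flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in LINES}
print(theta, flows)
```

    The recovered flows balance the injections at every bus: the 1.0 p.u. load at bus 1 arrives entirely over line (0, 1), and the 0.5 p.u. load at bus 2 over line (0, 2). A DCOPF adds generation costs and limits on top of exactly these constraints.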

  1. Bridging a divide: architecture for a joint hospital-primary care data warehouse.

    PubMed

    An, Jeff; Keshavjee, Karim; Mirza, Kashif; Vassanji, Karim; Greiver, Michelle

    2015-01-01

    Healthcare costs are driven by a surprisingly small number of patients. Predicting who is likely to require care in the near future could help reduce costs by pre-empting use of expensive health care resources such as emergency departments and hospitals. We describe the design of an architecture for a joint hospital-primary care data warehouse (JDW) that can monitor the effectiveness of in-hospital interventions in reducing readmissions and predict which patients are most likely to be admitted to hospital in the near future. The design identifies the key governance elements, the architectural principles, the business case, the privacy architecture, future work flows, the IT infrastructure, the data analytics and the high level implementation plan for realization of the JDW. This architecture fills a gap in bridging data from two separate hospital and primary care organizations, not a single managed care entity with multiple locations. The JDW architecture design was well received by the stakeholders engaged and by senior leadership at the hospital and the primary care organization. Future plans include creating a demonstration system and conducting a pilot study.

  2. Development and Flight Testing of an Adaptive Vehicle Health-Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Taylor, B. Douglas; Brett, Rube R.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.

    2002-01-01

    Ongoing development and testing of an adaptable vehicle health-monitoring architecture is presented. The architecture is being developed for a fleet of vehicles. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained expert system. The expert system is parameterized, which makes it adaptable to be trained both to a user's subjective reasoning and to existing quantitative analytic tools. Communication between all levels is done with wireless radio frequency interfaces. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, and the sampling rate and duration for each sensor. The architecture provides a framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In this framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear. The flight tests were performed to validate the wireless radio frequency communication capabilities of the system; the hardware design; command and control; software operation; and data acquisition, storage, and retrieval.
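
    The tributary analysis described above, in which each level forwards only reduced results upward, can be sketched as follows. All names, baselines, and thresholds here are invented for illustration; this is not the flight system's software.

```python
import statistics

# Sketch of the tributary idea: each level reduces its inputs to a small
# summary and forwards only that summary upward. Names, baselines, and
# thresholds are invented; this is not the flight system's software.
def remote_unit(samples, baseline, tol=0.2):
    # Lowest level: reduce raw sensor samples to a deviation flag.
    mean = statistics.fmean(samples)
    return {"mean": mean, "deviates": abs(mean - baseline) > tol}

def vehicle_unit(remote_results):
    # Middle level: summarize across the vehicle's acquisition units.
    return {"n_deviating": sum(r["deviates"] for r in remote_results)}

def terminal_unit(vehicle_summaries):
    # Top level: fleet-wide trend from per-vehicle summaries only.
    return sum(v["n_deviating"] for v in vehicle_summaries)

gear = remote_unit([1.3, 1.4, 1.5], baseline=1.0)    # drifted channel
wing = remote_unit([0.95, 1.05, 1.0], baseline=1.0)  # nominal channel
fleet = terminal_unit([vehicle_unit([gear, wing])])
print(fleet)   # → 1
```

    Only the small summaries cross each level boundary, which is the telemetry-saving property the abstract emphasizes: the raw samples never leave the remote unit.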

  3. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the user's definitions. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
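
    The virtual-component idea can be sketched as a wrapper that presents an assembly of data-flow components as a single unit with a built-in test. The class names below are ours, not the paper's API.

```python
# Sketch of the virtual-component idea (class names are ours, not the
# paper's API): wrap an assembly of data-flow components so a built-in
# test can exercise it as if it were a single component.
class Component:
    def __init__(self, fn):
        self.fn = fn

    def process(self, data):
        return self.fn(data)

class VirtualComponent:
    """Presents a chain of components as one testable unit."""

    def __init__(self, *components):
        self.components = components

    def process(self, data):
        for c in self.components:    # data flows stage to stage
            data = c.process(data)
        return data

    def built_in_test(self, cases):
        # Run (input, expected) pairs through the assembled flow.
        return all(self.process(i) == want for i, want in cases)

scale = Component(lambda xs: [2 * x for x in xs])
total = Component(sum)
vc = VirtualComponent(scale, total)
print(vc.built_in_test([([1, 2, 3], 12), ([], 0)]))   # → True
```

    Because the test cases address the assembled flow rather than each component in isolation, they exercise exactly the inter-component data paths that integration testing targets.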

  4. GPU and APU computations of Finite Time Lyapunov Exponent fields

    NASA Astrophysics Data System (ADS)

    Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros

    2012-03-01

    We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
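
    As a minimal illustration of the FTLE computation itself (unrelated to the GPU implementation above), the sketch below advects tracers in a simple saddle flow u = (x, -y), finite-differences the flow map to obtain its gradient, and takes the leading singular value; for this flow the forward FTLE is exactly 1.

```python
import math

# Minimal FTLE sketch (illustrative; unrelated to the GPU code above).
# Velocity field: a simple saddle u = (x, -y), whose forward FTLE over
# a unit horizon is exactly 1.
T, DT, STEPS = 1.0, 1e-3, 1000   # integration horizon and Euler step

def flow_map(x, y):
    for _ in range(STEPS):       # forward-Euler tracer advection
        x, y = x + DT * x, y - DT * y
    return x, y

def ftle(x, y, h=1e-4):
    # Flow-map gradient by central differences of neighboring tracers.
    xr, yr = flow_map(x + h, y); xl, yl = flow_map(x - h, y)
    xu, yu = flow_map(x, y + h); xd, yd = flow_map(x, y - h)
    a, b = (xr - xl) / (2 * h), (xu - xd) / (2 * h)
    c, d = (yr - yl) / (2 * h), (yu - yd) / (2 * h)
    # Largest eigenvalue of the Cauchy-Green tensor F^T F.
    g11, g12, g22 = a * a + c * c, a * b + c * d, b * b + d * d
    lam = 0.5 * (g11 + g22 + math.hypot(g11 - g22, 2 * g12))
    return math.log(math.sqrt(lam)) / T

print(round(ftle(0.3, 0.2), 2))   # approaches 1.0 as DT -> 0
```

    The neighboring-tracer differences are exactly the resampling step whose memory traffic dominates on real flow fields, which is why the abstract's GPU mapping pays off.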

  5. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  6. Repartitioning Strategies for Massively Parallel Simulation of Reacting Flow

    NASA Astrophysics Data System (ADS)

    Pisciuneri, Patrick; Zheng, Angen; Givi, Peyman; Labrinidis, Alexandros; Chrysanthis, Panos

    2015-11-01

    The majority of parallel CFD simulators partition the domain into equal regions and assign the calculations for a particular region to a unique processor. This type of domain decomposition is vital to the efficiency of the solver. However, as the simulation develops, the workload among the partitions often becomes uneven (e.g. through adaptive mesh refinement or chemically reacting regions), and a new partition should be considered. The process of repartitioning adjusts the current partition to evenly distribute the load again. We compare two repartitioning tools: Zoltan, an architecture-agnostic graph repartitioner developed at Sandia National Laboratories; and Paragon, an architecture-aware graph repartitioner developed at the University of Pittsburgh. The comparative assessment is conducted via simulation of the Taylor-Green vortex flow with chemical reaction.
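
    The effect of repartitioning can be illustrated with a greedy toy that ignores what real repartitioners such as Zoltan and Paragon must also weigh (communication and migration cost): reassign cell workloads, heaviest first, to the currently least-loaded processor.

```python
# Greedy toy (not Zoltan's or Paragon's algorithm, which also weigh
# communication and migration costs): reassign per-cell workloads,
# heaviest first, to the currently least-loaded processor.
def rebalance(cell_costs, nprocs):
    parts = [[] for _ in range(nprocs)]
    loads = [0.0] * nprocs
    for cost in sorted(cell_costs, reverse=True):
        i = loads.index(min(loads))   # least-loaded processor so far
        parts[i].append(cost)
        loads[i] += cost
    return parts, loads

# Workload after, say, local mesh refinement: a few cells got expensive.
costs = [9, 5, 4, 3, 2, 1, 1, 1]
parts, loads = rebalance(costs, nprocs=3)
imbalance = max(loads) / (sum(loads) / len(loads))
print(loads, round(imbalance, 3))   # near-even loads after repartitioning
```

    Starting from a skewed assignment (one processor holding the cost-9 cell plus several others), the greedy pass brings the imbalance down to about 1.04, i.e. the slowest processor carries only 4% more than the average.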

  7. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  8. CLARA: CLAS12 Reconstruction and Analysis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyurjyan, Vardan; Matta, Sebastian Mancilla; Oyarzun, Ricardo

    2016-11-01

    In this paper we present the SOA-based CLAS12 event Reconstruction and Analysis (CLARA) framework. The CLARA design focuses on two main traits: real-time data stream processing, and a service-oriented architecture (SOA) in a flow-based programming (FBP) paradigm. The data-driven, data-centric architecture of CLARA presents an environment for developing agile, elastic, multilingual data processing applications. The CLARA framework presents solutions capable of processing large volumes of data interactively and substantially faster than batch systems.
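
    Flow-based stream processing of this kind can be sketched as a chain of independent stages through which events stream one at a time. The service names below are invented; this is not the CLARA API.

```python
# Sketch of flow-based stream processing in the spirit described above
# (service names are invented; this is not the CLARA API). Each stage is
# an independent service; events stream through one at a time.
def decode(raws):
    for raw in raws:
        yield {"id": raw, "hits": raw % 5 + 1}

def track(events):
    for ev in events:
        ev["tracks"] = ev["hits"] // 2
        yield ev

def select(events, min_tracks=1):
    for ev in events:
        if ev["tracks"] >= min_tracks:
            yield ev

# Compose the chain; nothing runs until the stream is consumed, so the
# pipeline processes events as a stream rather than in batches.
stream = select(track(decode(range(10))))
ids = [ev["id"] for ev in stream]
print(ids)
```

    Because each stage only pulls from the previous one, events flow through end to end as they arrive rather than waiting for a batch to fill, which is the stream-versus-batch advantage the abstract claims.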

  9. Leaf-architectured 3D Hierarchical Artificial Photosynthetic System of Perovskite Titanates Towards CO2 Photoreduction Into Hydrocarbon Fuels

    PubMed Central

    Zhou, Han; Guo, Jianjun; Li, Peng; Fan, Tongxiang; Zhang, Di; Ye, Jinhua

    2013-01-01

    The development of an “artificial photosynthetic system” (APS) having both the analogous important structural elements and reaction features of photosynthesis to achieve solar-driven water splitting and CO2 reduction is highly challenging. Here, we demonstrate a design strategy for a promising 3D APS architecture as an efficient mass flow/light harvesting network relying on the morphological replacement of a concept prototype, the leaf's 3D architecture, into perovskite titanates for CO2 photoreduction into hydrocarbon fuels (CO and CH4). The process uses artificial sunlight as the energy source, water as an electron donor and CO2 as the carbon source, mimicking what real leaves do. To our knowledge this is the first example utilizing biological systems as “architecture-directing agents” for APS towards CO2 photoreduction, which hints at a more general principle for APS architectures with a great variety of optimized biological geometries. This research would have great significance for the potential realization of a global carbon-neutral cycle. PMID:23588925

  10. Lithological architecture and petrography of the Mako Birimian greenstone belt, Kédougou-Kéniéba Inlier, eastern Senegal

    NASA Astrophysics Data System (ADS)

    Dabo, Moussa; Aïfa, Tahar; Gning, Ibrahima; Faye, Malick; Ba, Mamadou Fallou; Ngom, Papa Malick

    2017-07-01

    The new lithological and petrographic data obtained in the Mako sector are analyzed in the light of the geochemical data available in the literature. The belt consists of ultramafic and mafic rocks of tholeiitic affinity associated with intermediate and felsic rocks of calc-alkaline affinity and with intercalations of sedimentary rocks. The whole unit is intruded by Eburnean granitoids and affected by greenschist- to amphibolite-facies metamorphism related to high-grade hydrothermalism. It consists of: (i) ultramafic rocks composed of a fractional crystallization succession of lherzolites, wehrlites and pyroxenites with mafic rock inclusions; (ii) layered, isotropic and pegmatitic metagabbros which gradually pass to metabasalts at the top; (iii) massive and pillowed metabasalts with locally tapered vesicles, completely or partially filled with quartzo-feldspathic minerals; and (iv) quartzites locally overlying the mafic rocks and thus forming the top of the lower unit. This ultramafic-mafic lower unit presents a tholeiitic affinity close to OIB or N-MORB. It represents the Mako Ophiolitic Complex (MOC), a fragment of Birimian lithospheric crust. The upper unit is a mixed volcanic complex arranged in the tectonic corridors. From bottom to top it comprises: (i) andesitic, and (ii) rhyodacitic and rhyolitic lava flows and tuffs, respectively. They present a calc-alkaline affinity typical of active margins. Three generations of Eburnean granitoids are recognized: (i) early (2215-2160 Ma); (ii) syn-tectonic (2150-2100 Ma); and (iii) post-tectonic (2090-2040 Ma). The lithological succession and the geochemical and metamorphic characteristics of these units point to an ophiolitic supra-subduction zone.

  11. The elements of a comprehensive education for future architectural acousticians

    NASA Astrophysics Data System (ADS)

    Wang, Lily M.

    2005-04-01

    Curricula for students who seek to become consultants of architectural acoustics or researchers in the field are few in the United States and in the world. This paper will present the author's opinions on the principal skills a student should obtain from a focused course of study in architectural acoustics. These include: (a) a solid command of math and wave theory, (b) fluency with digital signal processing techniques and sound measurement equipment, (c) expertise in using architectural acoustic software with an understanding of its limitations, (d) knowledge of building mechanical systems, (e) an understanding of human psychoacoustics, and (f) an appreciation for the artistic aspects of the discipline. Additionally, writing and presentation skills should be emphasized and participation in professional societies encouraged. Armed with such abilities, future architectural acousticians will advance the field significantly.

  12. Post-explant visualization of thrombi in outflow grafts and their junction to a continuous-flow total artificial heart using a high-definition miniaturized camera.

    PubMed

    Karimov, Jamshid H; Horvath, David; Sunagawa, Gengo; Byram, Nicole; Moazami, Nader; Golding, Leonard A R; Fukamachi, Kiyotaka

    2015-12-01

    Post-explant evaluation of the continuous-flow total artificial heart in preclinical studies can be extremely challenging because of the device's unique architecture. Determining the exact location of tissue regeneration, neointima formation, and thrombus is particularly important. In this report, we describe our first successful experience with visualizing the Cleveland Clinic continuous-flow total artificial heart using a custom-made high-definition miniature camera.

  13. Extensible packet processing architecture

    DOEpatents

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines, and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.
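
    The claim-and-mark scheme can be sketched as follows; the data structures are ours, invented for illustration, not the patent's implementation. Each engine claims unowned flows up to its capacity and marks packets so that the other engines pass those flows through.

```python
# Sketch of the claim-and-mark idea (data structures invented; not the
# patent's implementation). Engines in series each claim unowned flows
# up to a capacity; marking a packet tells the other engines to pass.
class Engine:
    def __init__(self, name, capacity=1):
        self.name = name
        self.capacity = capacity
        self.claimed = set()

    def handle(self, packet):
        flow = packet["flow"]
        if flow in self.claimed:                     # our flow: process it
            packet["claimed_by"] = self.name         # re-mark for the others
            packet["processed_by"] = self.name
        elif (packet.get("claimed_by") is None
              and len(self.claimed) < self.capacity):
            self.claimed.add(flow)                   # claim this new flow
            packet["claimed_by"] = self.name
            packet["processed_by"] = self.name
        return packet

engines = [Engine("E0"), Engine("E1")]
packets = [{"flow": f} for f in ("a", "b", "a")]
for pkt in packets:
    for eng in engines:          # pass each packet along the series bus
        eng.handle(pkt)

print([(p["flow"], p["processed_by"]) for p in packets])
```

    Flow "a" is claimed by the first engine and flow "b" overflows to the second, so later packets of each flow are processed by the engine that claimed it, distributing the flows without any central dispatcher.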

  14. Decentralized and Modular Electrical Architecture

    NASA Astrophysics Data System (ADS)

    Elisabelar, Christian; Lebaratoux, Laurence

    2014-08-01

    This paper presents the studies made on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC), and standard input/output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can be dispatched throughout the satellite by using MicroRTUs. CNES proposes a similar approach to the MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture is made up of several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very small volume, low consumption and low cost. The standard BTCU models are developed and qualified in a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for PYRO actuations and analogue acquisitions. The design of the BTCU is made so as to be easily adaptable to all types of electrical interface needs. Extension of this concept is envisaged for the power conditioning and distribution unit, and a modular PCDU based on the BRIO concept is proposed.

  15. Isolation and identification of oligomers from partial degradation of lime fruit cutin.

    PubMed

    Tian, Shiying; Fang, Xiuhua; Wang, Weimin; Yu, Bingwu; Cheng, Xiaofang; Qiu, Feng; Mort, Andrew J; Stark, Ruth E

    2008-11-12

    Complementary degradative treatments with low-temperature hydrofluoric acid and methanolic potassium hydroxide have been used to investigate the protective biopolymer cutin from Citrus aurantifolia (lime) fruits, augmenting prior enzymatic and chemical strategies to yield a more comprehensive view of its molecular architecture. Analysis of the resulting soluble oligomeric fragments with one- and two-dimensional NMR and MS methods identified a new dimer and three trimeric esters of primary alcohols based on 10,16-dihydroxyhexadecanoic acid and 10-oxo-16-hydroxyhexadecanoic acid units. Whereas only 10-oxo-16-hydroxyhexadecanoic acid units were found in the oligomers from hydrofluoric acid treatments, the dimer and trimer products isolated to date using diverse degradative methods included six of the seven possible stoichiometric ratios of monomer units. A novel glucoside-linked hydroxyfatty acid tetramer was also identified provisionally, suggesting that the cutin biopolymer can be bound covalently to the plant cell wall. Although the current findings suggest that the predominant molecular architecture of this protective polymer in lime fruits involves esters of primary and secondary alcohols based on long-chain hydroxyfatty acids, the possibility of additional cross-linking to enhance structural integrity is underscored by these and related findings of nonstandard cutin molecular architectures.

  16. All-memristive neuromorphic computing with level-tuned neurons

    NASA Astrophysics Data System (ADS)

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-01

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.
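
    The correlation-detection task described above can be illustrated with a minimal software sketch. This is an illustrative abstraction, not the authors' phase-change hardware: a leaky integrate-and-fire neuron with plastic synapses, where an STDP-like rule potentiates synapses that were recently active when the neuron fires, so synapses driven by a shared (correlated) source strengthen relative to background inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_corr, steps = 20, 10, 5000
w = np.full(n_inputs, 0.5)          # synaptic weights (stand-ins for memristor conductances)
trace = np.zeros(n_inputs)          # decaying presynaptic eligibility traces
v, v_th, leak = 0.0, 2.0, 0.95      # membrane potential, threshold, leak factor

for _ in range(steps):
    shared = rng.random() < 0.05                 # common event for the correlated group
    spikes = rng.random(n_inputs) < 0.02         # independent background firing
    spikes[:n_corr] |= shared                    # first half fires together on shared events
    trace = 0.8 * trace + spikes                 # update eligibility traces
    v = leak * v + w @ spikes                    # leaky integration of weighted input
    if v >= v_th:                                # postsynaptic spike
        v = 0.0
        # STDP-like update: potentiate recently active synapses, depress the rest
        w += 0.01 * (trace > 0.5) - 0.005 * (trace <= 0.5)
        w = np.clip(w, 0.0, 1.0)

corr_w, uncorr_w = w[:n_corr].mean(), w[n_corr:].mean()
print(f"correlated: {corr_w:.2f}, uncorrelated: {uncorr_w:.2f}")  # correlated end up stronger
```

    After training, the correlated group's mean weight exceeds the uncorrelated group's, which is the essence of the unsupervised correlation detection demonstrated in the paper.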

  17. All-memristive neuromorphic computing with level-tuned neurons.

    PubMed

    Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos

    2016-09-02

    In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from the vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.

  18. Holocene debris flows on the Colorado Plateau: The influence of clay mineralogy and chemistry

    USGS Publications Warehouse

    Webb, R.H.; Griffiths, P.G.; Rudd, L.P.

    2008-01-01

    Holocene debris flows do not occur uniformly on the Colorado Plateau province of North America. Debris flows occur in specific areas of the plateau, resulting in general from the combination of steep topography, intense convective precipitation, abundant poorly sorted material not stabilized by vegetation, and the exposure of certain fine-grained bedrock units in cliffs or in colluvium beneath those cliffs. In Grand and Cataract Canyons, fine-grained bedrock that produces debris flows contains primarily single-layer clays - notably illite and kaolinite - and has low multilayer clay content. This clay-mineral suite also occurs in the colluvium that produces debris flows as well as in debris-flow deposits, although unconsolidated deposits have less illite than the source bedrock. We investigate the relation between the clay mineralogy and major-cation chemistry of fine-grained bedrock units and the occurrence of debris flows on the entire Colorado Plateau. We determined that 85 mapped fine-grained bedrock units potentially could produce debris flows, and we analyzed clay mineralogy and major-cation concentration of 52 of the most widely distributed units, particularly those exposed in steep topography. Fine-grained bedrock units that produce debris flows contained an average of 71% kaolinite and illite and 5% montmorillonite and have a higher concentration of potassium and magnesium than nonproducing units, which have an average of 51% montmorillonite and a higher concentration of sodium. We used multivariate statistics to discriminate fine-grained bedrock units with the potential to produce debris flows, and we used digital-elevation models and mapped distribution of debris-flow producing units to derive a map that predicts potential occurrence of Holocene debris flows on the Colorado Plateau. © 2008 Geological Society of America.
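
    The multivariate discrimination step described above can be sketched as a two-class Fisher linear discriminant on clay-composition features. The feature values below are hypothetical stand-ins loosely patterned on the averages quoted in the abstract, not the authors' dataset:

```python
import numpy as np

# Hypothetical training samples: [% kaolinite + illite, % montmorillonite]
producers = np.array([[71, 5], [75, 3], [68, 8], [73, 6]], float)
nonproducers = np.array([[30, 51], [35, 48], [28, 55], [40, 45]], float)

# Fisher linear discriminant: w = pooled_covariance^-1 (mu_producer - mu_nonproducer)
mu_p, mu_n = producers.mean(axis=0), nonproducers.mean(axis=0)
pooled_cov = (np.cov(producers.T) + np.cov(nonproducers.T)) / 2
w = np.linalg.solve(pooled_cov, mu_p - mu_n)
threshold = w @ (mu_p + mu_n) / 2   # midpoint between projected class means

def is_producer(sample):
    """Classify a bedrock unit as debris-flow producing from its clay composition."""
    return w @ np.asarray(sample, float) > threshold

print(is_producer([70, 6]))   # kaolinite/illite-rich unit -> producer
print(is_producer([25, 60]))  # montmorillonite-rich unit -> nonproducer
```

    The projected score separates kaolinite/illite-rich (producing) units from montmorillonite-rich (nonproducing) units, mirroring the discrimination the study performs before mapping.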

  19. 50. Photocopy of Architectural drawing, dated August 6, 1976 by ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    50. Photocopy of Architectural drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21st Space Command. A-5 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - FOURTH FLOOR AND PLATFORM 4A. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  20. 52. Photocopy of Architectural drawing, dated August 6, 1976 by ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    52. Photocopy of Architectural drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21st Space Command. A-10 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - ELEVATION A, B AND C. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  1. 51. Photocopy of Architectural drawing, dated August 6, 1976 by ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    51. Photocopy of Architectural drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21st Space Command. A-6 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - FIFTH FLOOR AND PLATFORM 5A. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  2. 18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-2 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - SITE PLAN. DRAWING NO. AL-2 - SHEET 3 OF 21. - Cape Cod Air Station, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  3. 49. Photocopy of Architectural drawing, dated August 6, 1976 by ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    49. Photocopy of Architectural drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21st Space Command. A-4 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - THIRD FLOOR AND PLATFORM 3A. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  4. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1989 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Interior renovation, Navy recruiting orientation unit, Building No. 45, Architectural second floor plan, Sheet 4 of 29 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  5. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1989 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Interior renovation, Navy recruiting orientation unit, Building No. 45, Architectural third floor plan, Sheet 5 of 29 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  6. I. M. Pei's East Building: Solving Problems of Form and Function. Teacher's Guide. School Arts: Looking/Learning.

    ERIC Educational Resources Information Center

    Hinish, Heidi

    Ieoh Ming (I. M.) Pei, born in Canton, China, came to the United States in 1935 to study architecture, first at the University of Pennsylvania, then at the Massachusetts Institute of Technology, and at Harvard University's Graduate School of Design. Today, Pei's reputation and architectural contributions are renowned worldwide. He has designed…

  7. Deep learning based classification of breast tumors with shear-wave elastography.

    PubMed

    Zhang, Qi; Xiao, Yang; Dai, Wei; Suo, Jingfeng; Wang, Congzhi; Shi, Jun; Zheng, Hairong

    2016-12-01

    This study aims to build a deep learning (DL) architecture for automated extraction of learned-from-data image features from the shear-wave elastography (SWE), and to evaluate the DL architecture in differentiation between benign and malignant breast tumors. We construct a two-layer DL architecture for SWE feature extraction, comprised of the point-wise gated Boltzmann machine (PGBM) and the restricted Boltzmann machine (RBM). The PGBM contains task-relevant and task-irrelevant hidden units, and the task-relevant units are connected to the RBM. Experimental evaluation was performed with five-fold cross validation on a set of 227 SWE images, 135 of benign tumors and 92 of malignant tumors, from 121 patients. The features learned with our DL architecture were compared with the statistical features quantifying image intensity and texture. Results showed that the DL features achieved better classification performance with an accuracy of 93.4%, a sensitivity of 88.6%, a specificity of 97.1%, and an area under the receiver operating characteristic curve of 0.947. The DL-based method integrates feature learning with feature selection on SWE. It may be potentially used in clinical computer-aided diagnosis of breast cancer. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. High Performance GPU-Based Fourier Volume Rendering.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N^2 log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms, which are O(N^3) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive platform that can deliver enormous raw computational power per dollar compared to the central processing unit (CPU). The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. By executing the rendering pipeline entirely on recent GPU architectures, the proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together.
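
    The Fourier projection-slice theorem that underpins FVR can be verified numerically in a few lines. This NumPy sketch is independent of the paper's CUDA implementation: summing a volume along the viewing axis yields exactly the inverse 2D FFT of the zero-frequency slice of the volume's 3D spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((16, 32, 32))   # toy 3D "attenuation" volume (z, y, x)

# Spatial-domain projection: integrate along the viewing (z) axis
projection = volume.sum(axis=0)

# Fourier-domain route: the kz = 0 slice of the 3D spectrum is the
# 2D spectrum of the projection (projection-slice theorem)
spectrum_slice = np.fft.fftn(volume)[0]
projection_fvr = np.fft.ifft2(spectrum_slice).real

print(np.allclose(projection, projection_fvr))   # → True
```

    FVR exploits exactly this identity: after one up-front 3D FFT, each new view costs only a 2D slice extraction plus a 2D inverse FFT, hence the O(N^2 log N) per-projection complexity.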

  9. The ATLAS Event Service: A new approach to event processing

    NASA Astrophysics Data System (ADS)

    Calafiura, P.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.

    2015-12-01

    The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  10. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    PubMed Central

    Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo

    2010-01-01

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion present in the world scene. This motion representation is widely known and applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must run in real time; hence, this restriction must be taken into account. In this paper, we show an efficient approach to estimating the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency that can be obtained from the focal plane processing capabilities of the system. The final outcome is a low-cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used in very diverse application domains. PMID:22319283
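
    The gradient-based flow estimation that such simplified models build on can be sketched as follows. This is a generic Lucas-Kanade-style least-squares solve on a synthetic image pair, not the paper's focal-plane algorithm; the Gaussian test pattern and its 0.3-pixel shift are illustrative assumptions:

```python
import numpy as np

# Synthetic frame pair: a smooth Gaussian blob translated by (dx, dy) = (0.3, 0.0)
y, x = np.mgrid[0:64, 0:64].astype(float)

def blob(cx, cy):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)

frame1, frame2 = blob(32.0, 32.0), blob(32.3, 32.0)

# Gradient-based flow over the whole frame:
# solve [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] (u, v) = -(sum IxIt, sum IyIt)
Iy, Ix = np.gradient(frame1)        # np.gradient returns (d/dy, d/dx) for 2D arrays
It = frame2 - frame1                # temporal derivative
A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
u, v = np.linalg.solve(A, b)

print(u, v)   # estimated motion, close to (0.3, 0.0)
```

    A smart-sensor pipeline like the one described would compute the image derivatives on the focal plane and leave only the small linear solve (here a 2x2 system) to the digital processor.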

  11. Unraveling the hydrodynamics of split root water uptake experiments using CT scanned root architectures and three dimensional flow simulations

    PubMed Central

    Koebernick, Nicolai; Huber, Katrin; Kerkhofs, Elien; Vanderborght, Jan; Javaux, Mathieu; Vereecken, Harry; Vetterlein, Doris

    2015-01-01

    Split root experiments have the potential to disentangle water transport in roots and soil, enabling the investigation of the water uptake pattern of a root system. Interpretation of the experimental data assumes that water flow between the split soil compartments does not occur. Another approach to investigate root water uptake is by numerical simulations combining soil and root water flow depending on the parameterization and description of the root system. Our aim is to demonstrate the synergisms that emerge from combining split root experiments with simulations. We show how growing root architectures derived from temporally repeated X-ray CT scanning can be implemented in numerical soil-plant models. Faba beans were grown with and without split layers and exposed to a single drought period during which plant and soil water status were measured. Root architectures were reconstructed from CT scans and used in the model R-SWMS (root-soil water movement and solute transport) to simulate water potentials in soil and roots in 3D as well as water uptake by growing roots in different depths. CT scans revealed that root development was considerably lower with split layers compared to without. This coincided with a reduction of transpiration, stomatal conductance and shoot growth. Simulated predawn water potentials were lower in the presence of split layers. Simulations showed that this was related to an increased resistance to vertical water flow in the soil by the split layers. Comparison between measured and simulated soil water potentials proved that the split layers were not perfectly isolating and that redistribution of water from the lower, wetter compartments to the drier upper compartments took place, thus water losses were not equal to the root water uptake from those compartments. Still, the layers increased the resistance to vertical flow which resulted in lower simulated collar water potentials that led to reduced stomatal conductance and growth. PMID:26074935

  12. Megafans-Some New Perspectives from a Global Study

    NASA Technical Reports Server (NTRS)

    Wilkinson, M. Justin

    2016-01-01

    A global study of megafans (greater than 100 km long) has revealed their widespread existence on all continents, with almost 200 documented, 93 in Africa where research is most thorough. The largest measures 705 km. Megafans are a major subset of "DFS" (distributive fluvial systems, a category that includes all fan-like features greater than 30 km long). 1. Many researchers now recognize megafans as different from floodplains, small coarse-grained alluvial fans, and deltas. Although smaller architectural elements in megafans are the same as those encountered in floodplains (channel, overbank, etc.), larger architectures differ because of the unconfined setting of megafans, versus the valley-confined setting of floodplains. 2. A length continuum is now documented between steep alluvial fans 10-20 km in length, and fluvial fans 30-50 km long. This implies a continuum of process from end-member alluvial fan processes (e.g. high-energy flows that emplace gravels, debris-flow units) to the relatively fine-grained channel and overbank deposits common to purely fluvial fans. Combinations of these different processes will then occur in many mid-sized fans. 3. The global distribution suggests a prima facie relationship with tectonic environment rather than climatic zones, with local controls being the slope of the formative river and the existence of a basin subsiding below the long profile of the river. But the global population has revealed that most megafans are relict. So it is possible that further research will show relationships to prior climatic regimes. 4. Megafans can have regional importance: e.g., along the east flank of the central Andes, nested megafans total approximately 750,000 km2, and 1.2 million km2 if all megafans in S. America are counted. Modern megafan landscapes thus have basinal importance, orders of magnitude greater than alluvial fan bajadas. 5. Because so many aggrading basins are dominated today by DFS, it is claimed that DFS ought to be significant in the subsurface; and that existing fluvial models therefore may not apply to the majority of fluvial sedimentary units. Arguments have been raised against this view, but as modern megafan systems become better known they are rapidly being applied as a model in many fluvial basins. A small literature has arisen with apparent examples from every part of the world.

  13. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  14. PHYTOREMEDIATION: INTEGRATING ART AND ENGINEERING THROUGH PLANTING

    EPA Science Inventory

    Landscape Architecture and Remediation Engineering are related fields, united by common areas of endeavor, yet they have strikingly different languages, techniques, and habits of thought. What unites the fields is the fact that they often work on the same site, with the common go...

  15. Maintenance of Occupational Control: The Case of Professions.

    ERIC Educational Resources Information Center

    Child, John; Fulk, Janet

    1982-01-01

    Contemporary conditions relevant to the maintenance of occupational control are examined for five professions (accounting, architecture, civil engineering, law, and medicine) in the United Kingdom and the United States as an impetus for the analysis of control by occupations in general. (Author/CT)

  16. Integrated Modular Avionics for Spacecraft: Earth Observation Use Case Demonstrator

    NASA Astrophysics Data System (ADS)

    Deredempt, Marie-Helene; Rossignol, Alain; Hyounet, Philippe

    2013-08-01

    Integrated Modular Avionics (IMA) for Space, a European Space Agency initiative, aimed to make the time and space partitioning concepts, and particularly the ARINC 653 standard [1][2], applicable to the space domain. Expected benefits of such an approach are development flexibility, the capability to provide differential V&V for functionalities of different criticality levels, and the capability to integrate late or in-orbit deliveries. This development flexibility could improve software subcontracting, industrial organization and software reuse. The time and space partitioning technique facilitates the integration of software functions as black boxes, as well as the integration of decentralized functions, such as a star tracker, into the On Board Computer to save mass and power by limiting electronics resources. In the aeronautical domain, the Integrated Modular Avionics architecture is based on a network of LRUs (Line Replaceable Units) interconnected by AFDX (Avionics Full DupleX). The time and space partitioning concept is applied to each LRU and provides independent partitions which intercommunicate using ARINC 653 communication ports. Using the End System (an LRU component), intercommunication between LRUs is managed in the same way as intercommunication between partitions within an LRU. In such an architecture, an application developed using only communication ports can be integrated in one LRU or another without impacting the global architecture. In the space domain, a redundant On Board Computer controls (ground monitoring TM) and manages the platform (ground command TC) in terms of power, solar array deployment, attitude, orbit, thermal, maintenance, and failure detection, isolation and recovery. In addition, payload units and platform units such as the RIU, PCDU, and AOCS units (star tracker, reaction wheels) are considered in this architecture. Interfaces are mainly realized through MIL-STD-1553B busses and SpaceWire, and this could be considered the main constraint for IMA implementation in the space domain.
During the first phase of IMA SP project, ARINC653 impact was analyzed. Requirements and architecture for space domain were defined [3][4] and System Executive platforms (based on Xtratum, Pike OS, and AIR) were developed with RTEMS as Guest OS. This paper focuses on the demonstrator developed by Astrium as part of IMA SP project. This demonstrator has the objective to confirm operational software partitioning feasibility above Xtratum System Executive Platform with acceptable CPU overhead.

  17. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
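
    The a.u.-to-MEF conversion that FlowCal automates can be sketched in a few lines. This is a generic illustration of bead-based calibration, not FlowCal's actual API, and the bead peak positions and MEF values below are hypothetical: fit a log-log linear model between bead peaks measured in a.u. and their manufacturer-assigned MEF values, then apply it to cell measurements.

```python
import numpy as np

# Hypothetical calibration-bead populations: measured peak positions in
# arbitrary units (a.u.) vs manufacturer-assigned MEF values
bead_au = np.array([80.0, 400.0, 2000.0, 10000.0])
bead_mef = np.array([200.0, 1000.0, 5000.0, 25000.0])

# Fit log10(MEF) = m * log10(a.u.) + c, a standard bead-calibration model
m, c = np.polyfit(np.log10(bead_au), np.log10(bead_mef), 1)

def au_to_mef(au):
    """Convert arbitrary-unit fluorescence to molecules of equivalent fluorophore."""
    return 10 ** (m * np.log10(np.asarray(au, float)) + c)

print(au_to_mef(400.0))   # a cell event at 400 a.u. maps back to ~1000 MEF
```

    Because the fitted model absorbs detector gain and instrument differences, measurements taken at different settings or on different cytometers collapse onto the same MEF scale, which is the standardization benefit the abstract describes.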

  18. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  19. Single-unit-cell layer established Bi2WO6 3D hierarchical architectures: Efficient adsorption, photocatalysis and dye-sensitized photoelectrochemical performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongwei; Cao, Ranran; Yu, Shixin

    Single-layer catalysis sparks huge interest and gains widespread attention owing to its high activity. Simultaneously, a three-dimensional (3D) hierarchical structure can afford large surface area and abundant reactive sites, contributing to high efficiency. Herein, we report a single-unit-cell layer established Bi2WO6 3D hierarchical architecture fabricated by a sodium dodecyl benzene sulfonate (SDBS)-assisted assembly strategy. The DBS- long chains can adsorb on the (Bi2O2)2+ layers and hence impede stacking of the layers, resulting in the single-unit-cell layer. We also uncovered that SDS, with a shorter chain, is less effective than SDBS. Due to the sufficient exposure of surface O atoms, single-unit-cell layer 3D Bi2WO6 shows strong selectivity for adsorption of multiform organic dyes with different charges. Remarkably, the single-unit-cell layer 3D Bi2WO6 exhibits profoundly enhanced photodegradation activity and especially a superior photocatalytic H2 evolution rate, a 14-fold increase in contrast to bulk Bi2WO6. Systematic photoelectrochemical characterizations disclose that the substantially elevated carrier density and charge separation efficiency are responsible for the strengthened photocatalytic performance. Additionally, the possibility of using single-unit-cell layer 3D Bi2WO6 in dye-sensitized solar cells (DSSC) has also been explored, and it was shown to be a promising dye-sensitized photoanode for the oxygen evolution reaction (OER). Our work not only furnishes insight into designing single-layer assembled 3D hierarchical architectures, but also offers a multi-functional material for environmental and energy applications.

  20. Evolutionary dynamics of protein domain architecture in plants

    PubMed Central

    2012-01-01

    Background Protein domains are the structural, functional and evolutionary units of the protein. Protein domain architectures are the linear arrangements of domain(s) in individual proteins. Although the evolutionary history of protein domain architecture has been extensively studied in microorganisms, the evolutionary dynamics of domain architecture in the plant kingdom remains largely undefined. To address this question, we analyzed the lineage-based protein domain architecture content in 14 completed green plant genomes. Results Our analyses show that all 14 plant genomes maintain similar distributions of species-specific, single-domain, and multi-domain architectures. Approximately 65% of plant domain architectures are universally present in all plant lineages, while the remaining architectures are lineage-specific. Clear examples are seen of both the loss and gain of specific protein architectures in higher plants. There has been a dynamic, lineage-wise expansion of domain architectures during plant evolution. The data suggest that this expansion can be largely explained by changes in nuclear ploidy resulting from rounds of whole genome duplications. Indeed, there has been a decrease in the number of unique domain architectures when the genomes were normalized into a presumed ancestral genome that has not undergone whole genome duplications. Conclusions Our data show the conservation of universal domain architectures in all available plant genomes, indicating the presence of an evolutionarily conserved, core set of protein components. However, the occurrence of lineage-specific domain architectures indicates that domain architecture diversity has been maintained beyond these core components in plant genomes. 
Although several features of genome-wide domain architecture content are conserved in plants, the data clearly demonstrate lineage-wise, progressive changes and expansions of individual protein domain architectures, reinforcing the notion that plant genomes have undergone dynamic evolution. PMID:22252370

  1. Wavy Architecture Thin-Film Transistor for Ultrahigh Resolution Flexible Displays.

    PubMed

    Hanna, Amir Nabil; Kutbee, Arwa Talal; Subedi, Ram Chandra; Ooi, Boon; Hussain, Muhammad Mustafa

    2018-01-01

    A novel wavy-shaped thin-film-transistor (TFT) architecture, capable of achieving 70% higher drive current per unit chip area when compared with planar conventional TFT architectures, is reported for flexible display applications. The transistor, due to its atypical architecture, does not alter the turn-on voltage or the OFF current values, leading to higher performance without compromising static power consumption. The concept behind this architecture is expanding the transistor's width vertically through grooved trenches in a structural layer deposited on a flexible substrate. Operation of zinc oxide (ZnO)-based TFTs is shown down to a bending radius of 5 mm with no degradation in the electrical performance or cracks in the gate stack. Finally, flexible low-power LEDs driven by the respective currents of the novel wavy and conventional coplanar architectures are demonstrated, where the novel architecture is able to drive the LED at 2 × the output power, 3 versus 1.5 mW, which demonstrates the potential use for ultrahigh resolution displays in an area efficient manner. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. The Double-System Architecture for Trusted OS

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Li, Yu; Zhan, Jing

    With the development of computer science and technology, current secure operating systems have failed to respond to many new security challenges. The trusted operating system (TOS) has been proposed to try to solve these problems. However, there is as yet no mature, unified architecture for the TOS, since most proposals cannot clarify the relationship between the security mechanism and the trusted mechanism. Therefore, this paper proposes a double-system architecture (DSA) for the TOS to solve this problem. The DSA is composed of the Trusted System (TS) and the Security System (SS). We constructed the TS by establishing a trusted environment and realized the related SS. Furthermore, we propose the Trusted Information Channel (TIC) to protect the information flow between the TS and SS. In short, the proposed double-system architecture can provide reliable protection for the OS through the SS, with the support provided by the TS.

  3. Lateral flow strip assay

    DOEpatents

    Miles, Robin R [Danville, CA; Benett, William J [Livermore, CA; Coleman, Matthew A [Oakland, CA; Pearson, Francesca S [Livermore, CA; Nasarabadi, Shanavaz L [Livermore, CA

    2011-03-08

    A lateral flow strip assay apparatus comprising a housing; a lateral flow strip in the housing, the lateral flow strip having a receiving portion; a sample collection unit; and a reagent reservoir. Saliva and/or buccal cells are collected from an individual using the sample collection unit. The sample collection unit is immersed in the reagent reservoir. The tip of the lateral flow strip is immersed in the reservoir and the reagent/sample mixture wicks up into the lateral flow strip to perform the assay.

  4. Design and Fabrication of High-Efficiency CMOS/CCD Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2007-01-01

    An architecture for back-illuminated complementary metal oxide/semiconductor (CMOS) and charge-coupled-device (CCD) ultraviolet/visible/near-infrared light image sensors, and a method of fabrication to implement the architecture, are undergoing development. The architecture and method are expected to enable realization of the full potential of back-illuminated CMOS/CCD imagers to perform with high efficiency, high sensitivity, excellent angular response, and in-pixel signal processing. The architecture and method are compatible with next-generation CMOS dielectric-forming and metallization techniques, and the process flow of the method is compatible with process flows typical of the manufacture of very-large-scale integrated (VLSI) circuits. The architecture and method overcome all obstacles that have hitherto prevented high-yield, low-cost fabrication of back-illuminated CMOS/CCD imagers by use of standard VLSI fabrication tools and techniques. It is not possible to discuss the obstacles in detail within the space available for this article. Briefly, the obstacles are posed by the problems of generating light-absorbing layers having desired uniform and accurate thicknesses, passivation of surfaces, forming structures for efficient collection of charge carriers, and wafer-scale thinning (in contradistinction to die-scale thinning). A basic element of the present architecture and method - the element that, more than any other, makes it possible to overcome the obstacles - is the use of an alternative starting material: Instead of starting with a conventional bulk-CMOS wafer that consists of a p-doped epitaxial silicon layer grown on a heavily-p-doped silicon substrate, one starts with a special silicon-on-insulator (SOI) wafer that consists of a thermal oxide buried between a lightly p- or n-doped, thick silicon layer and a device silicon layer of appropriate thickness and doping.
The thick silicon layer is used as a handle: that is, as a mechanical support for the device silicon layer during micro-fabrication.

  5. Genetic architecture and genomic patterns of gene flow between hybridizing species of Picea

    PubMed Central

    De La Torre, A; Ingvarsson, P K; Aitken, S N

    2015-01-01

    Hybrid zones provide an opportunity to study the effects of selection and gene flow in natural settings. We employed nuclear microsatellite (simple sequence repeat (SSR)) and candidate gene single-nucleotide polymorphism (SNP) markers to characterize the genetic architecture and patterns of interspecific gene flow in the Picea glauca × P. engelmannii hybrid zone across a broad latitudinal (40–60 degrees) and elevational (350–3500 m) range in western North America. Our results revealed a wide and complex hybrid zone with broad ancestry levels and low interspecific heterozygosity, shaped by asymmetric advanced-generation introgression and low reproductive barriers between parental species. The clinal variation based on geographic variables, the lack of concordance in clines among loci, and the width of the hybrid zone point towards the maintenance of species integrity through environmental selection. Congruency between geographic and genomic clines suggests that loci with narrow clines are under strong selection, favoring either one parental species (directional selection) or their hybrids (overdominance) as a result of strong associations with climatic variables such as precipitation as snow and mean annual temperature. Cline movement due to past demographic events (evidenced by allelic richness and heterozygosity shifts from the average cline center) may explain the asymmetry in introgression and the predominance of P. engelmannii found in this study. These results provide insights into the genetic architecture and fine-scale patterns of admixture, and identify loci that may be involved in reproductive barriers between the species. PMID:25806545

  6. Specification of an integrated information architecture for a mobile teleoperated robot for home telecare.

    PubMed

    Iannuzzi, David; Grant, Andrew; Corriveau, Hélène; Boissy, Patrick; Michaud, Francois

    2016-12-01

    The objective of this study was to design an effectively integrated information architecture for a mobile teleoperated robot in remote assistance to the delivery of home health care. Three role classes were identified related to the deployment of a telerobot, namely engineer, technology integrator, and health professional. Patients and natural caregivers were considered indirectly, as they will be a component of future field studies. Interviewing representatives of each class provided the functions, and the information content and flows for each function. Interview transcripts enabled the formulation of UML (Unified Modeling Language) diagrams for feedback from participants. The proposed information architecture was validated with a use-case scenario. The integrated information architecture incorporates progressive design, ergonomic integration, and the home care needs from medical specialist, nursing, physiotherapy, occupational therapy, and social worker care perspectives. The iterative refinement of the integrated architecture promoted insight among participants. The use-case scenario evaluation showed the design's robustness. Complex innovation such as a telerobot must coherently mesh with health-care service delivery needs. The deployment of an integrated information architecture bridging development with specialist and home care applications is necessary for home care technology innovation. It enables the continuing evolution of robot and novel health information design within the same integrated architecture, while accounting for the patient's ecological needs.

  7. Architectural design of heterogeneous metallic nanocrystals--principles and processes.

    PubMed

    Yu, Yue; Zhang, Qingbo; Yao, Qiaofeng; Xie, Jianping; Lee, Jim Yang

    2014-12-16

    CONSPECTUS: Heterogeneous metal nanocrystals (HMNCs) are a natural extension of simple metal nanocrystals (NCs), but as a research topic, they have been much less explored until recently. HMNCs are formed by integrating metal NCs of different compositions into a common entity, similar to the way atoms are bonded to form molecules. HMNCs can be built to exhibit an unprecedented architectural diversity and complexity by programming the arrangement of the NC building blocks ("unit NCs"). The architectural engineering of HMNCs involves the design and fabrication of the architecture-determining elements (ADEs), i.e., unit NCs with precise control of shape and size, and their relative positions in the design. Similar to molecular engineering, where structural diversity is used to create more property variations for application explorations, the architectural engineering of HMNCs can similarly increase the utility of metal NCs by offering a suite of properties to support multifunctionality in applications. The architectural engineering of HMNCs calls for processes and operations that can execute the design. Some enabling technologies already exist in the form of classical micro- and macroscale fabrication techniques, such as masking and etching. These processes, when used singly or in combination, are fully capable of fabricating nanoscopic objects. What is needed is a detailed understanding of the engineering control of ADEs and the translation of these principles into actual processes. For simplicity of execution, these processes should be integrated into a common reaction system and yet retain independence of control. The key to architectural diversity is therefore the independent controllability of each ADE in the design blueprint. The right chemical tools must be applied under the right circumstances in order to achieve the desired outcome. 
In this Account, after a short illustration of the infinite possibility of combining different ADEs to create HMNC design variations, we introduce the fabrication processes for each ADE, which enable shape, size, and location control of the unit NCs in a particular HMNC design. The principles of these processes are discussed and illustrated with examples. We then discuss how these processes may be integrated into a common reaction system while retaining the independence of individual processes. The principles for the independent control of each ADE are discussed in detail to lay the foundation for the selection of the chemical reaction system and its operating space.

  8. A security architecture for interconnecting health information systems.

    PubMed

    Gritzalis, Dimitris; Lambrinoudakis, Costas

    2004-03-31

    Several hereditary and other chronic diseases necessitate continuous and complicated health care procedures, typically offered in different, often distant, health care units. Inevitably, the medical records of patients suffering from such diseases become complex, grow in size very fast and are scattered all over the units involved in the care process, hindering communication of information between health care professionals. Web-based electronic medical records have recently been proposed as the solution to the above problem, facilitating the interconnection of the health care units in the sense that health care professionals can now access the complete medical record of the patient, even if it is distributed over several remote units. However, by allowing users to access information from virtually anywhere, the universe of ineligible people who may attempt to harm the system is dramatically expanded, thus severely complicating the design and implementation of a secure environment. This paper presents a security architecture that has been designed mainly for providing authentication and authorization services in web-based distributed systems. The architecture is based on a role-based access scheme and on the implementation of an intelligent security agent per site (i.e. health care unit). This intelligent security agent: (a) authenticates the users, local or remote, that can access the local resources; (b) assigns, through temporary certificates, access privileges to the authenticated users in accordance with their role; and (c) communicates to other sites (through the respective security agents) information about the local users that may need to access information stored in other sites, as well as about local resources that can be accessed remotely.
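
    The agent's three duties listed above (authenticate, issue temporary certificates, authorize by role) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the role names, resources, and certificate format are assumptions made for the example.

```python
# Minimal sketch of a per-site role-based security agent: authenticate a user,
# issue a temporary certificate (token with expiry), and authorize actions by
# role. Role names and resources below are illustrative assumptions.

import secrets
import time

ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse": {"read_record"},
    "admin": {"manage_users"},
}

class SecurityAgent:
    """One intelligent security agent per health care unit (site)."""

    def __init__(self, site):
        self.site = site
        self.users = {}          # username -> (password, role)
        self.certificates = {}   # token -> (username, role, expiry)

    def register(self, username, password, role):
        self.users[username] = (password, role)

    def authenticate(self, username, password, ttl=3600):
        """Authenticate a user and issue a temporary certificate (token)."""
        stored = self.users.get(username)
        if stored is None or stored[0] != password:
            return None
        token = secrets.token_hex(16)
        self.certificates[token] = (username, stored[1], time.time() + ttl)
        return token

    def authorize(self, token, action):
        """Check a temporary certificate against the holder's role privileges."""
        cert = self.certificates.get(token)
        if cert is None or time.time() > cert[2]:
            return False
        return action in ROLE_PERMISSIONS.get(cert[1], set())

agent = SecurityAgent("cardiology_unit")
agent.register("alice", "s3cret", "nurse")
token = agent.authenticate("alice", "s3cret")
print(agent.authorize(token, "read_record"))   # True
print(agent.authorize(token, "write_record"))  # False
```

    Inter-site communication (duty (c)) would layer on top of this, with each site's agent vouching for its local users via the same temporary certificates.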

  9. State-of-the-art in Heterogeneous Computing

    DOE PAGES

    Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...

    2010-01-01

    Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.

  10. A method to generate small-scale, high-resolution sedimentary bedform architecture models representing realistic geologic facies

    DOE PAGES

    Meckel, T. A.; Trevisan, L.; Krishnamurthy, P. G.

    2017-08-23

    Small-scale (mm to m) sedimentary structures (e.g. ripple lamination, cross-bedding) have received a great deal of attention in sedimentary geology. The influence of depositional heterogeneity on subsurface fluid flow is now widely recognized, but incorporating these features in physically-rational bedform models at various scales remains problematic. The current investigation expands the capability of an existing set of open-source codes, allowing generation of high-resolution 3D bedform architecture models. The implemented modifications enable the generation of 3D digital models consisting of laminae and matrix (binary field) with characteristic depositional architecture. The binary model is then populated with petrophysical properties using a textural approach for additional analysis such as statistical characterization, property upscaling, and single and multiphase fluid flow simulation. One example binary model with corresponding threshold capillary pressure field and the scripts used to generate them are provided, but the approach can be used to generate dozens of previously documented common facies models and a variety of property assignments. An application using the example model is presented simulating buoyant fluid (CO2) migration and resulting saturation distribution.
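
    The two-stage pipeline described (a binary laminae/matrix indicator field, then population with a petrophysical property such as threshold capillary pressure) can be sketched as follows. The dipping-laminae geometry and the property values are illustrative assumptions, not the published open-source codes.

```python
# Sketch of a binary bedform model: build a 3D laminae/matrix indicator field
# with inclined laminae (a crude stand-in for cross-bedding), then populate it
# with threshold capillary pressures. All parameter values are illustrative.

import numpy as np

nx, ny, nz = 64, 64, 32

# Dipping laminae: surfaces of constant (0.3*x + 0.1*y + z) repeat with
# period 8 cells; cells in the first 3/8 of each period are laminae.
x = np.arange(nx)[:, None, None]
y = np.arange(ny)[None, :, None]
z = np.arange(nz)[None, None, :]
phase = (0.3 * x + 0.1 * y + z) % 8
binary = (phase < 3).astype(int)   # 1 = lamina, 0 = matrix; lamina fraction = 3/8

# Populate with threshold capillary pressure (Pa): laminae are finer-grained,
# hence higher entry pressure (values chosen only for demonstration).
rng = np.random.default_rng(0)
pc = np.where(binary == 1,
              rng.normal(3000.0, 200.0, binary.shape),
              rng.normal(1000.0, 100.0, binary.shape))

print(binary.shape, float(binary.mean()))
```

    A buoyant-fluid (CO2) invasion simulation would then march through `pc`, entering a cell only where buoyancy pressure exceeds the local threshold.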

  12. Finite element study of scaffold architecture design and culture conditions for tissue engineering.

    PubMed

    Olivares, Andy L; Marsal, Elia; Planell, Josep A; Lacroix, Damien

    2009-10-01

    Tissue engineering scaffolds provide temporary mechanical support for tissue regeneration and, through their architecture, transduce the global mechanical load into mechanical stimuli on cells. In this study the interactions between scaffold pore morphology, the mechanical stimuli developed at the cell microscopic level, and the culture conditions applied at the macroscopic scale are studied on two regular scaffold structures. Gyroid and hexagonal scaffolds of 55% and 70% porosity were modeled in a finite element analysis and were subjected to an inlet fluid flow or a compressive strain. A mechanoregulation theory based on scaffold shear strain and fluid shear stress was applied to determine the influence of each structure on the mechanical stimuli under initial conditions. Results indicate that the distribution of shear stress induced by fluid perfusion is very dependent on pore distribution within the scaffold. Gyroid architectures provide better accessibility of the fluid than hexagonal structures. Based on the mechanoregulation theory, the differentiation process in these structures was more sensitive to inlet fluid flow than to axial strain of the scaffold. This study provides a computational approach to determine the mechanical stimuli at the cellular level when cells are cultured in a bioreactor and to relate mechanical stimuli with cell differentiation.
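
    The mechanoregulation theory invoked here is commonly formulated (following Prendergast and colleagues) as a scalar stimulus combining octahedral shear strain and relative fluid velocity. The constants and tissue-outcome thresholds in the sketch below are the ones widely used in that literature; they are assumptions here, since the abstract does not state the study's exact values.

```python
# Sketch of a Prendergast-style mechanoregulation stimulus:
#   S = gamma / a + v / b
# with gamma the octahedral shear strain, v the relative fluid velocity
# (micrometres per second), and a = 0.0375, b = 3 um/s the constants commonly
# used in the literature (assumed here, not taken from this study).

def mechano_stimulus(shear_strain, fluid_velocity_um_s, a=0.0375, b=3.0):
    """Combined biophysical stimulus; higher S pushes toward fibrous tissue."""
    return shear_strain / a + fluid_velocity_um_s / b

def predicted_tissue(S):
    """Map the stimulus to a differentiation outcome (commonly used thresholds)."""
    if S < 0.01:
        return "resorption"
    elif S < 1.0:
        return "bone"
    elif S < 3.0:
        return "cartilage"
    else:
        return "fibrous tissue"

# A cell seeing 2% shear strain and 1 um/s fluid velocity:
print(predicted_tissue(mechano_stimulus(0.02, 1.0)))  # bone
```

    In a scaffold study, gamma and v would come from the finite element and computational fluid dynamics solutions evaluated at each point of the pore surface.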

  13. 48. Photocopy of Architectural Layout drawing, dated August 6, 1976 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    48. Photocopy of Architectural Layout drawing, dated August 6, 1976 by Raytheon Company. Original drawing property of United States Air Force, 21" Space Command. AL-2 - PAVE PAWS TECHNICAL FACILITY - OTIS AFB - EQUIPMENT LAYOUT - SECOND FLOOR AND PLATFORM 2A. DRAWING NO. AW35-46-06 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  14. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1989 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Interior renovation, Navy recruiting orientation unit, Building No. 45, Architectural floor plan and window details, Sheet 3 of 29 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  15. Photocopy of drawing (this photograph is an 8" x 10" ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of drawing (this photograph is an 8" x 10" copy of an 8" x 10" negative; 1989 original architectural drawing located at Building No. 458, NAS Pensacola, Florida) Interior renovation, Navy recruiting orientation unit, Building No. 45, Architectural floor plan and general notes, Sheet 2 of 29 - U.S. Naval Air Station, Equipment Shops & Offices, 206 South Avenue, Pensacola, Escambia County, FL

  16. Utopia, University and Architecture: A Journey that Changed the Design of Contemporary Universities

    ERIC Educational Resources Information Center

    Calvo-Sotelo, Pablo Campos

    2006-01-01

    In 1927, a group of advisors to King Alfonso XIII of Spain, led by the architect Modesto Lopez-Otero, set out for the United States and Canada. Previously, they had visited a number of European cities where they examined the medieval architectural form of some famous universities. Inspired by a Utopian vision, the journey to the New World studied…

  17. Alta Scuola Politecnica: An Ongoing Experiment in the Multidisciplinary Education of Top Students towards Innovation in Engineering, Architecture and Design

    ERIC Educational Resources Information Center

    Benedetto, S.; Bernelli Zazzera, F.; Bertola, P.; Cantamessa, M.; Ceri, S.; Ranci, C.; Spaziante, A.; Zanino, R.

    2010-01-01

    Politecnico di Milano and Politecnico di Torino, the top technical universities in Italy, united their efforts in 2004 by launching a unique excellence programme called Alta Scuola Politecnica (ASP). The ASP programme is devoted to 150 students, selected each year from among the top 5-10% of those enrolled in the Engineering, Architecture and…

  18. Power, Avionics and Software Communication Network Architecture

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This document describes the communication architecture for the Power, Avionics and Software (PAS) 2.0 subsystem for the Advanced Extravehicular Mobile Unit (AEMU). The following systems are described in detail: Caution Warning and Control System, Informatics, Storage, Video, Audio, Communication, and Monitoring Test and Validation. This document also provides some background as well as the purpose and goals of the PAS project at Glenn Research Center (GRC).

  19. Tactical Unit Data and Decision Requirements for Urban Operations

    DTIC Science & Technology

    2008-10-01

    maneuverability, sensor optimization, weapon effects, terrain), and develop rich but lightweight information structures and architecture. Specifically, the goal...tion structure and systemic architecture that enable sharing common data in near real-time, and mission planning and analysis software for U-BE...Building Function Verify mosque and identify possible schools or meeting places Identify types of cinema (stage/theatre) or other similar nearby

  20. Generic Software for Emulating Multiprocessor Architectures.

    DTIC Science & Technology

    1985-05-01

    AD-A157 662: Generic Software for Emulating Multiprocessor Architectures. MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Keywords: computer architecture, emulation, simulation, dataflow.

  1. Branch length mediates flower production and inflorescence architecture of Fouquieria splendens (ocotillo)

    USGS Publications Warehouse

    Bowers, Janice E.

    2006-01-01

    The capacity of individual branches to store water and fix carbon can have profound effects on inflorescence size and architecture, thus on floral display, pollination, and fecundity. Mixed regression was used to investigate the relation between branch length, a proxy for plant resources, and floral display of Fouquieria splendens (ocotillo), a woody, candelabraform shrub of wide distribution in arid North America. Long branches produced three times as many flowers as short branches, regardless of overall plant size. Long branches also had more complex panicles with more cymes and cyme types than short branches; thus, branch length also influenced inflorescence architecture. Within panicles, increasing the number of cymes by one unit added about two flowers, whereas increasing the number of cyme types by one unit added about 21 flowers. Because flower production is mediated by branch length, and because most plants have branches of various lengths, the floral display of individual plants necessarily encompasses a wide range of inflorescence size and structure. © Springer 2006.
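
    The reported per-unit effects can be read as coefficients of a linear model in cymes and cyme types. Since the abstract gives no intercept, the sketch below only computes differences in expected flower number between two panicles; the example inputs are hypothetical.

```python
# Linear reading of the reported marginal effects: ~2 flowers per added cyme
# and ~21 flowers per added cyme type. With no intercept reported, only
# between-panicle differences can be computed from the abstract.

B_CYME = 2.0        # flowers added per extra cyme (from the study)
B_CYME_TYPE = 21.0  # flowers added per extra cyme type (from the study)

def extra_flowers(d_cymes, d_cyme_types):
    """Expected difference in flower number between two panicles."""
    return B_CYME * d_cymes + B_CYME_TYPE * d_cyme_types

# A panicle with 5 more cymes and 1 more cyme type than another (hypothetical):
print(extra_flowers(5, 1))  # 31.0
```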

  2. Genetic and environmental influences on the relationship between flow proneness, locus of control and behavioral inhibition.

    PubMed

    Mosing, Miriam A; Pedersen, Nancy L; Cesarini, David; Johannesson, Magnus; Magnusson, Patrik K E; Nakamura, Jeanne; Madison, Guy; Ullén, Fredrik

    2012-01-01

    Flow is a psychological state of high but subjectively effortless attention that typically occurs during active performance of challenging tasks and is accompanied by a sense of automaticity, high control, low self-awareness, and enjoyment. Flow proneness is associated with traits and behaviors related to low neuroticism such as emotional stability, conscientiousness, active coping, self-esteem and life satisfaction. Little is known about the genetic architecture of flow proneness, behavioral inhibition and locus of control--traits also associated with neuroticism--and their interrelation. Here, we hypothesized that individuals low in behavioral inhibition and with an internal locus of control would be more likely to experience flow, and explored the genetic and environmental architecture of the relationship between the three variables. Behavioral inhibition and locus of control were measured in a large population sample of 3,375 full twin pairs and 4,527 single twins, about 26% of whom also completed the flow proneness questionnaire. Findings revealed significant but relatively low correlations between the three traits and moderate heritability estimates of .41, .45, and .30 for flow proneness, behavioral inhibition, and locus of control, respectively, with some indication of non-additive genetic influences. For behavioral inhibition we found significant sex differences in heritability, with females showing a higher estimate including significant non-additive genetic influences, while in males the entire heritability was due to additive genetic variance. We also found a mainly genetically mediated relationship between the three traits, suggesting that individuals who are genetically predisposed to experience flow show less behavioral inhibition (are less anxious) and feel that they are in control of their own destiny (internal locus of control). We discuss the possibility that some of the genes underlying this relationship are those influencing the function of dopaminergic neural systems.
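
    For orientation, twin-based heritability in its simplest form is Falconer's formula, h² = 2(rMZ − rDZ). The study itself fits full biometric (ACE/ADE) models; the correlations in the sketch below are hypothetical numbers chosen only to show the calculation.

```python
# Falconer's classical twin estimates: heritability h^2 = 2*(r_MZ - r_DZ)
# and shared-environment variance c^2 = 2*r_DZ - r_MZ. The twin correlations
# used here are hypothetical, not values from the paper.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from monozygotic and dizygotic twin correlations."""
    return 2.0 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    """Falconer estimate of shared-environment variance."""
    return 2.0 * r_dz - r_mz

r_mz, r_dz = 0.42, 0.22   # hypothetical twin correlations
print(round(falconer_h2(r_mz, r_dz), 2))        # 0.4
print(round(shared_environment(r_mz, r_dz), 2)) # 0.02
```

    When rMZ exceeds twice rDZ, the Falconer c² goes negative, which is one signal of the non-additive (dominance) genetic influences the abstract mentions.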

  3. Minimum flow unit installation at the South Edwards Hydro Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhardt, P.; Bates, D.

    1995-12-31

    Niagara Mohawk Power Corp. owns and operates the 3.3 MW South Edwards Hydro Plant in Northern New York. The FERC license for this plant requires a minimum flow release in the bypass region of the river. NMPC submitted a license amendment to the FERC to permit the addition of a minimum flow unit to take advantage of this flow. The amendment was accepted, permitting the installation of the 236 kW, 60 cfs unit to proceed. The unit was installed and commissioned in 1994.
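
    As a plausibility check (not stated in the abstract), the unit's rating relates to its flow through the standard hydropower relation P = ρgQHη. Head and efficiency are not given, so the sketch assumes a typical efficiency and back-calculates the implied head.

```python
# Back-of-envelope check of the minimum flow unit rating via P = rho*g*Q*H*eta.
# The plant head and unit efficiency are assumptions: eta is a typical value
# and the head is back-calculated from the 236 kW, 60 cfs rating.

RHO = 1000.0            # water density, kg/m^3
G = 9.81                # gravitational acceleration, m/s^2
CFS_TO_M3S = 0.0283168  # cubic feet per second -> m^3/s

Q = 60 * CFS_TO_M3S     # rated flow, m^3/s
P = 236e3               # rated power, W
eta = 0.85              # assumed turbine/generator efficiency

head = P / (RHO * G * Q * eta)
print(round(Q, 3), round(head, 1))  # ~1.699 m^3/s and an implied head of roughly 17 m
```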

  4. An Application of Data Mining Techniques for Flood Forecasting: Application in Rivers Daya and Bhargavi, India

    NASA Astrophysics Data System (ADS)

    Panigrahi, Binay Kumar; Das, Soumya; Nath, Tushar Kumar; Senapati, Manas Ranjan

    2018-05-01

    In the present study, with a view to forecasting the water flow of two rivers in eastern India, namely the river Daya and the river Bhargavi, the focus was on developing a Cascaded Functional Link Artificial Neural Network (C-FLANN) model. Parameters of the C-FLANN architecture were updated using Harmony Search (HS) and Differential Evolution (DE). As the number of samples is very low, there is a risk of overfitting. To avoid this, a MapReduce-based ANOVA technique is used to select important features. These features were provided to the architecture, which is used to predict the water flow in both rivers one day, one week and two weeks ahead. The results of both techniques were compared with the Radial Basis Function Neural Network (RBFNN) and the Multilayer Perceptron (MLP), two widely used artificial neural networks for prediction. The results confirmed that C-FLANN trained through HS gives better predictions than C-FLANN trained through DE, RBFNN or MLP, and can be used for predicting water flow in different rivers.
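
    A functional link ANN replaces hidden layers with a fixed nonlinear expansion of the inputs followed by a single linear layer. The sketch below is a generic FLANN with a trigonometric expansion fitted on synthetic data; the paper's cascaded variant and its Harmony Search / Differential Evolution training are not reproduced, and ordinary least squares stands in for the metaheuristic purely for illustration.

```python
# Minimal FLANN sketch: expand a 1-D input with a trigonometric basis, then
# fit a single linear layer. Ordinary least squares replaces the paper's
# Harmony Search / Differential Evolution tuning; the data are synthetic.

import numpy as np

def flann_expand(x, order=2):
    """Trigonometric functional expansion of a 1-D input array."""
    feats = [np.ones_like(x), x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.stack(feats, axis=1)   # shape (n_samples, 2 + 2*order)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)  # synthetic "flow" signal

Phi = flann_expand(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear layer weights
pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # close to the 0.1 noise level, since sin(2*pi*x) is in the basis
```

    A cascaded FLANN would stack such expansion-plus-linear stages, feeding each stage's output forward; a metaheuristic like HS searches the weight space instead of solving least squares.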

  5. Passive Cooling of Body Armor

    NASA Astrophysics Data System (ADS)

    Holtz, Ronald; Matic, Peter; Mott, David

    2013-03-01

Warfighter performance can be adversely affected by the heat load and weight of equipment. Current tactical vest designs are good insulators and lack ventilation, and thus do not effectively manage the metabolic heat generated. NRL has undertaken a systematic study of tactical vest thermal management, leading to physics-based strategies that provide improved cooling without undesirable consequences such as added weight, added electrical power requirements, or compromised protection. The approach is based on evaporative cooling of sweat produced by the wearer of the vest, in an air flow provided by ambient wind or the ambulatory motion of the wearer. Using an approach that included thermodynamic analysis, computational fluid dynamics modeling, air flow measurements of model ventilated vest architectures, and studies of the influence of fabric aerodynamic drag characteristics, materials and geometry were identified that optimize passive cooling of tactical vests. Specific architectural features of the vest design allow for optimal ventilation patterns, and the selection of fabrics for vest construction optimizes evaporation rates while reducing air flow resistance. Cooling rates consistent with the theoretical and modeling predictions were verified experimentally for 3D mockups.

  6. Flow rate of transport network controls uniform metabolite supply to tissue

    PubMed Central

    Meigel, Felix J.

    2018-01-01

The life and functioning of higher organisms depend on the continuous supply of metabolites to tissues and organs. What are the requirements on the transport network pervading a tissue to provide a uniform supply of nutrients, minerals, or hormones? To theoretically answer this question, we present an analytical scaling argument and numerical simulations on how flow dynamics and network architecture control active spread and uniform supply of metabolites by studying the example of xylem vessels in plants. We identify the fluid inflow rate as the key factor for uniform supply. While at low inflow rates metabolites are already exhausted close to flow inlets, too high inflow flushes metabolites through the network and deprives tissue close to inlets of supply. In between these two regimes, there exists an optimal inflow rate that yields a uniform supply of metabolites. We determine this optimal inflow analytically in quantitative agreement with numerical results. Optimizing network architecture by reducing the supply variance over all network tubes, we identify patterns of tube dilation or contraction that compensate sub-optimal supply for the case of too low or too high inflow rate. PMID:29720455
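One facet of the inflow tradeoff described above can be illustrated with a toy single-tube model (much simpler than the paper's network model): with advection velocity v and uptake rate k, the steady concentration decays as c(x) = exp(-kx/v), so slow flow starves downstream tissue while fast flow supplies uniformly but carries most metabolite past the outlet:

```python
import numpy as np

# Toy steady-state model of metabolite supply along a single tube (not the
# paper's full network model).  With advection velocity v and uptake rate k,
# the concentration decays as c(x) = exp(-k * x / v).  Local supply to the
# tissue is k*c(x): slow flow gives very non-uniform supply, while fast
# flow is uniform but loses most metabolite through the outlet.

def supply_profile(v, k=1.0, length=1.0, n=200):
    x = np.linspace(0.0, length, n)
    c = np.exp(-k * x / v)
    return k * c                          # local uptake per unit tissue

for v in (0.1, 1.0, 10.0):
    s = supply_profile(v)
    cv = s.std() / s.mean()               # coefficient of variation (uniformity)
    delivered = 1.0 - np.exp(-1.0 / v)    # fraction absorbed before the outlet
    print(f"v={v:5.1f}  supply CV={cv:.2f}  delivered fraction={delivered:.2f}")
```

The intermediate velocity balances the two failure modes, echoing the paper's finding of an optimal inflow rate between the starved and flushed regimes.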

  7. Modeling the evolution of protein domain architectures using maximum parsimony.

    PubMed

    Fong, Jessica H; Geer, Lewis Y; Panchenko, Anna R; Bryant, Stephen H

    2007-02-09

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture "neighbors" identified in this way may lead to new insights about the evolution of protein function.
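The fusion-counting idea can be made concrete with a small dynamic program (an illustrative sketch, not the authors' algorithm): treating a domain architecture as an ordered tuple of domains, the fewest fusion events needed to assemble it from a given set of precursor architectures is one less than the minimum number of precursor blocks that tile it left to right:

```python
# Minimal parsimony sketch (illustrative, not the paper's method): given a
# set of precursor architectures, find the fewest fusion events needed to
# assemble a child architecture by concatenating precursors left to right.
# If a child splits into m precursor blocks, it needs m - 1 fusions.

def min_fusions(child, precursors):
    """Dynamic program over prefixes of the child architecture."""
    n = len(child)
    INF = float("inf")
    best = [0] + [INF] * n            # best[i]: min blocks covering child[:i]
    for i in range(1, n + 1):
        for p in precursors:
            k = len(p)
            if k <= i and tuple(child[i - k:i]) == tuple(p) and best[i - k] < INF:
                best[i] = min(best[i], best[i - k] + 1)
    return best[n] - 1 if best[n] < INF else None   # fusions = blocks - 1

precursors = [("A", "B"), ("C",), ("A",), ("B", "C")]
print(min_fusions(("A", "B", "C"), precursors))     # one fusion: (A,B) + (C)
```

A full parsimony analysis like the paper's would also allow fissions, domain gains, and inferred ancestral precursors, but the same block-counting principle underlies the ranking of scenarios.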

  8. Modeling the Evolution of Protein Domain Architectures Using Maximum Parsimony

    PubMed Central

    Fong, Jessica H.; Geer, Lewis Y.; Panchenko, Anna R.; Bryant, Stephen H.

    2007-01-01

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture “neighbors” identified in this way may lead to new insights about the evolution of protein function. PMID:17166515

  9. Hierarchy of facies of pyroclastic flow deposits generated by Laacher See type eruptions

    NASA Astrophysics Data System (ADS)

    Freundt, A.; Schmincke, H.-U.

    1985-04-01

    The upper Quaternary pyroclastic flow deposits of Laacher See volcano show compositional and structural facies variations on four different scales: (1) eruptive units of pyroclastic flows, composed of many flow units; (2) depositional cycles of as many as five flow units; flow units containing (3) regional intraflow-unit facies; and (4) local intraflow-unit subfacies. These facies can be explained by successively overlapping processes beginning in the magma column and ending with final deposition. The pyroclastic flow deposits thus reflect major aspects of the eruptive history of Laacher See volcano: (a) drastic changes in eruptive mechanism due to increasing access of water to the magma chamber and (b) change in chemical composition and crystal and gas content as evacuation of a compositionally zoned magma column progressed. The four scales of facies result from four successive sets of processes: (1) differentiation in the magma column and external factors governing the mechanism of eruption; (2) temporal variations of factors inducing eruption column collapse; (3) physical conditions in the eruption column and the way in which its collapse proceeds; and (4) interplay of flow-inherent and morphology-induced transport mechanics.

  10. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hsien-Hsin S

The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  12. Aspects of IVHS architecture design

    DOT National Transportation Integrated Search

    1991-09-17

IVHS systems influence four kinds of decisions that drivers make during their trip. The corresponding tasks that IVHS systems carry out are route and flow control, congestion control, vehicle coordination, and spacing. A comparison of two scenari...

  13. Evaluation of Sidestream Darkfield Microscopy for Real-Time Imaging Acellular Dermal Matrix Revascularization.

    PubMed

    DeGeorge, Brent R; Olenczak, J Bryce; Cottler, Patrick S; Drake, David B; Lin, Kant Y; Morgan, Raymond F; Campbell, Christopher A

    2016-06-01

Acellular dermal matrices (ADMs) serve as a regenerative framework for host cell integration and collagen deposition to augment the soft tissue envelope in ADM-assisted breast reconstruction, a process dependent on vascular ingrowth. To date, noninvasive intra-operative imaging techniques have been inadequate to evaluate the revascularization of ADM. We investigated the safety, feasibility, and efficacy of sidestream darkfield (SDF) microscopy to assess the status of ADM microvascular architecture in 8 patients at the time of tissue expander to permanent implant exchange during 2-stage ADM-assisted breast reconstruction. SDF microscopy is a handheld technique that can be used intraoperatively for the real-time assessment of ADM blood flow, vessel density, vessel size, and branching pattern. SDF microscopy was used to assess the microvascular architecture in the center and border zone of the ADM and to compare the native, non-ADM-associated capsule in each patient as a within-subject control. No incidences of periprosthetic infection, explantation, or adverse events were reported after SDF image acquisition. Native capsules demonstrate a complex, layered architecture with an average vessel area density of 14.9 mm/mm and total vessel length density of 12.3 mm/mm. In contrast to native periprosthetic capsules, ADM-associated capsules are not uniformly vascularized structures and demonstrate 2 zones of microvascular architecture. The ADM and native capsule border zone demonstrates palisading peripheral vascular arcades with continuous antegrade flow. The central zone of the ADM demonstrates punctate perforating vascular plexi with intermittent, sluggish flow and intervening 2- to 3-cm watershed zones. Sidestream darkfield microscopy allows for real-time intraoperative assessment of ADM revascularization and serves as a potential methodology to compare revascularization parameters among commercially available ADMs. SDF microscopy demonstrates that the periprosthetic capsule in ADM-assisted implant-based breast reconstruction is not a uniformly vascularized structure.

  14. A generalized LSTM-like training algorithm for second-order recurrent neural networks

    PubMed Central

    Monner, Derek; Reggia, James A.

    2011-01-01

The Long Short Term Memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM’s original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the Generalized Long Short-Term Memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. PMID:21803542
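For reference, the per-step computation of the original LSTM cell that LSTM-g generalizes can be sketched as follows (forward pass only, without peephole connections and without any training rule; parameter shapes are illustrative):

```python
import numpy as np

# One forward step of a standard LSTM cell (the original architecture the
# paper generalizes; this sketch does not implement LSTM-g's training rule).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """W, U, b each hold stacked parameters for gates i, f, o and input g."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
    c = f * c_prev + i * np.tanh(g)                # memory cell update
    h = o * np.tanh(c)                             # exposed hidden state
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h.shape, c.shape)   # (4,) (4,)
```

The gated memory cell `c` is what lets the architecture retain information over many time steps; LSTM-g's contribution is a uniform learning rule for such gated units across a wider class of second-order networks.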

  15. Quantitative laser speckle flowmetry of the in vivo microcirculation using sidestream dark field microscopy

    PubMed Central

    Nadort, Annemarie; Woolthuis, Rutger G.; van Leeuwen, Ton G.; Faber, Dirk J.

    2013-01-01

We present integrated Laser Speckle Contrast Imaging (LSCI) and Sidestream Dark Field (SDF) flowmetry to provide real-time, non-invasive, and quantitative measurements of speckle decorrelation times related to microcirculatory flow. Using a multi-exposure acquisition scheme, precise speckle decorrelation times were obtained. Applying SDF-LSCI in vitro and in vivo allows direct comparison between speckle contrast decorrelation and flow velocities, while imaging the phantom and microcirculation architecture. This resulted in a novel analysis approach that distinguishes decorrelation due to flow from other additive decorrelation sources. PMID:24298399
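The multi-exposure principle can be sketched with a standard speckle-contrast model (illustrative only; not the paper's full SDF-LSCI analysis): for a decorrelation time tau_c, the contrast K falls off with exposure time T, so measuring K at several exposures constrains tau_c:

```python
import numpy as np

# Multi-exposure speckle contrast sketch.  For a single decorrelation time
# tau_c, a commonly used model gives K^2 = beta * (exp(-2x) - 1 + 2x) / (2 x^2)
# with x = T / tau_c.  Shown only to illustrate why acquiring at multiple
# exposure times constrains tau_c; not the paper's analysis pipeline.

def contrast(T, tau_c, beta=1.0):
    x = T / tau_c
    return np.sqrt(beta * (np.exp(-2 * x) - 1 + 2 * x) / (2 * x**2))

for T in (0.1e-3, 0.5e-3, 1e-3, 5e-3, 20e-3):      # exposure times, seconds
    print(f"T={1e3 * T:5.1f} ms  K={contrast(T, tau_c=1e-3):.3f}")
```

Fitting this curve over several exposures, rather than using a single exposure, is what yields a quantitative decorrelation time instead of a relative contrast map.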

  16. Specification and Design of Electrical Flight System Architectures with SysML

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L., Jr.; Jimenez, Alejandro

    2012-01-01

    Modern space flight systems are required to perform more complex functions than previous generations to support space missions. This demand is driving the trend to deploy more electronics to realize system functionality. The traditional approach for the specification, design, and deployment of electrical system architectures in space flight systems includes the use of informal definitions and descriptions that are often embedded within loosely coupled but highly interdependent design documents. Traditional methods become inefficient to cope with increasing system complexity, evolving requirements, and the ability to meet project budget and time constraints. Thus, there is a need for more rigorous methods to capture the relevant information about the electrical system architecture as the design evolves. In this work, we propose a model-centric approach to support the specification and design of electrical flight system architectures using the System Modeling Language (SysML). In our approach, we develop a domain specific language for specifying electrical system architectures, and we propose a design flow for the specification and design of electrical interfaces. Our approach is applied to a practical flight system.

  17. Accountable Information Flow for Java-Based Web Applications

    DTIC Science & Technology

    2010-01-01

Figure 2: The Swift architecture (runtime library, Swift server runtime, Java servlet framework, HTTP, Web server, Web browser) introduced an open-ended...On the server, the Java application code links against Swift’s server-side run-time library, which in turn sits on top of the standard Java servlet ...AFRL-RI-RS-TR-2010-9, Final Technical Report, January 2010: ACCOUNTABLE INFORMATION FLOW FOR JAVA-BASED WEB APPLICATIONS

  18. Neural architecture design based on extreme learning machine.

    PubMed

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons, and their corresponding interconnection weights. This problem has been widely studied, but the proposed solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network retains only those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
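The ELM principle underlying the method can be sketched in a few lines (an illustrative sketch on synthetic data, not the paper's architecture-design procedure): hidden-layer weights are drawn at random and fixed, and only the output weights are solved in closed form:

```python
import numpy as np

# Extreme Learning Machine sketch: hidden weights are random and fixed;
# only the output weights are solved in closed form (least squares).
# Illustrates the ELM principle, not the paper's MLP design procedure.

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # simple XOR-like labels
W, b, beta = elm_fit(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Because training reduces to one linear least-squares solve, evaluating many candidate architectures (varying inputs and hidden-unit counts, as the paper does) stays computationally cheap.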

  19. Solubilization of poorly water-soluble compounds using amphiphilic phospholipid polymers with different molecular architectures.

    PubMed

    Mu, Mingwei; Konno, Tomohiro; Inoue, Yuuki; Ishihara, Kazuhiko

    2017-10-01

To achieve stable and effective solubilization of poorly water-soluble bioactive compounds, water-soluble and amphiphilic polymers composed of hydrophilic 2-methacryloyloxyethyl phosphorylcholine (MPC) units and hydrophobic n-butyl methacrylate (BMA) units were prepared. MPC polymers having different molecular architectures, such as random-type monomer unit sequences and block-type sequences, formed polymer aggregates when they were dissolved in aqueous media. The structure of the random-type polymer aggregate was loose and flexible. On the other hand, the block-type polymer formed polymeric micelles, which were composed of very stable hydrophobic poly(BMA) cores and hydrophilic poly(MPC) shells. The solubilization of a poorly water-soluble bioactive compound, paclitaxel (PTX), in the polymer aggregates was observed; however, the solubilizing efficiency and stability depended strongly on the polymer architecture. In other words, PTX stayed in the poly(BMA) core of the polymeric micelle formed by the block-type polymer even when plasma protein was present in the aqueous medium, whereas when the random-type polymer was used, PTX was transferred from the polymer aggregate to the protein. We conclude that water-soluble and amphiphilic MPC polymers are good candidates as solubilizers for poorly water-soluble bioactive compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. His-Tag-Mediated Dimerization of Chemoreceptors Leads to Assembly of Functional Nanoarrays.

    PubMed

    Haglin, Elizabeth R; Yang, Wen; Briegel, Ariane; Thompson, Lynmarie K

    2017-11-07

    Transmembrane chemotaxis receptors are found in bacteria in extended hexagonal arrays stabilized by the membrane and by cytosolic binding partners, the kinase CheA and coupling protein CheW. Models of array architecture and assembly propose receptors cluster into trimers of dimers that associate with one CheA dimer and two CheW monomers to form the minimal "core unit" necessary for signal transduction. Reconstructing in vitro chemoreceptor ternary complexes that are homogeneous and functional and exhibit native architecture remains a challenge. Here we report that His-tag-mediated receptor dimerization with divalent metals is sufficient to drive assembly of nativelike functional arrays of a receptor cytoplasmic fragment. Our results indicate receptor dimerization initiates assembly and precedes formation of ternary complexes with partial kinase activity. Restoration of maximal kinase activity coincides with a shift to larger complexes, suggesting that kinase activity depends on interactions beyond the core unit. We hypothesize that achieving maximal activity requires building core units into hexagons and/or coalescing hexagons into the extended lattice. Overall, the minimally perturbing His-tag-mediated dimerization leads to assembly of chemoreceptor arrays with native architecture and thus serves as a powerful tool for studying the assembly and mechanism of this complex and other multiprotein complexes.

  1. A novel VLSI processor architecture for supercomputing arrays

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  2. Development and Flight Testing of an Autonomous Landing Gear Health-Monitoring System

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Coffey, Neil C.; Gonzalez, Guillermo A.; Taylor, B. Douglas; Brett, Rube R.; Woodman, Keith L.; Weathered, Brenton W.; Rollins, Courtney H.

    2003-01-01

Development and testing of an adaptable vehicle health-monitoring architecture is presented. The architecture is being developed for a fleet of vehicles. It has three operational levels: one or more remote data acquisition units located throughout the vehicle; a command and control unit located within the vehicle; and a terminal collection unit to collect analysis results from all vehicles. Each level is capable of performing autonomous analysis with a trained expert system. Communication between all levels is done with wireless radio frequency interfaces. The remote data acquisition unit has an eight-channel programmable digital interface that gives the user discretion in choosing the type of sensors, the number of sensors, the sensor sampling rate, and the sampling duration for each sensor. The architecture provides a framework for a tributary analysis. All measurements at the lowest operational level are reduced to provide analysis results necessary to gauge changes from established baselines. These are then collected at the next level to identify any global trends or common features from the prior level. This process is repeated until the results are reduced at the highest operational level. In this framework, only analysis results are forwarded to the next level, to reduce telemetry congestion. The system's remote data acquisition hardware and non-analysis software have been flight tested on the NASA Langley B757's main landing gear. The flight tests were performed to validate the following: the wireless radio frequency communication capabilities of the system; the hardware design; command and control; software operation; and data acquisition, storage, and retrieval.

  3. Engineering controllable architecture in matrigel for 3D cell alignment.

    PubMed

    Jang, Jae Myung; Tran, Si-Hoai-Trung; Na, Sang Cheol; Jeon, Noo Li

    2015-02-04

We report a microfluidic approach to impart alignment to ECM components in 3D hydrogels by continuously applying fluid flow across the bulk gel during the gelation process. The microfluidic device, in which each channel can be filled independently, was tilted at 90° to generate continuous flow across the Matrigel as it gelled. The presence of flow ensured that more than 70% of ECM components were oriented along the direction of flow, compared with randomly cross-linked Matrigel. Following the oriented ECM components, primary rat cortical neurons and mouse neural stem cells showed oriented outgrowth of neuronal processes within the 3D Matrigel matrix.

  4. Sandstone-body and shale-body dimensions in a braided fluvial system: Salt wash sandstone member (Morrison formation), Garfield County, Utah

    USGS Publications Warehouse

Robinson, J.W.; McCabe, P.J.

    1997-01-01

Excellent three-dimensional exposures of the Upper Jurassic Salt Wash Sandstone Member of the Morrison Formation in the Henry Mountains area of southern Utah allow measurement of the thickness and width of fluvial sandstone and shale bodies from extensive photomosaics. The Salt Wash Sandstone Member is composed of fluvial channel fill, abandoned channel fill, and overbank/flood-plain strata that were deposited on a broad alluvial plain of low-sinuosity, sandy, braided streams flowing northeast. A hierarchy of sandstone and shale bodies in the Salt Wash Sandstone Member includes, in ascending order, trough cross-bedding, fining-upward units/mudstone intraclast conglomerates, single-story sandstone bodies/basal conglomerate, abandoned channel fill, multistory sandstone bodies, and overbank/flood-plain heterolithic strata. Trough cross-beds have an average width:thickness ratio (W:T) of 8.5:1 in the lower interval of the Salt Wash Sandstone Member and 10.4:1 in the upper interval. Fining-upward units are 0.5-3.0 m thick and 3-11 m wide. Single-story sandstone bodies in the upper interval are wider and thicker than their counterparts in the lower interval, based on average W:T, linear regression analysis, and cumulative relative frequency graphs. Multistory sandstone bodies are composed of two to eight stories, range up to 30 m thick and over 1500 m wide (W:T > 50:1), and are also larger in the upper interval. Heterolithic units between sandstone bodies include abandoned channel fill (W:T = 33:1) and overbank/flood-plain deposits (W:T = 70:1).
Understanding W:T ratios from the component parts of an ancient, sandy, braided stream deposit can be applied in several ways to similar strata in other basins; for example, to (1) determine the width of a unit when only the thickness is known, (2) create correlation guidelines and maximum correlation lengths, (3) aid in interpreting the controls on fluvial architecture, and (4) place additional constraints on input variables to stratigraphic and fluid-flow modeling. The usefulness of these types of data demonstrates the need to develop more data sets from other depositional environments.
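As a worked example of application (1) above, a width estimate follows directly from a W:T ratio when only thickness is known; the values used here are taken from the abstract (multistory bodies, W:T of roughly 50:1):

```python
# The W:T ratios above can be applied directly: if only a body's thickness
# is known (e.g. from a well log), a representative width follows from the
# ratio.  Values here come from the abstract (multistory bodies, W:T > 50:1).

def estimated_width(thickness_m: float, w_t_ratio: float) -> float:
    return thickness_m * w_t_ratio

print(estimated_width(30.0, 50.0))   # a 30 m multistory body -> ~1500 m wide
```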

  5. 59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

59. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-6 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 4-A, 5TH & 5-A. DRAWING NO. AL-6 - SHEET 7 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  6. Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities

    DTIC Science & Technology

    1993-09-01

Computer Disaster Recovery ... PC and LAN Lessons Learned ... Distributed Architectures ... Backups ... "amount of expense, but no client problems." (Leeke, 1993, p. 8) ... Distributed Architectures: The majority of operations that were disrupted by the

  7. Computer Architecture for Energy Efficient SFQ

    DTIC Science & Technology

    2014-08-27

IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598 ... ABSTRACT: Number of Papers published in peer...accomplished during this ARO-sponsored project at IBM Research to identify and model an energy efficient SFQ-based computer architecture. The IBM Windsor Blue (WB), illustrated schematically in Figure 2. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit

  8. A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU

    NASA Astrophysics Data System (ADS)

    Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha

    2018-03-01

Since the Graphics Processing Unit (GPU) offers strong floating-point computing power and memory bandwidth for data parallelism, it has been widely used in areas of general-purpose computing such as molecular dynamics (MD) and computational fluid dynamics (CFD). The emergence of the Compute Unified Device Architecture (CUDA), which reduces programming complexity, brings great opportunities to CFD. There are three different modes for parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, while CPUs are the opposite. To make full use of both the GPUs and CPUs, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver’s computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow, but can still reach more than 20. What’s more, the speedup increases as the grid size becomes larger.
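One simple way to realize CPU-GPU collaboration like that described above is a static domain partition proportional to measured device throughput. The sketch below is illustrative only; the device names and the ~40x rate ratio are loosely based on the laminar-flow speedup quoted in the abstract, not on the solver itself:

```python
# Static load-balancing sketch for a heterogeneous CPU+GPU solver: split the
# grid so each device's share is proportional to its measured throughput
# (cells per second).  Device names and rates are illustrative assumptions.

def partition(n_cells, rates):
    """Divide n_cells among devices proportionally to their throughput."""
    total = sum(rates.values())
    shares = {dev: int(n_cells * r / total) for dev, r in rates.items()}
    # hand any rounding remainder to the fastest device
    fastest = max(rates, key=rates.get)
    shares[fastest] += n_cells - sum(shares.values())
    return shares

rates = {"gpu": 40.0, "cpu": 1.0}     # e.g. GPU ~40x CPU for laminar flow
shares = partition(1_000_000, rates)
print(shares)
```

With shares balanced this way, both devices finish a time step at roughly the same time, which is the condition for the heterogeneous solver to beat the GPU running alone.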

  9. Advanced I&C for Fault-Tolerant Supervisory Control of Small Modular Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Daniel G.

In this research, we have developed a supervisory control approach to enable automated control of SMRs. By design, the supervisory control system has a hierarchical, interconnected, adaptive control architecture. A considerable advantage of this architecture is that it allows subsystems to communicate at different/finer granularity, facilitates monitoring of processes at the modular and plant levels, and enables supervisory control. We have investigated the deployment of automation, monitoring, and data collection technologies to enable operation of multiple SMRs. Each unit's controller collects and transfers information from local loops and optimizes that unit's parameters. Information is passed from each SMR unit controller to the supervisory controller, which supervises the actions of the SMR units and manages plant processes. The information processed at the supervisory level provides operators with the information needed for reactor, unit, and plant operation. In conjunction with the supervisory effort, we have investigated techniques for fault-tolerant networks, over which information is transmitted between local loops and the supervisory controller to maintain a safe level of operational normalcy in the presence of anomalies. The fault tolerance of the supervisory control architecture, the network that supports it, and the impact of fault tolerance on multi-unit SMR plant control have been a second focus of this research. To this end, we have investigated the deployment of advanced automation, monitoring, and data collection and communications technologies to enable operation of multiple SMRs. We have created a fault-tolerant multi-unit SMR supervisory controller that collects and transfers information from local loops, supervises their actions, and adaptively optimizes the controller parameters. The goal of this research has been to develop the methodologies and procedures for fault-tolerant supervisory control of small modular reactors.
To achieve this goal, we have identified the following objectives, which form an ordered approach to the research: I) development of a supervisory digital I&C system; II) fault tolerance of the supervisory control architecture; III) automated decision making and online monitoring.

  10. Flow-Control Unit For Nitrogen And Hydrogen Gases

    NASA Technical Reports Server (NTRS)

    Chang, B. J.; Novak, D. W.

    1990-01-01

Gas-flow-control unit installed and removed as one piece replaces system that included nine separately serviced components. Unit controls and monitors flows of nitrogen and hydrogen gases. Designed for connection via fluid-interface manifold plate, reducing number of mechanical fluid-interface connections from 18 to 1. Unit provides increased reliability, safety, and ease of maintenance, and reduces weight, volume, and power consumption.

  11. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    NASA Astrophysics Data System (ADS)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. 
The Cedar Creek watershed (CCW) in northeastern Indiana, USA was selected for application of the OMS3-based AgroEcoSystem-Watershed (AgES-W) model. AgES-W performance for stream flow and N loading was assessed using Nash-Sutcliffe model efficiency (ENS) and percent bias (PBIAS) model evaluation statistics. Comparisons of daily and average monthly simulated and observed stream flow and N loads for the 1997-2005 simulation period resulted in PBIAS and ENS values that were similar or better than those reported in the literature for SWAT stream flow and N loading predictions at a similar scale. The results show that the AgES-W model was able to reproduce the hydrological and N dynamics of the CCW with sufficient quality, and should serve as a foundation upon which to better quantify additional water quality indicators (e.g., sediment transport and P dynamics) at the watershed scale.
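The Nash-Sutcliffe efficiency (ENS) and percent bias (PBIAS) statistics used to evaluate AgES-W are standard, well-defined measures, and can be computed as below. The observed/simulated values shown are illustrative placeholders, not CCW data; note that the sign convention for PBIAS (positive = underestimation) varies across the literature.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; 0 means the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def pbias(obs, sim):
    """Percent bias under the common convention
    PBIAS = 100 * sum(obs - sim) / sum(obs); positive values indicate
    the model underestimates on average."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

# Illustrative daily stream flow values (not Cedar Creek data):
obs = [10.0, 12.0, 8.0, 15.0]
sim = [9.0, 13.0, 8.5, 14.0]
```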

  12. NASA Lewis Steady-State Heat Pipe Code Architecture

    NASA Technical Reports Server (NTRS)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options, as well as user-defined options, can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operation in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained, and flowcharts of the key subroutines are given.

  13. Particle Laden Turbulence in a Radiation Environment Using a Portable High Performance Solver Based on the Legion Runtime System

    NASA Astrophysics Data System (ADS)

    Torres, Hilario; Iaccarino, Gianluca

    2017-11-01

    Soleil-X is a multi-physics solver being developed at Stanford University as a part of the Predictive Science Academic Alliance Program II. Our goal is to conduct high-fidelity simulations of particle-laden turbulent flows in a radiation environment for solar energy receiver applications, as well as to demonstrate our readiness to effectively utilize next-generation Exascale machines. The novel aspect of Soleil-X is that it is built upon the Legion runtime system to enable easy portability to different parallel distributed heterogeneous architectures while being written entirely in high-level/high-productivity languages (Ebb and Regent). An overview of the Soleil-X software architecture will be given. Results from coupled fluid flow, Lagrangian point-particle tracking, and thermal radiation simulations will be presented. Performance diagnostic tools and metrics corresponding to the same cases will also be discussed. US Department of Energy, National Nuclear Security Administration.

  14. SDN-controlled topology-reconfigurable optical mobile fronthaul architecture for bidirectional CoMP and low latency inter-cell D2D in the 5G mobile era.

    PubMed

    Cvijetic, Neda; Tanaka, Akihiro; Kanonakis, Konstantinos; Wang, Ting

    2014-08-25

    We demonstrate the first SDN-controlled optical topology-reconfigurable mobile fronthaul (MFH) architecture for bidirectional coordinated multipoint (CoMP) and low latency inter-cell device-to-device (D2D) connectivity in the 5G mobile networking era. SDN-based OpenFlow control is used to dynamically instantiate the CoMP and inter-cell D2D features as match/action combinations in control plane flow tables of software-defined optical and electrical switching elements. Dynamic re-configurability is thereby introduced into the optical MFH topology, while maintaining back-compatibility with legacy fiber deployments. 10 Gb/s peak rates with <7 μs back-to-back transmission latency and 29.6 dB total power budget are experimentally demonstrated, confirming the attractiveness of the new approach for optical MFH of future 5G mobile systems.
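The abstract's "match/action combinations in control plane flow tables" follow the general OpenFlow pattern: rules with priorities match on header fields and yield a forwarding action, with a table-miss default. The toy sketch below illustrates only that pattern; the field names, ports, and rules are hypothetical and do not reflect the paper's actual configuration or the OpenFlow wire protocol.

```python
class FlowTable:
    """Toy match/action table: the highest-priority rule whose match
    fields all equal the packet's header fields decides the action."""
    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action)

    def install(self, priority, match, action):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def lookup(self, pkt):
        for _, match, action in self.rules:
            if all(pkt.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss default

table = FlowTable()
# Hypothetical rules: inter-cell D2D traffic is switched locally to cut
# latency; CoMP traffic is steered to a joint baseband processing port.
table.install(20, {"src_cell": "A", "dst_cell": "B"}, "forward:local")
table.install(10, {"dst": "baseband_pool"}, "forward:port3")
```

An SDN controller dynamically installs or removes such rules, which is what makes the fronthaul topology reconfigurable without touching the fiber plant.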

  15. Principles of Biomimetic Vascular Network Design Applied to a Tissue-Engineered Liver Scaffold

    PubMed Central

    Hoganson, David M.; Pryor, Howard I.; Spool, Ira D.; Burns, Owen H.; Gilmore, J. Randall

    2010-01-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow. PMID:20001254

  16. Principles of biomimetic vascular network design applied to a tissue-engineered liver scaffold.

    PubMed

    Hoganson, David M; Pryor, Howard I; Spool, Ira D; Burns, Owen H; Gilmore, J Randall; Vacanti, Joseph P

    2010-05-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow.

  17. Lava flow field emplacement studies of Mauna Ulu (Kilauea Volcano, Hawai'i, United States) and Venus, using field and remote sensing analyses

    NASA Astrophysics Data System (ADS)

    Byrnes, Jeffrey Myer

    2002-04-01

    This work examines lava emplacement processes by characterizing surface units using field and remote sensing analyses in order to understand the development of lava flow fields. Specific study areas are the 1969-1974 Mauna Ulu compound flow field (Kilauea Volcano, Hawai'i, USA) and five lava flow fields on Venus: Turgmam Fluctus, Zipaltonal Fluctus, the Tuli Mons/Uilata Fluctus flow complex, the Var Mons flow field, and Mylitta Fluctus. Lava surface units have been examined in the field and with visible-, thermal-, and radar-wavelength remote sensing datasets for Mauna Ulu, and with radar data for the Venusian study areas. For the Mauna Ulu flow field, visible characteristics are related to color, glass abundance, and dm- to m-scale surface irregularities, which reflect the lava flow regime, cooling, and modification due to processes such as coalescence and inflation. Thermal characteristics are primarily affected by the abundance of glass and small-scale roughness elements (such as vesicles), and reflect the history of cooling, vesiculation and degassing, and crystallization of the lava. Radar characteristics are primarily affected by unit topography and fracturing, which are related to flow inflation, remobilization, and collapse, and reflect the local supply of lava during and after unit emplacement. Mauna Ulu surface units are correlated with pre-eruption topography, lack a simple relationship to the main feeder lava tubes, and are distributed with respect to their position within compound flow lobes and with distance from the vent. The Venusian lava flow fields appear to have developed through emplacement of numerous, thin, simple and compound flows, presumably over extended periods of time, and show a wider range of radar roughness than is observed at Mauna Ulu. A potential correlation is suggested between flow rheology and surface roughness.
Distributary flow morphologies may result from tube-fed flows, and flow inflation is consistent with observed surface characteristics. Furthermore, the significance of inflation at Mauna Ulu and comparison of radar characteristics indicates that inflation may, in fact, be more prevalent on Venus than at Mauna Ulu. Although the Venusian flow fields display morphologies similar to those observed within terrestrial flow fields, the Venusian flow units are significantly larger.

  18. A new method to synthesize complicated multi-branched carbon nanotubes with controlled architecture and composition.

    PubMed

    Wei, Dacheng; Liu, Yunqi; Cao, Lingchao; Fu, Lei; Li, Xianglong; Wang, Yu; Yu, Gui; Zhu, Daoben

    2006-02-01

    Here we develop a simple method using flow fluctuation to synthesize arrays of multi-branched carbon nanotubes (CNTs) that are far more complex than those previously reported. The architectures and compositions can be well controlled without any template or additive. A branching mechanism of fluctuation-promoted coalescence of catalyst particles is proposed. This finding provides a promising approach toward the goal of CNT-based integrated circuits and will be valuable for applying branched junctions in nanoelectronics and for producing branched junctions of other materials.

  19. Neural network river forecasting through baseflow separation and binary-coded swarm optimization

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie

    2015-10-01

    The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are a preferred way to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs separately fit the baseflow and excess-flow components produced by a digital filter, and reconstruct the total flow by adding the two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested on only a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required to perform the experiments. The results show no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters that maximize overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
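The "digital filter" separating baseflow from excess flow is typically a one-parameter recursive filter of the Lyne-Hollick type, sketched below. The filter constant alpha (commonly around 0.925 for daily data) and the example hydrograph are illustrative; the paper's actual filter configuration may differ.

```python
def baseflow_separation(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick type):
    splits total streamflow q into quickflow (excess flow) and baseflow.
    The quickflow signal is clamped so baseflow stays within [0, q]."""
    qf = [0.0] * len(q)  # quickflow component
    for t in range(1, len(q)):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        qf[t] = min(max(qf[t], 0.0), q[t])  # keep baseflow physical
    # baseflow = total flow - quickflow
    return [qt - ft for qt, ft in zip(q, qf)]

# Hypothetical daily hydrograph with a storm peak on day 3:
q = [5.0, 5.0, 30.0, 18.0, 10.0, 7.0, 6.0]
bf = baseflow_separation(q)
```

In the MM approach described above, one network is fitted to `bf` and another to `q - bf`, and alpha itself is a decision variable of the global search, which is exactly why the optimized value can drift away from geologically meaningful settings.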

  20. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can the simulation coming from a large-size data model be used for architectural design? The topic relates to the phase that usually comes after the acquisition of the data, during the construction of the model and especially afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that can be part of the large current debate about the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which the different specialist actors, the client, and the final users can share knowledge, targets, and constraints to better achieve the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be implemented further; on the other hand, it represents an attempt to understand large constructed architecture simulation as a way of life, a way of being in time and space. The architectural design first, and the architectural fact afterwards, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration, one that can also be reconfigured through design.

  1. Formation of Monocrystalline 1D and 2D Architectures via Epitaxial Attachment: Bottom-Up Routes through Surfactant-Mediated Arrays of Oriented Nanocrystals.

    PubMed

    Nakagawa, Yoshitaka; Kageyama, Hiroyuki; Oaki, Yuya; Imai, Hiroaki

    2015-06-09

    Monocrystalline architectures with well-defined shapes were achieved by bottom-up routes through epitaxial attachment of Mn3O4 nanocrystals. The crystallographically continuous 1D chains elongated along the a axis and 2D panels having large a or c faces were obtained by removal of the organic mediator from surfactant-mediated 1D and 2D arrays of Mn3O4 nanocrystals, respectively. Our approach indicates that epitaxial attachment through surfactant-mediated arrays can be utilized for the fabrication of a wide variety of micrometric architectures from nanometric crystalline units.

  2. Physical properties of lava flows on the southwest flank of Tyrrhena Patera, Mars

    NASA Technical Reports Server (NTRS)

    Crown, David A.; Porter, Tracy K.; Greeley, Ronald

    1991-01-01

    Tyrrhena Patera (TP) (22 degrees S, 253.5 degrees W), a large, low-relief volcano located in the ancient southern highlands of Mars, is one of four highland paterae thought to be structurally associated with the Hellas basin. The highland paterae are Hesperian in age and among the oldest central vent volcanoes on Mars. The morphology and distribution of units in the eroded shield of TP are consistent with the emplacement of pyroclastic flows. A large flank unit extending from TP to the SW contains well-defined lava flow lobes and leveed channels. This flank unit is the first definitive evidence of effusive volcanic activity associated with the highland paterae and may include the best preserved lava flows observed in the Southern Hemisphere of Mars. Flank flow unit averages, channelized flow, flow thickness, and yield strength estimates are discussed. Analysis suggests the temporal evolution of Martian magmas.

  3. Revealing flow behaviors of metallic glass based on activation of flow units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, T. P.; Wang, W. H.; Bai, H. Y., E-mail: hybai@iphy.ac.cn

    2016-05-28

    Atomic-level flow plays a critical role in the mechanical behavior of metallic glasses (MGs), while the connection between the flow and the heterogeneous microstructure of the glass remains unclear. We describe the heterogeneity of MGs as an elastic matrix with "inclusions" of nano-scale liquid-like flow units, and the plastic flow behavior of MGs is considered to be accommodated by these flow units. We show that the model can explain the various deformation behaviors, the transformation from inhomogeneous deformation to homogeneous flow with strain rate or temperature, and the deformation map of MGs, which might provide insights into the flow mechanisms in glasses and inspiration for improving the plasticity of MGs.

  4. Knowledge Innovation System: The Common Language.

    ERIC Educational Resources Information Center

    Rogers, Debra M. Amidon

    1993-01-01

    The Knowledge Innovation System is a management technique in which a networked enterprise uses knowledge flow as a collaborative advantage. Enterprise Management System-Architecture, which can be applied to collaborative activities, has five domains: economic, sociological, psychological, managerial, and technological. (SK)

  5. Influence of Additive Manufactured Scaffold Architecture on the Distribution of Surface Strains and Fluid Flow Shear Stresses and Expected Osteochondral Cell Differentiation.

    PubMed

    Hendrikson, Wim J; Deegan, Anthony J; Yang, Ying; van Blitterswijk, Clemens A; Verdonschot, Nico; Moroni, Lorenzo; Rouwkema, Jeroen

    2017-01-01

    Scaffolds for regenerative medicine applications should instruct cells with the appropriate signals, including biophysical stimuli such as stress and strain, to form the desired tissue. Apart from that, scaffolds, especially for load-bearing applications, should be capable of providing mechanical stability. Since both scaffold strength and stress-strain distributions throughout the scaffold depend on the scaffold's internal architecture, it is important to understand how changes in architecture influence these parameters. In this study, four scaffold designs with different architectures were produced using additive manufacturing. The designs varied in fiber orientation, while fiber diameter, spacing, and layer height remained constant. Based on micro-CT (μCT) scans, finite element models (FEMs) were derived for finite element analysis (FEA) and computational fluid dynamics (CFD). FEA of scaffold compression was validated using μCT scan data of compressed scaffolds. Results of the FEA and CFD showed a significant impact of scaffold architecture on fluid shear stress and mechanical strain distribution. The average fluid shear stress ranged from 3.6 mPa for a 0/90 architecture to 6.8 mPa for a 0/90 offset architecture, and the surface shear strain from 0.0096 for a 0/90 offset architecture to 0.0214 for a 0/90 architecture. This subsequently resulted in variations of the predicted cell differentiation stimulus values on the scaffold surface. Fluid shear stress was mainly influenced by pore shape and size, while mechanical strain distribution depended mainly on the presence or absence of supportive columns in the scaffold architecture. Together, these results corroborate that scaffold architecture can be exploited to design scaffolds with regions that guide specific tissue development under compression and perfusion. 
In conjunction with optimization of stimulation regimes during bioreactor cultures, scaffold architecture optimization can be used to improve scaffold design for tissue engineering purposes.

  6. Influence of Additive Manufactured Scaffold Architecture on the Distribution of Surface Strains and Fluid Flow Shear Stresses and Expected Osteochondral Cell Differentiation

    PubMed Central

    Hendrikson, Wim J.; Deegan, Anthony J.; Yang, Ying; van Blitterswijk, Clemens A.; Verdonschot, Nico; Moroni, Lorenzo; Rouwkema, Jeroen

    2017-01-01

    Scaffolds for regenerative medicine applications should instruct cells with the appropriate signals, including biophysical stimuli such as stress and strain, to form the desired tissue. Apart from that, scaffolds, especially for load-bearing applications, should be capable of providing mechanical stability. Since both scaffold strength and stress–strain distributions throughout the scaffold depend on the scaffold’s internal architecture, it is important to understand how changes in architecture influence these parameters. In this study, four scaffold designs with different architectures were produced using additive manufacturing. The designs varied in fiber orientation, while fiber diameter, spacing, and layer height remained constant. Based on micro-CT (μCT) scans, finite element models (FEMs) were derived for finite element analysis (FEA) and computational fluid dynamics (CFD). FEA of scaffold compression was validated using μCT scan data of compressed scaffolds. Results of the FEA and CFD showed a significant impact of scaffold architecture on fluid shear stress and mechanical strain distribution. The average fluid shear stress ranged from 3.6 mPa for a 0/90 architecture to 6.8 mPa for a 0/90 offset architecture, and the surface shear strain from 0.0096 for a 0/90 offset architecture to 0.0214 for a 0/90 architecture. This subsequently resulted in variations of the predicted cell differentiation stimulus values on the scaffold surface. Fluid shear stress was mainly influenced by pore shape and size, while mechanical strain distribution depended mainly on the presence or absence of supportive columns in the scaffold architecture. Together, these results corroborate that scaffold architecture can be exploited to design scaffolds with regions that guide specific tissue development under compression and perfusion. 
In conjunction with optimization of stimulation regimes during bioreactor cultures, scaffold architecture optimization can be used to improve scaffold design for tissue engineering purposes. PMID:28239606

  7. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning.
The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event filtering mechanisms supporting the monitoring of LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
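The core of such an event filtering mechanism is subscription-based: each consumer registers a predicate, and only matching events are forwarded, so unwanted traffic never leaves the source. The sketch below is a minimal illustration of that idea; the class and field names are hypothetical and do not come from the paper.

```python
class EventFilter:
    """Toy subscription-based event filter: subscribers register a
    predicate, and only events satisfying it are forwarded to them,
    reducing monitoring traffic at the source."""
    def __init__(self):
        self.subs = []  # list of (predicate, sink)

    def subscribe(self, predicate, sink):
        self.subs.append((predicate, sink))

    def publish(self, event):
        for predicate, sink in self.subs:
            if predicate(event):
                sink.append(event)

f = EventFilter()
errors = []
# A debugging tool subscribes only to error-severity events:
f.subscribe(lambda e: e["severity"] == "error", errors)
f.publish({"severity": "info", "msg": "heartbeat"})
f.publish({"severity": "error", "msg": "node 7 timeout"})
```

In a distributed setting the predicates would be pushed out to, and evaluated at, the instrumented components themselves, which is what keeps the filtering low-intrusiveness.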

  8. Depositional architecture and evolution of the Late Miocene slope channel-fan-system in the northeastern shelf-margin of South China Sea

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Lin, Changsong; Zhang, Zhongtao; Tian, Hongxun; Tao, Ze; Liu, Hanyao

    2016-04-01

    The Upper Miocene in the Pearl River Mouth Basin of northwestern shelf-margin of South China Sea Basin contains a series of slope channel - fan systems. Their depositional architecture and evolution are documented in this investigation based on an integrated analysis of cores, logs, and seismic data. Four depositional-palaeogeomorphological elements have been identified in the slope channel-fan systems as follows: broad, shallow and unconfined or partly confined outer-shelf to shelf-break channels; deeply incised and confined unidirectionally migrating slope channels; broad or U-shaped, unconfined erosional-depositional channels; frontal splays-lobes and nonchannelized sheets. The slope channels are mostly oriented NW-SE, which migrated unidirectionally northeastwards and intensively eroded almost the whole shelf-slope zone. The channel infillings are mainly mudstones, interbedded with siltstones. They might be formed by gravity flow erosion as bypassing channels. They were filled with limited gravity flow sediments at the base and mostly filled with lateral accretionary packages of bottom current deposits. At the end of the channels, a series of small-scale slope fans developed and coalesced into fan aprons along the base of the slope. The unconfined erosional-depositional channels at the upper parts of the fan-apron-systems display compound infill patterns, and commonly have concave erosional bases and convex tops. The frontal splays-lobes representing middle to distal deposits of fan-apron-systems have flat-mounded or gull-wing geometries, and the internal architectures include bidirectional downlap, progradation, and chaotic infillings. The distal nonchannelized turbidite sheets are characterized by thin-bedded, parallel to sub-parallel sheet-like geometries. Three major unconformities or obvious erosional surfaces in the channel-fan systems of the Upper Miocene are recognized, and indicate the falling of sea-level. 
The depositional architecture of the sequences varies from the upper slope to the slope base and the transition to the basin plain. The basal erosion and unidirectional migration of the slope channels are interpreted as the result of the interaction of bottom currents and gravity flows. The intensive development of the channel-fan systems over the shelf slope might be related to the Dongsha tectonic uplift, which may have produced a stepped slope and concomitantly intensified gravity flows in the study area in the Late Miocene.

  9. The computational structural mechanics testbed architecture. Volume 3: The interface

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the third of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 3 describes the CLIP-Processor interface and related topics. It is intended only for processor developers.

  10. The computational structural mechanics testbed architecture. Volume 1: The language

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the first of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP, and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 1 presents the basic elements of the CLAMP language and is intended for all users.

  11. Interband cascade (IC) photovoltaic (PV) architecture for PV devices

    DOEpatents

    Yang, Rui Q.; Tian, Zhaobing; Mishima, Tetsuya D.; Santos, Michael B.; Johnson, Matthew B.; Klem, John F.

    2015-10-20

    A photovoltaic (PV) device, comprising a PV interband cascade (IC) stage, wherein the IC PV stage comprises an absorption region with a band gap, the absorption region configured to absorb photons, an intraband transport region configured to act as a hole barrier, and an interband tunneling region configured to act as an electron barrier. An IC PV architecture for a photovoltaic device, the IC PV architecture comprising an absorption region, an intraband transport region coupled to the absorption region, and an interband tunneling region coupled to the intraband transport region and to the adjacent absorption region, wherein the absorption region, the intraband transport region, and the interband tunneling region are positioned such that electrons will flow from the absorption region to the intraband transport region to the interband tunneling region.

  12. The computational structural mechanics testbed architecture. Volume 2: Directives

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1989-01-01

    This is the second of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language (CLAMP), the command language interpreter (CLIP), and the data manager (GAL). Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 2 describes the CLIP directives in detail. It is intended for intermediate and advanced users.

  13. Pursuing realistic hydrologic model under SUPERFLEX framework in a semi-humid catchment in China

    NASA Astrophysics Data System (ADS)

    Wei, Lingna; Savenije, Hubert H. G.; Gao, Hongkai; Chen, Xi

    2016-04-01

    Model realism is perpetually pursued by hydrologists for flood and drought prediction, integrated water resources management, and decision support for water security. "Physics-based" distributed hydrologic models are developing rapidly, but they face non-negligible challenges, for instance high computational cost and parameter uncertainty. This study tested, step by step, four conceptual hydrologic models under the SUPERFLEX framework in a small semi-humid catchment in the southern Huai River basin of China. The original lumped FLEXL hypothesizes a model structure of four reservoirs representing canopy interception, the unsaturated zone, fast and slow subsurface flow components, and base-flow storage. To account for spatially uneven rainfall, the second model (FLEXD) applies the same parameter set to separate units controlled by different rain gauges. To reveal the effect of topography, the terrain descriptor height above the nearest drainage (HAND), combined with slope, is applied to classify the experimental catchment into two landscapes. The third model (FLEXTOPO) then builds a different model block for each landscape, reflecting the dominant hydrologic process under its topographic conditions. The fourth model, FLEXTOPOD, integrates the parallel framework of FLEXTOPO across the four gauge-controlled units to capture the spatial variability of rainfall patterns and topographic features. Through pairwise comparison, our results suggest that: (1) the semi-distributed models (FLEXD and FLEXTOPOD), which account for the spatial heterogeneity of precipitation, improved model performance with a parsimonious parameter set; and (2) a hydrologic model architecture flexible enough to reflect perceived dominant hydrologic processes can incorporate the local terrain circumstances of each landscape. The modeling choices thus coincide with catchment behaviour and come closer to "reality". 
The presented methodology regards the hydrologic model as a tool to test hypotheses and deepen our understanding of hydrologic processes, which should help improve modeling realism.
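    The reservoir structure described above can be illustrated with a minimal lumped "bucket" sketch in the spirit of the FLEX family (our illustration, not the authors' FLEXL implementation): an unsaturated-zone store partitions rain into fast runoff and percolation, and two linear reservoirs route the fast and slow responses. All parameter names and values (s_max, beta, k_fast, k_slow, perc_rate) are illustrative assumptions.

```python
# Minimal conceptual rainfall-runoff sketch: three storages, mass-conserving.
# Not the authors' model; parameters and structure are illustrative only.

def flex_step(state, rain, s_max=50.0, beta=2.0,
              k_fast=0.3, k_slow=0.02, perc_rate=0.1):
    """Advance the three stores one timestep; return (new_state, discharge)."""
    su, sf, ss = state                         # unsaturated, fast, slow storage
    sat_frac = min(1.0, (su / s_max) ** beta)  # saturated-area fraction
    to_fast = rain * sat_frac                  # rain on saturated area runs off
    su = min(s_max, su + rain - to_fast)       # the rest infiltrates
    perc = perc_rate * su                      # percolation to the slow store
    su -= perc
    sf, ss = sf + to_fast, ss + perc
    qf, qs = k_fast * sf, k_slow * ss          # linear-reservoir outflows
    return (su, sf - qf, ss - qs), qf + qs

# Drive the model with a short synthetic storm and accumulate discharge.
state, total_q = (10.0, 0.0, 0.0), 0.0
rain_series = [0, 12, 20, 5, 0, 0, 0, 0]
for r in rain_series:
    state, q = flex_step(state, r)
    total_q += q
```

A useful property of such a sketch is that the water balance closes exactly: total rainfall equals the change in total storage plus cumulative discharge, which is a standard sanity check when building conceptual models.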

  14. Spot test kit for explosives detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pagoria, Philip F; Whipple, Richard E; Nunes, Peter J

    An explosion tester system comprising a body, a lateral flow membrane swab unit adapted to be removeably connected to the body, a first explosives detecting reagent, a first reagent holder and dispenser operatively connected to the body, the first reagent holder and dispenser containing the first explosives detecting reagent and positioned to deliver the first explosives detecting reagent to the lateral flow membrane swab unit when the lateral flow membrane swab unit is connected to the body, a second explosives detecting reagent, and a second reagent holder and dispenser operatively connected to the body, the second reagent holder and dispenser containing the second explosives detecting reagent and positioned to deliver the second explosives detecting reagent to the lateral flow membrane swab unit when the lateral flow membrane swab unit is connected to the body.

  15. Branched terthiophenes in organic electronics: from small molecules to polymers.

    PubMed

    Scheuble, Martin; Goll, Miriam; Ludwigs, Sabine

    2015-01-01

    A zoo of chemical structures becomes accessible when the branched unit 2,2':3',2″-terthiophene (3T) is included both in structurally well-defined small molecules and in polymer-like architectures. The first part of this review highlights the literature on all-thiophene branched oligomers, including dendrimers, as well as combinations of 3T units with functional moieties for light-harvesting systems. Motivated by the perfectly branched macromolecular dendrimers, both electropolymerization and chemical approaches are presented as methods for the preparation of branched polythiophenes with different branching densities. Structure-function relationships between the molecular architecture and the optical and electronic properties are discussed throughout the article. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Model of a programmable quantum processing unit based on a quantum transistor effect

    NASA Astrophysics Data System (ADS)

    Ablayev, Farid; Andrianov, Sergey; Fetisov, Danila; Moiseev, Sergey; Terentyev, Alexandr; Urmanchev, Andrey; Vasiliev, Alexander

    2018-02-01

    In this paper we propose a model of a programmable quantum processing device realizable with existing nano-photonic technologies. It can be viewed as a basis for new high-performance hardware architectures. Protocols for the physical implementation of the device, based on controlled photon transfer and atomic transitions, are presented. These protocols are designed for executing basic single-qubit and multi-qubit gates forming a universal set. We analyze the possible operation of this quantum computer scheme. We then formalize the physical architecture with a mathematical model of a Quantum Processing Unit (QPU), which we use as a basis for the Quantum Programming Framework. This framework makes it possible to perform universal quantum computations in a multitasking environment.
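    The "universal gate set" idea can be made concrete with a toy state-vector simulation (our illustration, not the photonic protocol itself): a Hadamard on qubit 0 followed by a CNOT turns |00⟩ into the entangled Bell state (|00⟩ + |11⟩)/√2, and compositions of such one- and two-qubit gates suffice for universal computation.

```python
# Toy state-vector gate application; qubit 0 is the least significant bit.
import math

def apply_1q(state, gate, qubit):
    """Apply a 2x2 gate (list of rows) to one qubit of an n-qubit state."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        b = (i >> qubit) & 1                 # current value of the target bit
        base = i & ~(1 << qubit)
        for nb in (0, 1):                    # distribute amplitude to outputs
            new[base | (nb << qubit)] += gate[nb][b] * amp
    return new

def apply_cnot(state, control, target):
    """Flip the target bit wherever the control bit is set."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
bell = apply_cnot(apply_1q([1 + 0j, 0j, 0j, 0j], H, 0), control=0, target=1)
```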

  17. Multi-processor including data flow accelerator module

    DOEpatents

    Davidson, George S.; Pierce, Paul E.

    1990-01-01

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
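    The tagged-memory firing rule described above can be sketched in a few lines (a toy illustration, not the patented design): each cell holds operand slots with presence tags, a primitive identifier, and linking slots naming destination cells, and a cell is queued for execution only when every operand slot is filled.

```python
# Toy dataflow interpreter: cells fire when all operand tags are set, and
# results are distributed to other cells through linking slots.
from collections import deque

PRIMITIVES = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class Cell:
    def __init__(self, prim, n_ops, links):
        self.prim = prim              # which primitive this cell computes
        self.ops = [None] * n_ops     # operand slots; None = tag not set
        self.links = links            # [(dest_cell_id, dest_slot), ...]

def run(cells, initial):
    ready = deque()                   # execution queue of fireable cells
    def deliver(cid, slot, value):
        c = cells[cid]
        c.ops[slot] = value
        if all(v is not None for v in c.ops):   # firing rule: all tags set
            ready.append(cid)
    for cid, slot, val in initial:    # inject initial operands (graph inputs)
        deliver(cid, slot, val)
    results = {}
    while ready:
        cid = ready.popleft()
        c = cells[cid]
        results[cid] = PRIMITIVES[c.prim](*c.ops)
        for dest, slot in c.links:    # distribute the result via linking slots
            deliver(dest, slot, results[cid])
    return results

# (2 + 3) * 4 as a two-node data dependency graph: cell 0 feeds cell 1.
cells = {0: Cell("add", 2, [(1, 0)]), 1: Cell("mul", 2, [])}
out = run(cells, [(0, 0, 2), (0, 1, 3), (1, 1, 4)])
```

Note how no central program counter sequences the two cells: the order of execution emerges from operand availability, which is the essence of the dataflow mode the abstract contrasts with sequential operation.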

  18. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization of the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction of the flow variables is performed, which is second-order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method with second-order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. The solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors, and distributed-memory multicomputers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements and to deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant-section wing at subsonic, transonic, and supersonic conditions. These results are compared with experimental data and numerical results of other researchers. Performance studies are under way for a variety of network topologies.
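    The time-stepping scheme mentioned above can be sketched on a scalar model problem. The three stage coefficients used here (1/3, 1/2, 1) are a standard low-storage choice in CFD, assumed for illustration rather than taken from the report; the scheme is second-order accurate for general nonlinear problems.

```python
# Explicit three-stage Runge-Kutta sketch on dy/dt = -y; each stage restarts
# from the step's initial value, so only one extra state vector is needed.
import math

def rk3_step(f, y, t, dt, alphas=(1/3, 1/2, 1.0)):
    y_stage = y
    for a in alphas:                       # successive stages, final a = 1.0
        y_stage = y + a * dt * f(t + a * dt, y_stage)
    return y_stage

def integrate(f, y0, t0, t1, n):
    dt, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y = rk3_step(f, y, t, dt)
        t += dt
    return y

# Integrate y' = -y from y(0) = 1 to t = 1; exact answer is exp(-1).
y_num = integrate(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
```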

  19. 57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    57. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-3 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 1ST FLOOR AND 1ST FLOOR ROOF. DRAWING NO. AL-3 - SHEET 4 OF 21. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  20. 58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    58. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21st Space Command. AL-5 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - LAYOUT 3RD, 3A, 4TH LEVELS. DRAWING NO. AL-5 - SHEET 6 OF 21 - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  1. Musculoskeletal Geometry, Muscle Architecture and Functional Specialisations of the Mouse Hindlimb (Open Access)

    DTIC Science & Technology

    2016-04-26

    Cappellari1, Andrew J. Spence2,3, John R. Hutchinson2, Dominic J. Wells1* 1 Neuromuscular Diseases Group, Comparative Biomedical Sciences, Royal...Veterinary College, 4 Royal College Street, London, NW1 0TU, United Kingdom, 2 Structure and Motion Lab, Comparative Biomedical Sciences, Royal Veterinary...or comparing the rela- tive effects of architecture and fibre types on determining contractile properties [11], rather than their geometry or

  2. General engineering specifications for 6000 tpd SRC-I Demonstration Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This volume contains specifications for architectural features of buildings for the SRC-1 Demonstration Plant: skylights, ventilators, sealants, doors, mirrors, furring and lathing, gypsum plaster, lightweight plaster, wallboard, ceramic tile, acoustic ceiling systems, resilient flooring, carpeting, brick flooring, architectural painting, vinyl wall covering, chalkboards, tackboards, toilets, access flooring, lockers, partitions, washroom accessories, unit kitchens, dock levels, seals, shelters, custom casework, auditorium seats, drapery tacks, prefabricated buildings, stairs, elevators, shelves, etc. (LTN).

  3. Evaluation of the Impact of an Additive Manufacturing Enhanced CubeSat Architecture on the CubeSat Development Process

    DTIC Science & Technology

    2016-09-15

    EVALUATION OF THE IMPACT OF AN ADDITIVE MANUFACTURING ENHANCED CUBESAT ARCHITECTURE ON THE CUBESAT...DEVELOPMENT PROCESS THESIS Rachel E. Sharples AFIT-ENV-MS-16-S-049 DEPARTMENT OF THE AIR FORCE AIR UNIVERSITY AIR FORCE INSTITUTE OF...The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force

  4. A novel digital pulse processing architecture for nuclear instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moline, Yoann; Thevenin, Mathieu; Corre, Gwenole

    The field of nuclear instrumentation covers a wide range of applications, including counting, spectrometry, pulse-shape discrimination, and multi-channel coincidence. These applications are the topic of much research, and new algorithms and implementations are constantly proposed thanks to advances in digital signal processing. However, these improvements are not yet implemented in instrumentation devices. This is especially true for neutron-gamma discrimination, which traditionally uses the charge-comparison method even though the literature proposes algorithms based on the frequency domain or wavelet theory that show better performance. Another example is pileups, which are generally rejected even though pileup correction algorithms exist. These processes are traditionally performed offline, for two reasons. The first is the Poissonian character of the signal, composed of pulses with random arrival times, which requires current architectures to operate in data-flow mode. The second is the real-time requirement, which implies losing pulses when the pulse rate is too high. Although pulses could be processed independently of each other, current architectures paralyze signal acquisition while a pulse is being processed; this loss is called dead time. These two issues have led current architectures to rely on dedicated solutions based on reconfigurable components such as Field Programmable Gate Arrays (FPGAs) to achieve the performance needed to limit dead time. However, dedicated hardware implementations of algorithms on reconfigurable technologies are complex and time-consuming. For all these reasons, a Digital Pulse Processing (DPP) architecture that is programmable in a high-level language such as C or C++ and can reduce dead time would be worthwhile for nuclear instrumentation. It would shorten prototyping and testing by reducing the level of hardware expertise needed to implement new algorithms. 
However, today's programmable solutions neither meet the performance needed to operate online nor scale with the number of measurement channels. That is why an innovative DPP architecture is proposed in this paper. The architecture overcomes dead time while remaining programmable, and it is flexible in the number of measurement channels. It is based on an innovative execution model for pulse-processing applications, which can be summarized as follows. Because the signal is not composed only of pulses, pulse processing does not have to operate on the entire signal. The first step of our proposal is therefore pulse extraction, performed by dedicated components named pulse extractors. The triggering step can be performed directly after analog-to-digital conversion, without any signal shaping or filtering stages. Pileup detection and accurate pulse time-stamping are done at this stage. Any application downstream of this step can work on adaptive, variable-sized arrays of samples, simplifying pulse-processing methods. Once the data flow is broken up in this way, pulses can be distributed to Functional Units (FUs), which perform the processing. Since the timestamp of each pulse is known, pulses can be processed individually and out of order. A scheduler and an interconnection network manage the distribution: each pulse is sent to the first FU that is not busy, without congesting the interconnection network. Processing duration therefore no longer causes dead time, provided there are enough FUs. FUs are designed to be standalone and to comprise at least a programmable general-purpose processor (ARM, MicroBlaze), allowing complex algorithms to be implemented without any modification of the hardware. An acquisition chain is a succession of algorithms, which leads us to organize the FUs as a software macro-pipeline; a simple approach is to assign one algorithm per FU. 
Consequently, the global latency becomes the worst-case latency of algorithm execution on an FU. Moreover, because algorithms execute locally, i.e. on an FU, this approach limits shared-memory requirements. To handle multiple channels, we propose sharing the FUs, which maximizes the chance of finding a non-busy FU for an incoming pulse. This is possible because each channel receives random events independently, so the pulse extractors associated with the channels do not all need simultaneous access to the computing resources when distributing their pulses. The major contribution of this paper is an execution model, and its associated programmable hardware architecture, for digital pulse processing that handles multiple acquisition channels while remaining scalable thanks to shared resources. The execution model and architecture are validated by simulation of a cycle-accurate SystemC model of the architecture. The proposed architecture shows promising scalability while maintaining zero dead time. This work also permits sizing the hardware resources required for a predefined set of applications. Future work will focus on the interconnection network and on a scheduling policy that can exploit the variable length of pulses; the hardware implementation of this architecture will then be performed and tested on a representative set of applications.
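    The execution model described above can be sketched in miniature (our simplified illustration, not the authors' SystemC model): a threshold trigger slices the digitized signal into variable-length pulse windows, and a greedy scheduler hands each pulse to the first non-busy functional unit, so pulses are dropped (dead time) only when every FU is occupied.

```python
# Toy pulse extractor + FU scheduler; thresholds and timings are illustrative.

def extract_pulses(samples, threshold=5):
    """Return (start_index, window) pairs for supra-threshold runs."""
    pulses, start = [], None
    for i, s in enumerate(samples):
        if s > threshold and start is None:
            start = i                         # pulse begins: record timestamp
        elif s <= threshold and start is not None:
            pulses.append((start, samples[start:i]))
            start = None                      # pulse ends: emit its window
    if start is not None:
        pulses.append((start, samples[start:]))
    return pulses

def dispatch(pulses, n_fus, proc_time=3):
    """Greedy schedule: each pulse goes to the least-loaded FU.
    Returns the number of pulses lost because every FU was busy."""
    free_at = [0] * n_fus                     # time at which each FU frees up
    dropped = 0
    for t, _window in pulses:
        fu = min(range(n_fus), key=lambda k: free_at[k])
        if free_at[fu] > t:
            dropped += 1                      # no FU available: dead time
        else:
            free_at[fu] = t + proc_time
    return dropped

signal = [0, 9, 8, 0, 0, 9, 0, 9, 9, 0]
pulses = extract_pulses(signal)               # three pulses at t = 1, 5, 7
```

With one FU the third pulse arrives while the unit is still busy and is lost; with two FUs every pulse finds a free unit, mirroring the paper's claim that enough shared FUs eliminate dead time.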

  5. Facies analysis of the Balta Formation: Evidence for a large late Miocene fluvio-deltaic system in the East Carpathian Foreland

    NASA Astrophysics Data System (ADS)

    Matoshko, Anton; Matoshko, Andrei; de Leeuw, Arjan; Stoica, Marius

    2016-08-01

    Deposits of the Balta Fm are preserved in a large arcuate sediment body that covers about 60,000 km2 and is up to 350 m thick. The Balta Fm spans ca. 5 Ma as constrained by underlying Tortonian (Bessarabian) and overlying Messinian (early Pontian) Paratethys strata. It contains frequent terrestrial mammal fossils and fresh- as well as brackish-water (Paratethys) molluscs and ostracods. Over the past 140 years our understanding of the sedimentary architecture of the formation and its origins has remained in its infancy, which has limited insight into the evolution of the East Carpathian Foreland. Here, we provide the first modern sedimentary facies analysis of the Balta Fm, which is integrated with an extensive review of previously published local literature. It is supported with micropalaeontological results and a wealth of historical borehole information. We show that the Balta Fm has a tripartite vertical division. Its lowermost part is clay dominated and consists of subordinate delta front sand bodies interspersed between muds. The middle unit contains separate delta plain channels or channel belts encased in thick muds. These are overlain by a unit with amalgamated delta plain channel deposits with only minor amounts of associated mud. The abundance of upper flow regime sedimentary structures in channel sands, the absence of peats (or coals) and the presence of calcareous nodules suggest a strongly seasonal and relatively dry climate with a flashy discharge regime. Deposition of the Balta Fm in an area previously characterized by distal shelf and prodelta environments indicates large-scale progradation triggered by high sediment volume from the uplifting Carpathian Orogen and enhanced by a general lowering of Paratethys sea-level. The tripartite internal architecture of the Balta Fm indicates that progradation continued during deposition. 
Its wedge-shaped geometry suggests that tectonic activity in the Carpathians generated a 300 km wide foreland basin that allowed for significant delta-plain aggradation despite the generally regressive trend in Paratethys sea-level.

  6. Kinetic Inductance Memory Cell and Architecture for Superconducting Computers

    NASA Astrophysics Data System (ADS)

    Chen, George J.

    Josephson memory devices typically use a superconducting loop containing one or more Josephson junctions to store information. The magnetic inductance of the loop in conjunction with the Josephson junctions provides multiple states to store data. This thesis shows that replacing the magnetic inductor in a memory cell with a kinetic inductor can lead to a smaller cell size. However, magnetic control of the cells is lost. Thus, a current-injection based architecture for a memory array has been designed to work around this problem. The isolation between memory cells that magnetic control provides is provided through resistors in this new architecture. However, these resistors allow leakage current to flow which ultimately limits the size of the array due to power considerations. A kinetic inductance memory array will be limited to 4K bits with a read access time of 320 ps for a 1 um linewidth technology. If a power decoder could be developed, the memory architecture could serve as the blueprint for a fast (<1 ns), large scale (>1 Mbit) superconducting memory array.

  7. Communication Needs Assessment for Distributed Turbine Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Behbahani, Alireza R.

    2008-01-01

    Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness of system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture, which is itself dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed, to fully distributed control systems.

  8. Experimental Investigation of Textile Composite Materials Using Moire Interferometry

    NASA Technical Reports Server (NTRS)

    Ifju, Peter G.

    1995-01-01

    The viability of advanced textile composites as efficient aircraft materials is currently being addressed in the NASA Advanced Composites Technology (ACT) Program. One of the expected milestones of the program is to develop standard test methods for these complex material systems. Current test methods for laminated composites may not be optimum for textile composites, since the textile architecture induces nonuniform deformation characteristics on the scale of its smallest repeating unit. The smallest repeating unit, also called the unit cell, is often larger than the strain gages used for testing tape composites. As a result, extending laminated composite test practices to textiles can lead to pronounced scatter in material property measurements. It had been speculated that the fiber architectures produce significant surface strain nonuniformities; however, the magnitudes were not well understood. Moire interferometry, characterized by full-field information, high displacement sensitivity, and high spatial resolution, is well suited to documenting the surface strain on textile composites. Studies at the NASA Langley Research Center on a variety of textile architectures, including 2-D braids and 3-D weaves, have demonstrated the merits of using moire interferometry to guide test method development for textile composites. Moire was used to support tensile testing by validating instrumentation practices and documenting damage mechanisms. It was used to validate shear test methods by mapping the full-field deformation of shear specimens. It was used to validate open-hole tension experiments, determining the strain concentration and comparing it to numerical predictions. And it was used for through-the-thickness tensile strength test method development, verifying capabilities for testing both 2-D and 3-D material systems. 
For all of these examples, moire interferometry provided vision so that test methods could be developed with less speculation and more documentation.

  9. 2006 : Wood Products Used in New Residential Construction U.S. and Canada, with Comparisons to 1995, 1998 and 2003 : Executive Summary

    Treesearch

    Craig Adair; David B. McKeever

    2009-01-01

    The construction of new single family, multifamily, and manufactured housing is an important market for wood products in both the United States and Canada. Annual wood products consumption is dependent on many factors, including the number of new units started, the size of units started, architectural characteristics, and consumer preferences. In 2006, about 39 percent...

  10. Effect of Teriparatide, Vibration and the Combination on Bone Mass and Bone Architecture in Chronic Spinal Cord Injury

    DTIC Science & Technology

    2015-12-01

    lateral condyles of the tibia and the anterioposterior axis was oriented orthogonally. The CT Hounsfield units were converted to calcium hydroxyapatite...orthogonally. The CT Hounsfield units were converted to calcium hydroxyapatite density rha using a linear relationship established with the phantom...concentration (QRM, Moehrendorf, Germany). The phantom allowed conversion of computed tomography Hounsfield units into hydroxyapatite equivalent density

  11. Observations of the Performance of the U.S. Laboratory Architecture

    NASA Technical Reports Server (NTRS)

    Jones, Rod

    2002-01-01

    The United States Laboratory Module "Destiny" was the product of many architectural, technology, manufacturing, schedule, and cost constraints spanning 15 years. Requirements for the Space Station pressurized elements were developed and baselined in the mid-to-late 1980s. Although the station program went through several design changes, the fundamental requirements that drove the architecture did not change. Manufacturing of the U.S. Laboratory began in the early 1990s; final assembly and checkout testing were completed in December 2000. Destiny was launched, mated to the International Space Station, and successfully activated on the STS-98 mission in February 2001. The purpose of this paper is to identify key requirements that directly or indirectly established the architecture of the U.S. Laboratory, to provide an overview of how that architecture affected the manufacture, assembly, test, and on-orbit activation of the module, and finally, through observations made during the last year of operation, to offer considerations for the development of future requirements and mission-integration controls for space habitats.

  12. Bioinspired decision architectures containing host and microbiome processing units.

    PubMed

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a two-dimensional arena containing a target, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.

  13. Les amas sulfurés du massif miocène d'El Aouana (Algérie)— I. Dynamisme de mise en place des roches volcaniques et implications métallogéniques

    NASA Astrophysics Data System (ADS)

    Villemaire, Cl.

    Two main volcanic units have been distinguished in the Miocene El Aouana area. A tectonic event occurred between their respective deposits, inducing faulting, tilting of the lower volcanic unit, and a caldera structure. The lower unit comprises first continental air-fall pyroclastic rocks and dacitic flows, then marine pyroclastic flow deposits, dacitic flows, and epiclastic rocks. The upper volcanic unit, heralded by extensive andesitic flows, is characterized by pyroclastic flow sheets. The two units are intruded by dacitic domes. These volcanic rocks belong to the calc-alkaline suite, with well-expressed acidic terms. The ore deposits occur as lenses, stockworks, and lodes of the massive sulphide type. Mineralization is strictly localized at the contact between the dacitic intrusive rocks and the marine pyroclastic flow and epiclastic rocks. We suggest that systematic prospecting around dacitic domes would be a successful way to increase the mining reserves of this area.

  14. 77 FR 36231 - Americans With Disabilities Act (ADA) and Architectural Barriers Act (ABA) Accessibility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Agency (FEMA) in the aftermath of Hurricanes Katrina and Rita raised issues regarding the application of... limitations. Emergency transportable housing units provided in the aftermath of Hurricanes Katrina and Rita... transportable housing units that were raised in the aftermath of Hurricanes Katrina and Rita. The committee...

  15. The Art of Opera.

    ERIC Educational Resources Information Center

    King, David L.

    1994-01-01

    Describes a two-week arts appreciation unit implemented by a sixth-grade teacher at Graham Elementary School in Los Angeles. The unit introduces the students to Parisian art and architecture, the music of Wagner and Stravinsky, and the paintings of Monet and Chagall. Visual and aural exposure to art and music, group discussions, and hands-on art…

  16. Learning Centers: A Report of the 1977 NEH Institute at Ohio State University.

    ERIC Educational Resources Information Center

    Allen, Edward D.

    1978-01-01

    A description of the twenty learning center units for advanced classes developed by the French and Spanish teacher-participants. Learning centers permit students to work independently at well-defined tasks. The units deal with housing, shopping, cooking, transportation, sports, fiestas, literature, history, architecture, painting, and music.…

  17. Thermal Management Architecture for Future Responsive Spacecraft

    NASA Astrophysics Data System (ADS)

    Bugby, D.; Zimbeck, W.; Kroliczek, E.

    2009-03-01

    This paper describes a novel thermal design architecture that enables satellites to be conceived, configured, launched, and operationally deployed very quickly. The architecture has been given the acronym SMARTS for Satellite Modular and Reconfigurable Thermal System and it involves four basic design rules: modest radiator oversizing, maximum external insulation, internal isothermalization and radiator heat flow modulation. The SMARTS philosophy is being developed in support of the DoD Operationally Responsive Space (ORS) initiative which seeks to drastically improve small satellite adaptability, deployability, and design flexibility. To illustrate the benefits of the philosophy for a prototypical multi-paneled small satellite, the paper describes a SMARTS thermal control system implementation that uses: panel-to-panel heat conduction, intra-panel heat pipe isothermalization, radiator heat flow modulation via a thermoelectric cooler (TEC) cold-biased loop heat pipe (LHP) and maximum external multi-layer insulation (MLI). Analyses are presented that compare the traditional "cold-biasing plus heater power" passive thermal design approach to the SMARTS approach. Plans for a 3-panel SMARTS thermal test bed are described. Ultimately, the goal is to incorporate SMARTS into the design of future ORS satellites, but it is also possible that some aspects of SMARTS technology could be used to improve the responsiveness of future NASA spacecraft.

  18. Multi-dimensional knowledge translation: enabling health informatics capacity audits using patient journey models.

    PubMed

    Catley, Christina; McGregor, Carolyn; Percival, Jennifer; Curry, Joanne; James, Andrew

    2008-01-01

    This paper presents a multi-dimensional approach to knowledge translation, enabling results obtained from a survey evaluating the uptake of Information Technology within Neonatal Intensive Care Units to be translated into knowledge, in the form of health informatics capacity audits. Survey data, which span multiple roles, patient care scenarios, levels, and hospitals, are translated using a structured data modeling approach into patient journey models. The data model is defined such that users can develop queries to generate patient journey models based on a pre-defined Patient Journey Model architecture (PaJMa). PaJMa models are then analyzed to build capacity audits. Capacity audits offer a sophisticated view of health informatics usage, providing not only details of what IT solutions a hospital utilizes, but also answering the questions when, how, and why: by determining when the IT solutions are integrated into the patient journey, how they support the patient information flow, and why they improve the patient journey.

  19. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  20. Mesoporous titanium dioxide (TiO2) with hierarchically 3D dendrimeric architectures: formation mechanism and highly enhanced photocatalytic activity.

    PubMed

    Li, Xiao-Yun; Chen, Li-Hua; Rooke, Joanna Claire; Deng, Zhao; Hu, Zhi-Yi; Wang, Shao-Zhuan; Wang, Li; Li, Yu; Krief, Alain; Su, Bao-Lian

    2013-03-15

    Mesoporous TiO(2) with a hierarchically 3D dendrimeric nanostructure comprised of nanoribbon building units has been synthesized via a spontaneous self-formation process from various titanium alkoxides. These hierarchically 3D dendrimeric architectures can be obtained by a very facile, template-free method, by simply dropping a titanium butoxide precursor into methanol solution. The novel configuration of the mesoporous TiO(2) nanostructure in nanoribbon building units yields a high surface area. The calcined samples show significantly enhanced photocatalytic activity and degradation rates owing to the mesoporosity and their improved crystallinity after calcination. Furthermore, the 3D dendrimeric architectures can be preserved after phase transformation from amorphous TiO(2) to anatase or rutile, which occurs during calcination. In addition, the spontaneous self-formation process of mesoporous TiO(2) with hierarchically 3D dendrimeric architectures from the hydrolysis and condensation reaction of titanium butoxide in methanol has been followed by in situ optical microscopy (OM), shedding light on the formation mechanism of hierarchically 3D dendrimeric nanostructures. Moreover, mesoporous TiO(2) nanostructures with similar hierarchically 3D dendrimeric architectures can also be obtained using other titanium alkoxides. The porosities and nanostructures of the resultant products were characterized by SEM, TEM, XRD, and N(2) adsorption-desorption measurements. The present work provides a facile and reproducible method for the synthesis of novel mesoporous TiO(2) nanoarchitectures, which in turn could herald the fabrication of more efficient photocatalysts. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    control of position/orientation of mobile TV cameras. 9 Unit 9 Force interaction system Unit 6 Helmet mounted displays robot like device drive...joints of the master arm (see Unit 1) which joint coordinates are tracked by the virtual manipulator. Unit 6 . Two displays built in the helmet...special device for simulating the tactile- kinaesthetic effect of immersion. When virtual body is a manipulator it comprises: − master arm with 6

  2. Heterogeneous scalable framework for multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Karla Vanessa

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications direct interaction with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
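
    The separation of concerns this record describes (serial physics code, with parallelism resolved behind an API) can be illustrated with a toy Python sketch. This is an invented stand-in, not the project's Fortran/Trilinos interface; `parallel_map` and `advance_droplet` are hypothetical names.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(serial_fn, items, workers=4):
    """API layer: hides the parallel runtime from the physics programmer.
    Here it fans a serial per-item function out over a thread pool; a real
    implementation would target processes, MPI ranks, or GPU kernels."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(serial_fn, items))

def advance_droplet(d):
    """Serial physics code: one droplet update, no threading logic."""
    return {"x": d["x"] + d["v"], "v": d["v"]}

droplets = [{"x": 0.0, "v": 0.5}, {"x": 1.0, "v": -0.25}]
new = parallel_map(advance_droplet, droplets)
```

    The scientific programmer writes only `advance_droplet`; swapping the backend inside `parallel_map` changes the execution platform without touching the physics.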

  3. Colorimetric chemical analysis sampler for the presence of explosives

    DOEpatents

    Nunes, Peter J [Danville, CA; Del Eckels, Joel [Livermore, CA; Reynolds, John G [San Ramon, CA; Pagoria, Philip F [Livermore, CA; Simpson, Randall L [Livermore, CA

    2011-09-27

    A tester for testing for explosives comprising a body, a lateral flow swab unit operably connected to the body, an explosives-detecting reagent contained in the body, and a dispenser operatively connected to the body and the lateral flow swab unit. The dispenser selectively allows the explosives-detecting reagent to be delivered to the lateral flow swab unit.

  4. Colorimetric chemical analysis sampler for the presence of explosives

    DOEpatents

    Nunes, Peter J.; Eckels, Joel Del; Reynolds, John G.; Pagoria, Philip F.; Simpson, Randall L.

    2014-07-01

    A tester for testing for explosives comprising a body, a lateral flow swab unit operably connected to the body, an explosives-detecting reagent contained in the body, and a dispenser operatively connected to the body and the lateral flow swab unit. The dispenser selectively allows the explosives-detecting reagent to be delivered to the lateral flow swab unit.

  5. Rule-based graph theory to enable exploration of the space system architecture design space

    NASA Astrophysics Data System (ADS)

    Arney, Dale Curtis

    The primary goal of this research is to improve upon system architecture modeling in order to enable the exploration of design space options. A system architecture is the description of the functional and physical allocation of elements and the relationships, interactions, and interfaces between those elements necessary to satisfy a set of constraints and requirements. The functional allocation defines the functions that each system (element) performs, and the physical allocation defines the systems required to meet those functions. Trading the functionality between systems leads to the architecture-level design space that is available to the system architect. The research presents a methodology that enables the modeling of complex space system architectures using a mathematical framework. To accomplish the goal of improved architecture modeling, the framework meets five goals: technical credibility, adaptability, flexibility, intuitiveness, and exhaustiveness. The framework is technically credible, in that it produces an accurate and complete representation of the system architecture under consideration. The framework is adaptable, in that it provides the ability to create user-specified locations, steady states, and functions. The framework is flexible, in that it allows the user to model system architectures to multiple destinations without changing the underlying framework. The framework is intuitive for user input while still creating a comprehensive mathematical representation that maintains the necessary information to completely model complex system architectures. Finally, the framework is exhaustive, in that it provides the ability to explore the entire system architecture design space. After an extensive search of the literature, graph theory presents a valuable mechanism for representing the flow of information or vehicles within a simple mathematical framework. 
Graph theory has been used in developing mathematical models of many transportation and network flow problems in the past, where nodes represent physical locations and edges represent the means by which information or vehicles travel between those locations. In space system architecting, expressing the physical locations (low-Earth orbit, low-lunar orbit, etc.) and steady states (interplanetary trajectory) as nodes and the different means of moving between the nodes (propulsive maneuvers, etc.) as edges formulates a mathematical representation of this design space. The selection of a given system architecture using graph theory entails defining the paths that the systems take through the space system architecture graph. A path through the graph is defined as a list of edges that are traversed, which in turn defines functions performed by the system. A structure to compactly represent this information is a matrix, called the system map, in which the column indices are associated with the systems that exist and row indices are associated with the edges, or functions, to which each system has access. Several contributions have been added to the state of the art in space system architecture analysis. The framework adds the capability to rapidly explore the design space without the need to limit trade options or the need for user interaction during the exploration process. The unique mathematical representation of a system architecture, through the use of the adjacency, incidence, and system map matrices, enables automated design space exploration using stochastic optimization processes. The innovative rule-based graph traversal algorithm ensures functional feasibility of each system architecture that is analyzed, and the automatic generation of the system hierarchy eliminates the need for the user to manually determine the relationships between systems during or before the design space exploration process. 
Finally, the rapid evaluation of system architectures for various mission types enables analysis of the system architecture design space for multiple destinations within an evolutionary exploration program. (Abstract shortened by UMI.).
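
    The matrix bookkeeping this record describes (adjacency matrix over location nodes, a system map with edges as rows and systems as columns, and a path as an ordered list of edges) can be sketched minimally as follows. The nodes, edges, and systems below are invented examples; the real framework's data structures are richer.

```python
import numpy as np

# Hypothetical location/steady-state nodes and maneuver edges.
nodes = ["Earth surface", "LEO", "trans-lunar trajectory", "LLO"]
edges = [("Earth surface", "LEO"),           # launch
         ("LEO", "trans-lunar trajectory"),  # trans-lunar injection
         ("trans-lunar trajectory", "LLO")]  # lunar orbit insertion

# Adjacency matrix of the architecture graph.
adj = np.zeros((len(nodes), len(nodes)), dtype=int)
for src, dst in edges:
    adj[nodes.index(src), nodes.index(dst)] = 1

# System map: rows are edges (functions), columns are systems; a 1 marks
# that the system has access to (can perform) that function.
systems = ["launch vehicle", "transfer stage"]
system_map = np.array([[1, 0],
                       [0, 1],
                       [0, 1]])

# A candidate architecture is functionally feasible only if every edge on
# the mission path is covered by at least one system.
path = [0, 1, 2]  # edge indices traversed in order
feasible = all(system_map[e].any() for e in path)
```

    A design-space search then amounts to enumerating or stochastically sampling system maps and paths and filtering by this feasibility test.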

  6. Hybrid parallel computing architecture for multiview phase shifting

    NASA Astrophysics Data System (ADS)

    Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun

    2014-11-01

    The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can cooperate with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7 3770.
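
    The phase-computation stage that such systems port to the GPU is, in its standard form, the N-step phase-shifting formula. A NumPy reference version is sketched below; this is the textbook computation, not the paper's CUDA kernel.

```python
import numpy as np

def n_step_phase(images):
    """Wrapped phase from N equally phase-shifted fringe images.

    images: array of shape (N, H, W) with I_n = A + B*cos(phi + 2*pi*n/N).
    For this model, sum(I_n*cos(d_n)) = (N*B/2)*cos(phi) and
    sum(I_n*sin(d_n)) = -(N*B/2)*sin(phi), so the wrapped phase is
    atan2(-S, C).
    """
    n = images.shape[0]
    delta = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(delta), images, axes=1)
    c = np.tensordot(np.cos(delta), images, axes=1)
    return np.arctan2(-s, c)  # wrapped phase in (-pi, pi], per pixel
```

    A GPU implementation parallelizes this per pixel; the wrapped phase must still be unwrapped and triangulated across views to obtain 3-D points.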

  7. An artificial elementary eye with optic flow detection and compositional properties.

    PubMed

    Pericet-Camara, Ramon; Dobrzynski, Michal K; Juston, Raphaël; Viollet, Stéphane; Leitel, Robert; Mallot, Hanspeter A; Floreano, Dario

    2015-08-06

    We describe a 2 mg artificial elementary eye whose structure and functionality is inspired by compound eye ommatidia. Its optical sensitivity and electronic architecture are sufficient to generate the required signals for the measurement of local optic flow vectors in multiple directions. Multiple elementary eyes can be assembled to create a compound vision system of desired shape and curvature spanning large fields of view. The system configurability is validated with the fabrication of a flexible linear array of artificial elementary eyes capable of extracting optic flow over multiple visual directions. © 2015 The Author(s).

  8. PERCLOS: A Valid Psychophysiological Measure of Alertness As Assessed by Psychomotor Vigilance

    DOT National Transportation Integrated Search

    2002-04-01

    The Logical Architecture is based on a Computer Aided Systems Engineering (CASE) model of the requirements for the flow of data and control through the various functions included in Intelligent Transportation Systems (ITS). Process Specifications pro...

  9. Mean and turbulent flow statistics in a trellised agricultural canopy

    USDA-ARS?s Scientific Manuscript database

    The architecture of a trellised agricultural canopy presents many similarities to homogeneous plant canopies, windbreaks, and urban canopies including street canyons. Compared to these other canopies, trellised canopies (e.g. vineyard) present an interesting, complex, two-dimensional environment tha...

  10. Dual-mode ultraflow access networks: a hybrid solution for the access bottleneck

    NASA Astrophysics Data System (ADS)

    Kazovsky, Leonid G.; Shen, Thomas Shunrong; Dhaini, Ahmad R.; Yin, Shuang; De Leenheer, Marc; Detwiler, Benjamin A.

    2013-12-01

    Optical Flow Switching (OFS) is a promising solution for large Internet data transfers. In this paper, we introduce UltraFlow Access, a novel optical access network architecture that offers dual-mode service to its end-users: IP and OFS. With UltraFlow Access, we design and implement a new dual-mode control plane and a new dual-mode network stack to ensure efficient connection setup and reliable and optimal data transmission. We study the impact of the UltraFlow system's design on the network throughput. Our experimental results show that with an optimized system design, near optimal (around 10 Gb/s) OFS data throughput can be attained when the line rate is 10 Gb/s.

  11. Transition of a Three-Dimensional Unsteady Viscous Flow Analysis from a Research Environment to the Design Environment

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne; Dorney, Daniel J.; Huber, Frank; Sheffler, David A.; Turner, James E. (Technical Monitor)

    2001-01-01

    The advent of advanced computer architectures and parallel computing have led to a revolutionary change in the design process for turbomachinery components. Two- and three-dimensional steady-state computational flow procedures are now routinely used in the early stages of design. Unsteady flow analyses, however, are just beginning to be incorporated into design systems. This paper outlines the transition of a three-dimensional unsteady viscous flow analysis from the research environment into the design environment. The test case used to demonstrate the analysis is the full turbine system (high-pressure turbine, inter-turbine duct and low-pressure turbine) from an advanced turboprop engine.

  12. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
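
    The Newton/GMRES pairing considered in this record is available off the shelf in SciPy. The sketch below applies it to a small model residual standing in for the discretized flow equations (which the record does not give); only the solver structure, a Newton outer iteration with a GMRES inner solve, reflects the record.

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Model nonlinear residual, componentwise u^3 + u - 1 = 0.
    In the record's setting this would be the implicit time-marching
    residual of the compressible viscous flow discretization."""
    return u**3 + u - 1.0

# Newton outer iteration; each linearized step is solved by GMRES using
# only residual evaluations (Jacobian-free Newton-Krylov).
u0 = np.zeros(8)
u = newton_krylov(residual, u0, method="gmres", f_tol=1e-10)
```

    The appeal for flow solvers is that GMRES needs only matrix-vector products, so the full Jacobian of the discretization never has to be formed or stored.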

  13. A limit-cycle self-organizing map architecture for stable arm control.

    PubMed

    Huang, Di-Wei; Gentili, Rodolphe J; Katz, Garrett E; Reggia, James A

    2017-01-01

    Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Biophysical controls on cluster dynamics and architectural differentiation of microbial biofilms in contrasting flow environments

    PubMed Central

    Hödl, Iris; Mari, Lorenzo; Bertuzzo, Enrico; Suweis, Samir; Besemer, Katharina; Rinaldo, Andrea; Battin, Tom J

    2014-01-01

    Ecology, with a traditional focus on plants and animals, seeks to understand the mechanisms underlying structure and dynamics of communities. In microbial ecology, the focus is changing from planktonic communities to attached biofilms that dominate microbial life in numerous systems. Therefore, interest in the structure and function of biofilms is on the rise. Biofilms can form reproducible physical structures (i.e. architecture) at the millimetre-scale, which are central to their functioning. However, the spatial dynamics of the clusters conferring physical structure to biofilms often remain elusive. By experimenting with complex microbial communities forming biofilms in contrasting hydrodynamic microenvironments in stream mesocosms, we show that morphogenesis results in ‘ripple-like’ and ‘star-like’ architectures – as have also been reported from monospecies bacterial biofilms, for instance. To explore the potential contribution of demographic processes to these architectures, we propose a size-structured population model to simulate the dynamics of biofilm growth and cluster size distribution. Our findings establish that basic physical and demographic processes are key forces that shape apparently universal biofilm architectures as they occur in diverse microbial but also in single-species bacterial biofilms. PMID:23879839

  15. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
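
    The inner loop being parallelized across these devices is, at its core, a kernel-weighted neighbour sum. A 1-D NumPy sketch of the SPH density summation is given below; it is illustrative, not the benchmarked code, and a shared-memory runtime (OpenMP, CUDA) would split the outer particle loop across threads.

```python
import numpy as np

def w_cubic(r, h):
    """Standard 1-D cubic-spline smoothing kernel (normalization 2/(3h))."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, mass, h):
    """Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h).
    Vectorized over all pairs; large runs restrict the sum to neighbour
    lists instead of the full N x N distance matrix."""
    r = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (mass * w_cubic(r, h)).sum(axis=1)
```

    The memory-access pattern of this neighbour sum is exactly what differs between the CUDA and OpenMP ports the record benchmarks.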

  16. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
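
    Of the functions listed, 2-D spatial convolution illustrates the workload well. A scalar NumPy reference version is sketched below (this is the operation the record's vector processor implements with concurrent multiply-accumulates, not the GaAs assembly itself).

```python
import numpy as np

def conv2d(image, kernel):
    """Reference 2-D spatial convolution over the 'valid' region.
    A vector machine would compute the inner window sums as fused
    multiply-accumulate vector instructions; here NumPy does them."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * flipped)
    return out
```

    Each output element is independent, which is why the kernel vectorizes and parallelizes so well across the execution unit's processors.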

  17. The detailed 3D multi-loop aggregate/rosette chromatin architecture and functional dynamic organization of the human and mouse genomes.

    PubMed

    Knoch, Tobias A; Wachsmuth, Malte; Kepper, Nick; Lesnussa, Michael; Abuseiris, Anis; Ali Imam, A M; Kolovos, Petros; Zuin, Jessica; Kockx, Christel E M; Brouwer, Rutger W W; van de Werken, Harmen J G; van IJcken, Wilfred F J; Wendt, Kerstin S; Grosveld, Frank G

    2016-01-01

    The dynamic three-dimensional chromatin architecture of genomes and its co-evolutionary connection to its function-the storage, expression, and replication of genetic information-is still one of the central issues in biology. Here, we describe the much debated 3D architecture of the human and mouse genomes from the nucleosomal to the megabase pair level by a novel approach combining selective high-throughput high-resolution chromosomal interaction capture (T2C), polymer simulations, and scaling analysis of the 3D architecture and the DNA sequence. The genome is compacted into a chromatin quasi-fibre with ~5 ± 1 nucleosomes/11 nm, folded into stable ~30-100 kbp loops forming stable loop aggregates/rosettes connected by similar sized linkers. Minor but significant variations in the architecture are seen between cell types and functional states. The architecture and the DNA sequence show very similar fine-structured multi-scaling behaviour confirming their co-evolution and the above. This architecture, its dynamics, and accessibility, balance stability and flexibility ensuring genome integrity and variation enabling gene expression/regulation by self-organization of (in)active units already in proximity. Our results agree with the heuristics of the field and allow "architectural sequencing" at a genome mechanics level to understand the inseparable systems genomic properties.

  18. Applying Dataflow Architecture and Visualization Tools to In Vitro Pharmacology Data Automation.

    PubMed

    Pechter, David; Xu, Serena; Kurtz, Marc; Williams, Steven; Sonatore, Lisa; Villafania, Artjohn; Agrawal, Sony

    2016-12-01

    The pace and complexity of modern drug discovery place ever-increasing demands on scientists for data analysis and interpretation. Data flow programming and modern visualization tools address these demands directly. Three different requirements (one for allosteric modulator analysis, one for a specialized clotting analysis, and one for enzyme global progress curve analysis) are reviewed, and their execution in a combined data flow/visualization environment is outlined.
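    The data-flow idea the abstract relies on can be illustrated with a minimal pipeline: each analysis step is a node that consumes and emits records, so workflows are assembled by composing nodes rather than writing monolithic scripts. The node names, record fields, and thresholds below are hypothetical; the paper's actual environment and assays are not reproduced here.

```python
# Minimal data-flow sketch using generator nodes (hypothetical names).

def source(records):
    """Entry node: emit raw plate-reader records."""
    yield from records

def normalize(stream, max_signal):
    """Node: convert raw signal to percent of an assumed maximum."""
    for rec in stream:
        yield {**rec, "pct": 100.0 * rec["signal"] / max_signal}

def flag_hits(stream, threshold):
    """Node: mark records at or above a percent-activity threshold."""
    for rec in stream:
        yield {**rec, "hit": rec["pct"] >= threshold}

wells = [{"well": "A1", "signal": 950}, {"well": "A2", "signal": 120}]
pipeline = flag_hits(normalize(source(wells), max_signal=1000), threshold=50)
for rec in pipeline:
    print(rec["well"], rec["pct"], rec["hit"])  # A1 95.0 True / A2 12.0 False
```

Because each node only touches the stream passing through it, swapping in a different analysis (clotting, progress curves) means replacing one node, which is the reuse argument the paper makes for data-flow environments.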

  19. Driving Parameters for Distributed and Centralized Air Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Feron, Eric

    2001-01-01

    This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides the analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight into the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized conflict resolution strategies.
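    The pairwise geometry underlying such intersecting-flow models can be sketched with textbook kinematics: the minimum separation between two aircraft flying straight at constant velocity, compared against the standard 5 nmi en-route threshold. This is an illustrative building block, not the report's Eulerian flow model or its resolution algorithms.

```python
# Sketch: closest approach of two constant-velocity 2-D trajectories.
import math

def min_separation(p1, v1, p2, v2):
    """Minimum distance over t >= 0 between two aircraft with initial
    positions p1, p2 (nmi) and constant velocities v1, v2 (nmi/min)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                 # identical velocities: gap never changes
        return math.hypot(dx, dy)
    # Time of closest approach, clamped to the future.
    t = max(0.0, -(dx * dvx + dy * dvy) / dv2)
    return math.hypot(dx + dvx * t, dy + dvy * t)

# Two flows crossing at right angles through the origin (480 kt each).
d = min_separation((-60, 0), (8, 0), (0, -50), (0, 8))
print(round(d, 2), "nmi;", "conflict" if d < 5.0 else "clear")
```

A decentralized rule would have each aircraft run exactly this pairwise check against its neighbors and maneuver when the predicted separation falls below threshold, which is the setting whose long-term stability the report analyzes.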

  20. Blood and interstitial flow in the hierarchical pore space architecture of bone tissue.

    PubMed

    Cowin, Stephen C; Cardoso, Luis

    2015-03-18

    There are two main types of fluid in bone tissue, blood and interstitial fluid. The chemical composition of these fluids varies with time and location in bone. Blood arrives through the arterial system carrying oxygen and other nutrients, and the blood components depart via the venous system containing less oxygen and reduced nutrition. Within the bone, as within other tissues, substances pass from the blood through the arterial walls into the interstitial fluid. The movement of the interstitial fluid carries these substances to the cells within the bone and, at the same time, carries off the waste materials from the cells. Bone tissue would not live without these fluid movements. The development of a model for poroelastic materials with a hierarchical pore space architecture, describing blood flow and interstitial fluid flow in living bone tissue, is reviewed. The model is applied to the problem of determining the exchange of pore fluid between the vascular porosity and the lacunar-canalicular porosity in bone tissue due to cyclic mechanical loading and blood pressure. These results are basic to the understanding of interstitial flow in bone tissue, which in turn is basic to the understanding of nutrient transport from the vasculature to the bone cells buried in the bone tissue and to the process of mechanotransduction by these cells.
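    The usual starting point for the pore-fluid flux in poroelastic bone models is Darcy's law, with cyclic loading supplying an oscillating pressure gradient. The sketch below illustrates only that starting point; the permeability, viscosity, and gradient amplitude are placeholder values, not measurements from the review.

```python
# Hedged sketch: Darcy flux under a sinusoidal pressure gradient.
import math

def darcy_flux(k_m2, mu_pa_s, grad_pa_per_m):
    """Darcy volumetric flux q = -(k / mu) * dp/dx, in m/s."""
    return -(k_m2 / mu_pa_s) * grad_pa_per_m

k, mu = 1e-20, 1e-3   # placeholder permeability (m^2) and viscosity (Pa s)
freq = 1.0            # assumed loading frequency, Hz

# Quasi-static picture: the flux simply tracks the cyclic gradient,
# peaking a quarter-cycle into the load.
for t in (0.0, 0.25, 0.5):
    grad = 1e9 * math.sin(2 * math.pi * freq * t)  # assumed amplitude, Pa/m
    print(t, darcy_flux(k, mu, grad))
```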
