A Fast Event Preprocessor and Sequencer for the Simbol-X Low Energy Detector
NASA Astrophysics Data System (ADS)
Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.
2009-05-01
The Simbol-X Low Energy Detector (LED), a 128×128 pixel DEPFET (Depleted Field Effect Transistor) array, will be read out at a very high rate (8000 frames/second) and therefore requires very fast onboard electronics. We present FPGA-based LED camera electronics consisting of an Event Preprocessor (EPP) for onboard data preprocessing and filtering of the Simbol-X low-energy detector, and a related Sequencer (SEQ) that generates the signals needed to control the readout.
Application of Taylor's series to trajectory propagation
NASA Technical Reports Server (NTRS)
Stanford, R. H.; Berryman, K. W.; Breckheimer, P. J.
1986-01-01
This paper describes the propagation of trajectories using the preprocessor ATOMCC, which applies Taylor's series to solve initial value problems in ordinary differential equations. Comparisons of the results obtained with those from other methods are presented. The current studies indicate that the ATOMCC preprocessor is an easy, yet fast and accurate, method for generating trajectories.
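As a rough illustration of the Taylor-series technique that ATOMCC automates (the sketch below is not ATOMCC itself), consider the harmonic oscillator u' = v, v' = -u: substituting the series into the ODE yields the coefficient recurrences u[k+1] = v[k]/(k+1) and v[k+1] = -u[k]/(k+1), and one integration step is just a summation of the series at step size h.

```python
import math

def taylor_step(u, v, h, order=12):
    # Build Taylor coefficients for u' = v, v' = -u via the recurrences
    # u[k+1] = v[k]/(k+1), v[k+1] = -u[k]/(k+1), then sum the series at h.
    uc, vc = [u], [v]
    for k in range(order):
        uc.append(vc[k] / (k + 1))
        vc.append(-uc[k] / (k + 1))
    un = sum(c * h**k for k, c in enumerate(uc))
    vn = sum(c * h**k for k, c in enumerate(vc))
    return un, vn

# integrate u(0)=1, v(0)=0 out to t = 10; exact solution is u = cos(t)
u, v, t, h = 1.0, 0.0, 0.0, 0.1
for _ in range(100):
    u, v = taylor_step(u, v, h)
    t += h
err = abs(u - math.cos(t))
```

At order 12 the per-step truncation error is of order h^13/13!, which is why Taylor-series propagators can be simultaneously fast and very accurate.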
A fast event preprocessor for the Simbol-X Low-Energy Detector
NASA Astrophysics Data System (ADS)
Schanz, T.; Tenzer, C.; Kendziorra, E.; Santangelo, A.
2008-07-01
The Simbol-X Low Energy Detector (LED), a 128 × 128 pixel DEPFET array, will be read out very fast (8000 frames/second). This requires very fast onboard preprocessing of the raw data. We present an FPGA-based Event Preprocessor (EPP) which fulfills these requirements. The design is developed in the hardware description language VHDL and can later be ported to an ASIC technology. The EPP performs a pixel-related offset correction and can apply different energy thresholds to each pixel of the frame. It also provides a line-related common-mode correction to reduce noise that is unavoidably caused by the analog readout chip of the DEPFET. An integrated pattern detector can block all invalid pixel patterns. The EPP has an internal pipeline structure and can perform all operations in real time (< 2 μs per line of 64 pixels) with a base clock frequency of 100 MHz. It uses a fast median-value detection algorithm for common-mode correction and a new pattern scanning algorithm to select only valid events. Both algorithms were developed at our institute during the past year.
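The per-line processing chain the abstract describes (offset subtraction, median-based common-mode correction, per-pixel thresholding) can be sketched in software as follows; the function name and data layout are illustrative, not the EPP's actual VHDL interface:

```python
def process_line(raw, offsets, thresholds):
    """Sketch of one EPP-style line correction, assuming list inputs.

    Steps mirrored from the abstract: per-pixel offset subtraction,
    line-wise common-mode correction using the median, then per-pixel
    energy thresholding (sub-threshold pixels are zeroed).
    """
    corrected = [r - o for r, o in zip(raw, offsets)]
    common_mode = sorted(corrected)[len(corrected) // 2]  # median of the line
    corrected = [c - common_mode for c in corrected]
    # keep only pixels above their individual energy threshold
    return [c if c > t else 0 for c, t in zip(corrected, thresholds)]

# one illustrative 4-pixel line: only the third pixel carries a real event
line = process_line([105, 98, 130, 99], [100, 95, 100, 95], [10, 10, 10, 10])
```

Using the median rather than the mean makes the common-mode estimate robust against the few pixels that actually contain photon events, which is why a fast median-value detection is central to the hardware design.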
A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector
NASA Astrophysics Data System (ADS)
Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.
2010-12-01
We present FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) to generate the necessary signals to control the readout of the detector. The device was originally designed for the Simbol-X Low Energy Detector (LED). The EPP operates on 64×64 pixel images and has a real-time processing capability of more than 8000 frames per second. The already-working releases of the EPP and the SEQ have now been combined into one Digital-Camera-Controller-Chip (D3C).
2016-01-22
Numerical electromagnetic simulations based on the multilevel fast multipole method (MLFMM), conducted with the FEKO software (www.feko.info), were used to analyze and optimize the antenna.
Molecular contamination math model support
NASA Technical Reports Server (NTRS)
Wells, R.
1983-01-01
The operation and features of a preprocessor for the Shuttle/Payload Contamination Evaluation Program, Version 2 (SPACE 2) are described. A preliminary preprocessor for SPACE 2 is developed. Further refinements and enhancements of the preprocessor, to ensure fully user-friendly operation, are recommended.
NASA Technical Reports Server (NTRS)
Shyam, Vikram
2010-01-01
A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.
NASTRAN pre and postprocessors using low-cost interactive graphics
NASA Technical Reports Server (NTRS)
Herness, E. D.; Kriloff, H. Z.
1975-01-01
A design for a NASTRAN preprocessor is given to illustrate a typical preprocessor, and several displays of NASTRAN models illustrate its capabilities. A design of a NASTRAN postprocessor is also presented, along with an example of displays generated by that postprocessor.
NASA Technical Reports Server (NTRS)
Christenson, D.; Gordon, M.; Kistler, R.; Kriegler, F.; Lampert, S.; Marshall, R.; Mclaughlin, R.
1977-01-01
A third-generation, fast, low-cost multispectral recognition system (MIDAS), able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors, is described. The program can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator, thus obtaining large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in the overall program are described. The system contains a midi-computer to control the various high-speed processing elements in the data path, a preprocessor to condition data, and a classifier which implements an all-digital prototype multivariate Gaussian maximum likelihood or Bayesian decision algorithm. Sufficient software was developed to perform signature extraction, control the preprocessor, compute classifier coefficients, control the classifier operation, operate the color display and printer, and diagnose operation.
An Interactive Preprocessor Program with Graphics for a Three-Dimensional Finite Element Code.
ERIC Educational Resources Information Center
Hamilton, Claude Hayden, III
The development and capabilities of an interactive preprocessor program with graphics for an existing three-dimensional finite element code is presented. This preprocessor program, EDGAP3D, is designed to be used in conjunction with the Texas Three Dimensional Grain Analysis Program (TXCAP3D). The code presented in this research is capable of the…
ERIC Educational Resources Information Center
Mitchell, Ron; Conner, Michael
A brief description of the Coursewriter II preprocessor is provided. This preprocessor, a program written in FORTRAN IV on the CDC 6600 computer, is designed to reduce the repetition of effort that takes place from the time of the author's conception of a course to the time of its availability for on-line student instruction. The programmer deals…
Automatic Dynamic Aircraft Modeler (ADAM) for the Computer Program NASTRAN
NASA Technical Reports Server (NTRS)
Griffis, H.
1985-01-01
Large general-purpose finite element programs require users to develop large quantities of input data. General-purpose pre-processors are used to decrease the effort required to develop structural models, and further reduction of effort can be achieved by application-specific pre-processors. The Automatic Dynamic Aircraft Modeler (ADAM) is one such application-specific pre-processor. General-purpose pre-processors use points, lines, and surfaces to describe geometric shapes; because ADAM is used only for aircraft structures, generic structural sections (wing boxes and bodies) can be pre-defined. Hence, with only gross dimensions, thicknesses, material properties, and pre-defined boundary conditions, a complete model of an aircraft can be created.
The ATLAS Level-1 Calorimeter Trigger: PreProcessor implementation and performance
NASA Astrophysics Data System (ADS)
Åsman, B.; Achenbach, R.; Allbrooke, B. M. M.; Anders, G.; Andrei, V.; Büscher, V.; Bansil, H. S.; Barnett, B. M.; Bauss, B.; Bendtz, K.; Bohm, C.; Bracinik, J.; Brawn, I. P.; Brock, R.; Buttinger, W.; Caputo, R.; Caughron, S.; Cerrito, L.; Charlton, D. G.; Childers, J. T.; Curtis, C. J.; Daniells, A. C.; Davis, A. O.; Davygora, Y.; Dorn, M.; Eckweiler, S.; Edmunds, D.; Edwards, J. P.; Eisenhandler, E.; Ellis, K.; Ermoline, Y.; Föhlisch, F.; Faulkner, P. J. W.; Fedorko, W.; Fleckner, J.; French, S. T.; Gee, C. N. P.; Gillman, A. R.; Goeringer, C.; Hülsing, T.; Hadley, D. R.; Hanke, P.; Hauser, R.; Heim, S.; Hellman, S.; Hickling, R. S.; Hidvégi, A.; Hillier, S. J.; Hofmann, J. I.; Hristova, I.; Ji, W.; Johansen, M.; Keller, M.; Khomich, A.; Kluge, E.-E.; Koll, J.; Laier, H.; Landon, M. P. J.; Lang, V. S.; Laurens, P.; Lepold, F.; Lilley, J. N.; Linnemann, J. T.; Müller, F.; Müller, T.; Mahboubi, K.; Martin, T. A.; Mass, A.; Meier, K.; Meyer, C.; Middleton, R. P.; Moa, T.; Moritz, S.; Morris, J. D.; Mudd, R. D.; Narayan, R.; zur Nedden, M.; Neusiedl, A.; Newman, P. R.; Nikiforov, A.; Ohm, C. C.; Perera, V. J. O.; Pfeiffer, U.; Plucinski, P.; Poddar, S.; Prieur, D. P. F.; Qian, W.; Rieck, P.; Rizvi, E.; Sankey, D. P. C.; Schäfer, U.; Scharf, V.; Schmitt, K.; Schröder, C.; Schultz-Coulon, H.-C.; Schumacher, C.; Schwienhorst, R.; Silverstein, S. B.; Simioni, E.; Snidero, G.; Staley, R. J.; Stamen, R.; Stock, P.; Stockton, M. C.; Tan, C. L. A.; Tapprogge, S.; Thomas, J. P.; Thompson, P. D.; Thomson, M.; True, P.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Weber, P.; Wessels, M.; Wiglesworth, C.; Williams, S. L.
2012-12-01
The PreProcessor system of the ATLAS Level-1 Calorimeter Trigger (L1Calo) receives about 7200 analogue signals from the electromagnetic and hadronic components of the calorimetric detector system. Lateral division results in cells that are pre-summed into so-called Trigger Towers of size 0.1 × 0.1 in azimuth (φ) and pseudorapidity (η). The received calorimeter signals represent deposits of transverse energy. The system consists of 124 individual PreProcessor modules that digitise the input signals for each LHC collision and provide energy and timing information to the digital processors of the L1Calo system, which identify physics objects forming much of the basis for the full ATLAS first-level trigger decision. This paper describes the architecture of the PreProcessor, its hardware realisation, functionality, and performance.
Fixed site neutralization model programmer's manual. Volume II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engi, D.; Chapman, L.D.; Judnick, W.
This report relates to protection of nuclear materials at nuclear facilities. This volume presents the source listings for the Fixed Site Neutralization Model and its supporting modules, the Plex Preprocessor and the Data Preprocessor. (DLC)
A preprocessor for the Urbana coherent-scatter radar
NASA Technical Reports Server (NTRS)
Zendt, F. T.; Bowhill, S. A.
1982-01-01
The design, interfacing, testing, and operation of a preprocessor to increase the altitude and temporal resolution of the present coherent-scatter system are described. This system upgrade requires an increase in the data collection rate. Replacing the present, relatively slow ADC with two high-speed ADCs achieves the increased echo sampling rate desired. To stay within the capabilities of the main computer's I/O and processing rate, the data must be reduced before transfer to the main computer. Thus the preprocessor also coherently integrates the data before transfer.
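Coherent integration of the kind this preprocessor performs in hardware amounts to summing blocks of consecutive complex echo samples for each range gate, which reduces the data rate by the block length while preserving the phase of coherent echoes. A minimal software sketch (the function and block size are illustrative, not the Urbana hardware's actual parameters):

```python
def coherent_integrate(samples, n):
    """Sum each block of n consecutive complex samples.

    A sketch of the data-rate reduction performed before transfer to the
    main computer: coherent signal adds in phase, while incoherent noise
    grows only as sqrt(n), improving SNR as well as reducing volume.
    """
    return [sum(samples[i:i + n]) for i in range(0, len(samples) - n + 1, n)]

# eight raw samples reduced 4:1 before transfer to the main computer
out = coherent_integrate([1+1j, 1+1j, 1+1j, 1+1j, 2+0j, 2+0j, 2+0j, 2+0j], 4)
```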
NASA Astrophysics Data System (ADS)
Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo
2017-06-01
The authors have previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly using STL data, without any commercially provided CAM system. The STL format represents a curved surface geometry as triangles. The preprocessor allows machining robots to be controlled through a zigzag or spiral path calculated directly from STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
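A much-simplified sketch of zigzag path generation over STL geometry: take the XY bounding box of the triangle vertices and alternate the sweep direction on each row. This is purely illustrative; the paper's preprocessor additionally projects the path onto the triangulated surface to produce true CLS data.

```python
def zigzag_path(vertices, step):
    """Generate a zigzag tool path over the XY bounding box of STL vertices.

    Simplified sketch: rows are swept in alternating directions so the tool
    never retracts; a real CLS generator would also compute Z from the
    triangulated surface rather than ignoring it.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    path, y, forward = [], ymin, True
    while y <= ymax:
        path.extend([(xmin, y), (xmax, y)] if forward else [(xmax, y), (xmin, y)])
        y += step
        forward = not forward
    return path

# a flat 2 x 1 rectangle swept with a 0.5 row pitch
pts = zigzag_path([(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)], 0.5)
```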
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they must be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that: for a specified problem, it determines the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor with respect to both the quality (error) and the cost (complexity) of producing the solution. As its first step, the preprocessor makes use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment, together with information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution straightaway through a GA without using knowledge of the system's character, we can consciously do a much better job of producing a solution by using the information generated in this first step. We therefore advocate the use of such a preprocessor on real-world optimization problems, including NP-complete ones, before applying the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
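To make concrete what "the foregoing parameters" are, here is a minimal real-coded GA for unconstrained minimisation; `pop_size`, `crossover_p`, and `mutation_p` are exactly the kind of knobs the paper's preprocessor would choose per problem. This is a generic textbook sketch, not the GA the paper includes.

```python
import random

def simple_ga(f, bounds, pop_size=40, generations=60,
              crossover_p=0.9, mutation_p=0.1, seed=1):
    """Minimal real-coded GA minimising f on an interval (a sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        nxt = pop[:2]                       # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)  # mate among the fittest ten
            child = (a + b) / 2 if rng.random() < crossover_p else a
            if rng.random() < mutation_p:
                child += rng.gauss(0, (hi - lo) * 0.05)
            nxt.append(min(max(child, lo), hi))
        pop = nxt
    return min(pop, key=f)

# minimise (x - 3)^2 over [-10, 10]; the optimum is x = 3
best = simple_ga(lambda x: (x - 3) ** 2, (-10, 10))
```

Even on this trivial problem, halving `pop_size` or raising `mutation_p` changes both the accuracy and the number of function evaluations, which is the quality/cost trade-off the preprocessor is meant to manage.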
JFLIP-JPL FORTRAN language with interval pre-processor
NASA Technical Reports Server (NTRS)
Germann, D. A.; Knowlton, P. H.; Smith, H. L.
1969-01-01
FLIP and TMG are, respectively, a FORTRAN pre-processor and a syntax-directed compiler used to describe the language in which the former is written. They provide those who write in FORTRAN 4 with greater language flexibility and power.
Real-time multi-DSP control of three-phase current-source unity power factor PWM rectifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao Wang; Boon-Teck Ooi
1993-07-01
The design of a real-time multi-DSP controller for a high-quality six-valve three-phase current-source unity-power-factor PWM rectifier is discussed in this paper. With the decoupler preprocessor and the dynamic trilogic PWM trigger scheme, each of the three input currents can be controlled independently. Based on the a-b-c frame system model and fast parallel computer control, the pole-placement control method is implemented successfully to achieve fast response in the ac currents. The low-frequency resonance in the ac filter L-C networks has been damped effectively. The experimental results are obtained from a 1-kVA bipolar-transistor current-source PWM rectifier with a real-time controller using three TMS320C25 DSPs.
CTF Preprocessor User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria; Salko, Robert K.
2016-05-26
This document describes how a user should go about using the CTF pre-processor tool to create an input deck for modeling rod-bundle geometry in CTF. The tool was designed to generate input decks for CTF in a quick and less error-prone manner. The pre-processor is a completely independent utility, written in Fortran, that takes a reduced amount of input from the user. The user must supply only basic information on bundle geometry, such as rod pitch, clad thickness, and the axial locations of spacer grids; the pre-processor takes this basic information and determines the channel placement and connection information to be written to the input deck, which is the most time-consuming and error-prone segment of creating a deck. Creation of the model is also more intuitive, as the user can specify assembly and water-tube placement using visual maps instead of having to place them by determining channel/channel and rod/channel connections. As an example of the benefit of the pre-processor, a quarter-core model that contains 500,000 scalar-mesh cells was read into CTF from an input deck containing 200,000 lines of data. This 200,000-line input deck was produced automatically from a set of pre-processor decks that contained only 300 lines of data.
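The channel-placement bookkeeping described above can be illustrated in a few lines: for a square n × n rod array with a given pitch, coolant subchannels sit on the (n+1) × (n+1) grid of gaps between rods, and each channel connects to the one-to-four rods touching it. This is a geometric sketch of the idea only, not the CTF pre-processor's Fortran data structures.

```python
def build_channels(n_rods, pitch):
    """Place subchannels for an n_rods x n_rods bundle (illustrative sketch).

    Channels sit on the (n_rods+1) x (n_rods+1) grid of gaps between rods;
    each channel records the (column, row) indices of the rods touching it,
    which is the rod/channel connection data a deck must enumerate.
    """
    channels = []
    for j in range(n_rods + 1):
        for i in range(n_rods + 1):
            rods = [(ri, rj) for ri in (i - 1, i) for rj in (j - 1, j)
                    if 0 <= ri < n_rods and 0 <= rj < n_rods]
            channels.append({"center": (i * pitch, j * pitch), "rods": rods})
    return channels

chans = build_channels(2, 1.26)  # 2x2 bundle with a typical PWR pitch in cm
```

Even this toy case yields 9 channels with three distinct connection patterns (corner, edge, interior), which hints at why hand-writing these connections for a quarter core is so error-prone.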
A computer controlled signal preprocessor for laser fringe anemometer applications
NASA Technical Reports Server (NTRS)
Oberle, Lawrence G.
1987-01-01
The operation of most commercially available laser fringe anemometer (LFA) counter-processors assumes that adjustments to the signal processing are made independently of the computer used to reduce the acquired data. Not only does the researcher want a record of these parameters attached to the acquired data, but changes in flow conditions generally require that the settings be changed to improve data quality. Because of this limitation, on-line modification of the data acquisition parameters can be difficult and time consuming. A computer-controlled signal preprocessor has been developed which makes this optimization of the photomultiplier signal a normal part of the data acquisition process. It allows computer control of the filter selection, signal gain, and photomultiplier voltage. The raw signal from the photomultiplier tube is input to the preprocessor which, under the control of a digital computer, filters the signal and amplifies it to an acceptable level. The counter-processor used at Lewis Research Center generates the particle interarrival times, as well as the time-of-flight of the particle through the probe volume. The signal preprocessor allows computer control of the acquisition of these data. Through the preprocessor, the computer can also control the handshaking signals for the interface between itself and the counter-processor. Finally, the signal preprocessor splits the pedestal from the signal before filtering; it also monitors the photomultiplier dc current, sends a signal proportional to this current to the computer through an analog-to-digital converter, and provides an alarm if the current exceeds a predefined maximum. Complete drawings and explanations are provided in the text, as well as a sample interface program for use with the data acquisition software.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Fast food purchasing and access to fast food restaurants: a multilevel analysis of VicLANES.
Thornton, Lukar E; Bentley, Rebecca J; Kavanagh, Anne M
2009-05-27
While previous research on fast food access and purchasing has not found evidence of an association, these studies have had methodological problems including aggregation error, lack of specificity between the exposures and outcomes, and lack of adjustment for potential confounding. In this paper we attempt to address these methodological problems using data from the Victorian Lifestyle and Neighbourhood Environments Study (VicLANES) - a cross-sectional multilevel study conducted within metropolitan Melbourne, Australia in 2003. The VicLANES data used in this analysis included 2547 participants from 49 census collector districts in metropolitan Melbourne, Australia. The outcome of interest was the total frequency of fast food purchased for consumption at home within the previous month (never, monthly and weekly) from five major fast food chains (Red Rooster, McDonalds, Kentucky Fried Chicken, Hungry Jacks and Pizza Hut). Three measures of fast food access were created: density and variety, defined as the number of fast food restaurants and the number of different fast food chains within 3 kilometres of road network distance respectively, and proximity defined as the road network distance to the closest fast food restaurant. Multilevel multinomial models were used to estimate the associations between fast food restaurant access and purchasing with never purchased as the reference category. Models were adjusted for confounders including determinants of demand (attitudes and tastes that influence food purchasing decisions) as well as individual and area socio-economic characteristics. Purchasing fast food on a monthly basis was related to the variety of fast food restaurants (odds ratio 1.13; 95% confidence interval 1.02 - 1.25) after adjusting for individual and area characteristics. Density and proximity were not found to be significant predictors of fast food purchasing after adjustment for individual socio-economic predictors.
Although we found an independent association between fast food purchasing and access to a wider variety of fast food restaurant, density and proximity were not significant predictors. The methods used in our study are an advance on previous analyses.
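For readers unfamiliar with the reported effect measure, the arithmetic of an odds ratio and its Wald confidence interval can be shown with a 2×2 table. The counts below are invented for illustration; they are not the VicLANES data, and the study's actual estimates came from adjusted multilevel multinomial models, not a raw table.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with cells
    a = exposed & purchased,    b = exposed & never,
    c = unexposed & purchased,  d = unexposed & never,
    plus a Wald-style 95% confidence interval on the log scale."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orr) - 1.96 * se)
    hi = math.exp(math.log(orr) + 1.96 * se)
    return orr, (lo, hi)

# hypothetical counts, chosen only to show the computation
orr, ci = odds_ratio(120, 80, 100, 90)
```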
Chung, King; Nelson, Lance; Teske, Melissa
2012-09-01
The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor) and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed to a flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor-eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed to an omnidirectional microphone (OMNI), an omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and an adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processor (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM + NR produced the highest speech recognition scores and the most preferable ratings in all noise fields. Factors requiring attention in the hearing aid-cochlear implant integration process are discussed.
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Gordon, M. F.; Mclaughlin, R. H.; Marshall, R. E.
1975-01-01
The MIDAS (Multivariate Interactive Digital Analysis System) processor is a high-speed processor designed to process multispectral scanner data (from Landsat, EOS, aircraft, etc.) quickly and cost-effectively to meet the requirements of users of remote sensor data, especially from very large areas. MIDAS consists of a fast multipipeline preprocessor and classifier, an interactive color display and color printer, and a medium scale computer system for analysis and control. The system is designed to process data having as many as 16 spectral bands per picture element at rates of 200,000 picture elements per second into as many as 17 classes using a maximum likelihood decision rule.
PREMAQ: A NEW PRE-PROCESSOR TO CMAQ FOR AIR-QUALITY FORECASTING
A new pre-processor to CMAQ (PREMAQ) has been developed as part of the national air-quality forecasting system. PREMAQ combines the functionality of MCIP and parts of SMOKE in a single real-time processor. PREMAQ was specifically designed to link NCEP's Eta model with CMAQ, and...
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.
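The contrast the paper draws can be made concrete with the free-space Green's functions of the two equations: the static (Laplace) field decays smoothly as 1/r, while the dynamic (Helmholtz) field keeps the same 1/r envelope but multiplies it by an oscillatory phase e^{ikr} that carries information away from the source. A small numerical illustration (standard textbook formulas, not code from the paper):

```python
import cmath

def static_green(r):
    """3-D Laplace (static) free-space Green's function: smooth 1/r decay."""
    return 1.0 / (4 * cmath.pi * r)

def helmholtz_green(r, k):
    """3-D Helmholtz (dynamic) Green's function: the same 1/r envelope,
    but with an oscillatory phase e^{ikr} of unit magnitude."""
    return cmath.exp(1j * k * r) / (4 * cmath.pi * r)

g_s = static_green(2.0)
g_h = helmholtz_green(2.0, k=3.0)
```

The magnitudes are identical at any distance; all of the extra "physics" of the dynamic case sits in the phase, which is precisely what makes oscillatory fields harder to compress and drives the design of the multi-level fast multipole algorithm.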
Paul, Debjani; Saias, Laure; Pedinotti, Jean-Cedric; Chabert, Max; Magnifico, Sebastien; Pallandre, Antoine; De Lambert, Bertrand; Houdayer, Claude; Brugg, Bernard; Peyrin, Jean-Michel; Viovy, Jean-Louis
2011-01-01
A broad range of microfluidic applications, ranging from cell culture to protein crystallization, requires multilevel devices with different heights and feature sizes (from micrometers to millimeters). While state-of-the-art direct-writing techniques have been developed for creating complex three-dimensional shapes, replication molding from a multilevel template is still the preferred method for fast prototyping of microfluidic devices in the laboratory. Here, we report on a “dry and wet hybrid” technique to fabricate multilevel replication molds by combining SU-8 lithography with a dry film resist (Ordyl). We show that the two lithography protocols are chemically compatible with each other. Finally, we demonstrate the hybrid technique in two different microfluidic applications: (1) a neuron culture device with compartmentalization of different elements of a neuron and (2) a two-phase (gas-liquid) global micromixer for fast mixing of a small amount of a viscous liquid into a larger volume of a less viscous liquid. PMID:21559239
Fast food purchasing and access to fast food restaurants: a multilevel analysis of VicLANES
Thornton, Lukar E; Bentley, Rebecca J; Kavanagh, Anne M
2009-01-01
Background While previous research on fast food access and purchasing has not found evidence of an association, these studies have had methodological problems including aggregation error, lack of specificity between the exposures and outcomes, and lack of adjustment for potential confounding. In this paper we attempt to address these methodological problems using data from the Victorian Lifestyle and Neighbourhood Environments Study (VicLANES) – a cross-sectional multilevel study conducted within metropolitan Melbourne, Australia in 2003. Methods The VicLANES data used in this analysis included 2547 participants from 49 census collector districts in metropolitan Melbourne, Australia. The outcome of interest was the total frequency of fast food purchased for consumption at home within the previous month (never, monthly and weekly) from five major fast food chains (Red Rooster, McDonalds, Kentucky Fried Chicken, Hungry Jacks and Pizza Hut). Three measures of fast food access were created: density and variety, defined as the number of fast food restaurants and the number of different fast food chains within 3 kilometres of road network distance respectively, and proximity defined as the road network distance to the closest fast food restaurant. Multilevel multinomial models were used to estimate the associations between fast food restaurant access and purchasing with never purchased as the reference category. Models were adjusted for confounders including determinants of demand (attitudes and tastes that influence food purchasing decisions) as well as individual and area socio-economic characteristics. Results Purchasing fast food on a monthly basis was related to the variety of fast food restaurants (odds ratio 1.13; 95% confidence interval 1.02 – 1.25) after adjusting for individual and area characteristics. Density and proximity were not found to be significant predictors of fast food purchasing after adjustment for individual socio-economic predictors. 
Conclusion Although we found an independent association between fast food purchasing and access to a wider variety of fast food restaurant, density and proximity were not significant predictors. The methods used in our study are an advance on previous analyses. PMID:19473503
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizell, D.; Carter, S.
In 1987, ISI's parallel distributed computing research group implemented a prototype sequential simulation system designed for high-level simulation of candidate Strategic Defense Initiative architectures. A main design goal was to produce a simulation system that could incorporate non-trivial, executable representations of battle-management computations on each platform, capable of controlling the actions of that platform throughout the simulation. The term BMA (battle manager abstraction) was used to refer to these simulated battle-management computations. In the first version of the simulator, the BMAs were C++ programs that the authors wrote and manually inserted into the system. Since then, they have designed and implemented KMAC, a high-level language for writing BMAs. The KMAC preprocessor, built using the Unix tools lex and yacc, translates KMAC source programs into C++ programs and passes them on to the C++ compiler. The KMAC preprocessor was incorporated into, and operates under the control of, the simulator's interactive user interface. After the KMAC preprocessor has translated a program into C++, the user interface system invokes the C++ compiler and incorporates the resulting object code into the simulator load module for execution as part of a simulation run. This report describes the KMAC language and its preprocessor. Section 2 provides background material on the design of the simulation system that is necessary for understanding parts of KMAC and the reasons it is structured the way it is. Section 3 describes the syntax and semantics of the language, and Section 4 discusses the design of the preprocessor.
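The source-to-source translation pattern described here (a small domain language compiled down to a host language) can be sketched with a toy translator. The statement syntax and the `battle_mgr.fire` target call below are entirely hypothetical; the real KMAC grammar is built with lex and yacc, not regular expressions.

```python
import re

def translate(src):
    """Toy line-oriented translation in the spirit of the KMAC preprocessor.

    Recognises a made-up 'fire X at Y' statement and emits a C++ call;
    anything it cannot parse is preserved as a comment for the C++ stage.
    """
    out = []
    for line in src.splitlines():
        m = re.match(r"fire\s+(\w+)\s+at\s+(\w+)", line.strip())
        if m:
            out.append(f"battle_mgr.fire({m.group(1)}, {m.group(2)});")
        elif line.strip():
            out.append(f"// unrecognised: {line.strip()}")
    return "\n".join(out)

cpp = translate("fire interceptor7 at target3")
```

The appeal of the approach, as in KMAC, is that the generated C++ is then compiled and linked into the simulator load module like any hand-written code.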
Shift and Scale Invariant Preprocessor.
1981-12-01
perception. Although many transducers are available for converting light, sound, temperature, reflected radar signals, etc., to electrical signals, the ... the required tests, introduce the testing approach, and finally interpret the results. The functional block diagram of the preprocessor, figure 5, is ... position with respect to the observation field is immaterial. In theory the principle is sound, but some practical limitations may degrade predicted
MAGI: a Node.js web service for fast microRNA-Seq analysis in a GPU infrastructure.
Kim, Jihoon; Levy, Eric; Ferbrache, Alex; Stepanowsky, Petra; Farcas, Claudiu; Wang, Shuang; Brunner, Stefan; Bath, Tyler; Wu, Yuan; Ohno-Machado, Lucila
2014-10-01
MAGI is a web service for fast microRNA-Seq data analysis in a graphics processing unit (GPU) infrastructure. Using just a browser, users have access to results as web reports in just a few hours, a >600% end-to-end performance improvement over the state of the art. MAGI's salient features are (i) transfer of large input files in native FASTA with Qualities (FASTQ) format through drag-and-drop operations, (ii) rapid prediction of microRNA target genes leveraging parallel computing with GPU devices, (iii) all-in-one analytics with novel feature extraction, statistical tests for differential expression and diagnostic plot generation for quality control and (iv) interactive visualization and exploration of results in web reports that are readily available for publication. MAGI relies on the Node.js JavaScript framework, along with NVIDIA CUDA C, PHP: Hypertext Preprocessor (PHP), Perl and R. It is freely available at http://magi.ucsd.edu. © The Author 2014. Published by Oxford University Press.
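The FASTQ input format mentioned in the abstract is a simple four-line-per-read text format (header, sequence, separator, quality string). A minimal reader can be sketched as follows; this is a generic FASTQ sketch, not MAGI's actual ingestion code:

```python
def read_fastq(lines):
    """Yield (read_id, sequence, quality) from 4-line FASTQ records."""
    it = iter(lines)
    for header in it:
        seq = next(it).strip()
        next(it)                      # the '+' separator line
        qual = next(it).strip()
        yield header.strip().lstrip("@"), seq, qual

fastq = [
    "@read1", "ACGT", "+", "IIII",
    "@read2", "TTGA", "+", "FFFF",
]
records = list(read_fastq(fastq))
```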
Integrating a Natural Language Message Pre-Processor with UIMA
2008-01-01
Nyberg, Eric; Riebling, Eric; Wang, Richard C.; Frederking, Robert
Language Technologies Institute, Carnegie Mellon University
1980-06-01
LIST OF ILLUSTRATIONS: Fig. 1, Block Diagram of DLMS Voice Recognition System; Fig. 2, Flowchart of Default ... particular are a speech preprocessor and a minicomputer. In the VRS, as shown in the block diagram of Fig. 1, the preprocessor is a TTI model 8040 and ... Data General 6026 Magnetic Tape Unit ... Fig. 2, Flexible Disk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
NASA Astrophysics Data System (ADS)
Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.
2016-03-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
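The smooth kernel splitting described in the abstract can be illustrated for the 1/r Coulomb kernel. The sketch below uses the simple even-power Taylor softening often used to introduce multilevel summation; the paper's B-spline construction is more elaborate:

```python
def split_coulomb(r, a):
    """Split 1/r (r > 0) into a short-range part that vanishes beyond the
    cutoff a and a smooth long-range part, using the even-power Taylor
    softening g(r) = (15/8 - 5/4 s + 3/8 s^2)/a with s = (r/a)^2."""
    s = (r / a) ** 2
    if r >= a:
        g = 1.0 / r          # beyond the cutoff the long-range part is exact
    else:
        g = (15.0 / 8 - 5.0 / 4 * s + 3.0 / 8 * s * s) / a
    return 1.0 / r - g, g    # (short-range, long-range); they sum to 1/r

short, long_range = split_coulomb(0.5, 1.0)
```

The long-range part stays bounded as r approaches 0 (g(0) = 15/(8a)), which is what makes it smooth enough to interpolate from the coarse grids the method uses.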
Preprocessor and postprocessor computer programs for a radial-flow finite-element model
Pucci, A.A.; Pope, D.A.
1987-01-01
Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program (1) generates a triangular finite-element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file for use with the radial-flow model. The postprocessor program is a version of the radial-flow model that was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes.
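A triangular mesh of the kind such a preprocessor generates for a radial-flow problem can be sketched by placing nodes on concentric rings and splitting each quadrilateral between adjacent rings into two triangles. This is a generic construction, not the USGS program's actual algorithm:

```python
import math

def radial_mesh(radii, n_theta):
    """Triangular mesh of an annular region: nodes lie on concentric
    rings; each quad between neighbouring rings becomes two triangles."""
    nodes = [(r * math.cos(2 * math.pi * j / n_theta),
              r * math.sin(2 * math.pi * j / n_theta))
             for r in radii for j in range(n_theta)]
    tris = []
    for i in range(len(radii) - 1):
        for j in range(n_theta):
            a = i * n_theta + j                    # inner ring, this spoke
            b = i * n_theta + (j + 1) % n_theta    # inner ring, next spoke
            c = (i + 1) * n_theta + j              # outer ring, this spoke
            d = (i + 1) * n_theta + (j + 1) % n_theta
            tris.append((a, b, c))
            tris.append((b, d, c))
    return nodes, tris

nodes, tris = radial_mesh([1.0, 2.0, 4.0], 8)
```

Grading the ring radii geometrically, as above, is natural for radial flow, where gradients are steepest near the well.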
MAGNA (Materially and Geometrically Nonlinear Analysis). Part II. Preprocessor Manual.
1982-12-01
AGRID can accept a virtually arbitrary collection of point coordinates which lie on a surface of interest, and generate a regular grid of mesh points ... in the form of a collection of such patches to be translated into an assemblage of biquadratic surface elements (see Subsection 2.1, Figure 2.2 ... using IMPRESS can be converted for use with the present preprocessor by means of the IMPRINT translator. IMPRINT is a collection of conversion routines
Assimilation of GMS-5 satellite winds using nudging method with MM5
NASA Astrophysics Data System (ADS)
Gao, Shanhong; Wu, Zengmao; Yang, Bo
2006-09-01
With the aid of the Meteorological Information Composite and Processing System (MICAPS), satellite wind vectors derived from the Geostationary Meteorological Satellite-5 (GMS-5) and retrieved by the National Satellite Meteorology Center of China (NSMC) can be obtained. Based on the nudging method built into the fifth-generation Mesoscale Model (MM5) of Pennsylvania State University and the National Center for Atmospheric Research, a data preprocessor is developed to convert these satellite wind vectors to the format required by MM5. To examine the data preprocessor and evaluate the impact of satellite winds from GMS-5 on MM5 simulations, a series of numerical experimental forecasts consisting of four typhoon cases in 2002 are designed and implemented. The results show that the preprocessor can process satellite winds smoothly and that MM5 runs successfully with only a little extra computational load while ingesting these winds, and that assimilation of satellite winds by the MM5 nudging method clearly improves typhoon track forecasts but contributes little to typhoon intensity forecasts. The impact of the satellite winds depends heavily upon whether the typhoon bogussing scheme in MM5 is turned on or not. The data preprocessor developed in this paper can not only handle GMS-5 satellite winds but also, with little modification, process winds derived from other geostationary satellites.
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
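As a point of reference for the comparison above, the stationary distribution pi = pi P of a small chain can be computed by plain power iteration, one of the classical fixed-point schemes the multi-level algorithm is benchmarked against (the multi-level method itself builds recursively coarsened chains, which is not sketched here):

```python
import numpy as np

def stationary(P, tol=1e-12, max_iter=10000):
    """Stationary distribution pi = pi P of a row-stochastic matrix P,
    by power iteration from the uniform starting vector."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

# Two-state chain: balance gives pi = (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary(P)
```

Convergence of such one-level iterations degrades on stiff or nearly decomposable chains, which is exactly the regime where the coarsened representations of the multi-level method pay off.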
Individual and area-level socioeconomic associations with fast food purchasing.
Thornton, Lukar E; Bentley, Rebecca J; Kavanagh, Anne M
2011-10-01
It has been suggested that those with lower socioeconomic characteristics would be more likely to seek energy-dense food options such as fast food because of cheaper prices; however, to date the evidence has been inconsistent. This study examines both individual- and area-level socioeconomic characteristics and their independent associations with chain-brand fast food purchasing. Data from the 2003 Victorian Lifestyle and Neighbourhood Environments Study (VicLANES); a multilevel study of 2,547 adults from 49 small-areas in Melbourne, Australia, were used. Multilevel multinomial models adjusted for confounders were used to assess associations between individual socioeconomic position (education, occupation and income) and area socioeconomic characteristics in relation to fast food purchasing from five major fast food chains with outcome categories: never, at least monthly and at least weekly. The study finally assessed whether any potential area-level associations were mediated by fast food access. Increased fast food purchasing was independently associated with lower education, being a blue-collar employee and decreased household income. Results for area-level disadvantage were marginally insignificant after adjustment for individual-level characteristics, although they were suggestive that living in an area with greater levels of disadvantage increased an individual's odds of more frequent fast food purchasing. This effect was further attenuated when measures of fast food restaurant access were included in the models. Independent effects of lower individual-level socioeconomic characteristics and more frequent fast food purchasing for home consumption are demonstrated. Although evidence was suggestive of an independent association with area-level disadvantage this did not reach statistical significance.
Investigation of direct integrated optics modulators. [applicable to data preprocessors
NASA Technical Reports Server (NTRS)
Batchman, T. E.
1980-01-01
Direct modulation techniques applicable to integrated optics data preprocessors were investigated. Several methods of modulating a coherent optical beam by interaction with an incoherent beam were studied. It was decided to investigate photon induced conductivity changes in thin semiconductor cladding layers on optical waveguides. Preliminary calculations indicate significant changes can be produced in the phase shift in a propagating wave when the conductivity is changed by ten percent or more. Experimental devices to verify these predicted phase changes and experiments designed to prove the concept are described.
NASA Technical Reports Server (NTRS)
Verber, C. M.; Kenan, R. P.; Hartman, N. F.; Chapman, C. M.
1980-01-01
A laboratory model of a 16 channel integrated optical data preprocessor was fabricated and tested in response to a need for a device to evaluate the outputs of a set of remote sensors. It does this by accepting the outputs of these sensors, in parallel, as the components of a multidimensional vector descriptive of the data and comparing this vector to one or more reference vectors which are used to classify the data set. The comparison is performed by taking the difference between the signal and reference vectors. The preprocessor is wholly integrated upon the surface of a LiNbO3 single crystal with the exceptions of the source and the detector. He-Ne laser light is coupled in and out of the waveguide by prism couplers. The integrated optical circuit consists of a titanium infused waveguide pattern, electrode structures and grating beam splitters. The waveguide and electrode patterns, by virtue of their complexity, make the vector subtraction device the most complex integrated optical structure fabricated to date.
Deformation analysis of rotary combustion engine housings
NASA Technical Reports Server (NTRS)
Vilmann, Carl
1991-01-01
This analysis of the deformation of rotary combustion engine housings targeted the following objectives: (1) the development and verification of a finite element model of the trochoid housing, (2) the prediction of the stress and deformation fields present within the trochoid housing during operating conditions, and (3) the development of a specialized preprocessor which would shorten the time necessary for mesh generation of a trochoid housing's FEM model from roughly one month to approximately two man hours. Executable finite element models were developed for both the Mazda and the Outboard Marine Corporation trochoid housings. It was also demonstrated that a preprocessor which would hasten the generation of finite element models of a rotary engine was possible to develop. The above objectives are treated in detail in the attached appendices. The first deals with finite element modeling of a Wankel engine center housing, and the second with the development of a preprocessor that generates finite element models of rotary combustion engine center housings. A computer program, designed to generate finite element models of user defined rotary combustion engine center housing geometries, is also included.
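The trochoid bore that such a preprocessor must mesh is a simple parametric curve. Below is a sketch using the common peritrochoid parametrization for Wankel-type housings, where R is the generating radius and e the eccentricity; the numeric values are illustrative only, not either manufacturer's geometry:

```python
import math

def trochoid_points(R, e, n):
    """Sample n points on a peritrochoid housing bore using the common
    parametrization x = e*cos(3t) + R*cos(t), y = e*sin(3t) + R*sin(t)."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append((e * math.cos(3 * t) + R * math.cos(t),
                    e * math.sin(3 * t) + R * math.sin(t)))
    return pts

# Illustrative dimensions (mm): generating radius 105, eccentricity 15.
pts = trochoid_points(R=105.0, e=15.0, n=360)
```

A mesh-generating preprocessor of the kind described would distribute nodes along such a boundary curve and then fill the housing cross-section with elements, reducing the manual modelling effort.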
Composite structural materials
NASA Technical Reports Server (NTRS)
Ansell, G. S.; Loewy, R. G.; Wiberley, S. E.
1979-01-01
A multifaceted program is described in which aeronautical, mechanical, and materials engineers interact to develop composite aircraft structures. Topics covered include: (1) the design of an advanced composite elevator and a proposed spar and rib assembly; (2) optimizing fiber orientation in the vicinity of heavily loaded joints; (3) failure mechanisms and delamination; (4) the construction of an ultralight sailplane; (5) computer-aided design: finite element analysis programs, preprocessor development, and an array preprocessor for SPAR; (6) advanced analysis methods for composite structures; (7) ultrasonic nondestructive testing; (8) physical properties of epoxy resins and composites; (9) fatigue in composite materials; and (10) transverse thermal expansion of carbon/epoxy composites.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
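The core idea, approximating the coupling between well-separated groups by a low-rank interpolative decomposition built from only a few columns of the matrix, can be sketched with SciPy's ID routines on a 1-D stand-in for a well-separated interaction matrix (this illustrates the ID step only, not the MLFMA machinery):

```python
import numpy as np
import scipy.linalg.interpolative as sli

# Coupling matrix between two well-separated point clusters: a smooth
# kernel, hence numerically low-rank.
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, 40)
obs = rng.uniform(10.0, 11.0, 40)        # far from the sources
A = 1.0 / np.abs(obs[:, None] - src[None, :])

k, idx, proj = sli.interp_decomp(A, 1e-8)    # rank chosen to meet tolerance
skel = A[:, idx[:k]]                          # only k skeleton columns needed
B = sli.reconstruct_matrix_from_id(skel, idx, proj)
err = np.abs(A - B).max()
```

Only k of the 40 columns are touched to build the approximation, which mirrors the paper's point that the ID levels can be filled without computing the complete coupling matrix.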
Obesity and the Built Environment: Does the Density of Neighborhood Fast-Food Outlets Matter?
Li, Fuzhong; Harmer, Peter; Cardinal, Bradley J.; Bosworth, Mark; Johnson-Shelton, Deb
2009-01-01
Purpose To examine variation in obesity among older adults relative to the joint influences of density of neighborhood fast-food outlets and residents' behavioral, psychosocial, and sociodemographic characteristics. Design Cross-sectional and multilevel design. Setting Census block groups, used as a proxy for neighborhoods, within the metropolitan region's Urban Growth Boundary in Portland, Oregon. Subjects A total of 1,221 residents (mean age=65 years old) recruited randomly from 120 neighborhoods (48% response rate). Measures A Geographic Information System-based measure of fast-food restaurant density across 120 neighborhoods was created. Residents within the sampled neighborhoods were assessed with respect to their body mass index (BMI), frequency of visits to local fast-food restaurants, fried food consumption, levels of physical activity, self-efficacy of eating fruits and vegetables, household income, and race/ethnicity. Analyses Multilevel logistic regression analyses. Results Significant associations were found between resident-level individual characteristics and the likelihood of being obese (BMI≥30) for neighborhoods with a high-density of fast-food restaurants in comparison to those with a low density: odds ratios [OR] for obesity, 95% confidence interval [CI] were: 1.878 (CI=1.006-3.496) for weekly visits to local fast-food restaurants; 1.792 (CI=1.006-3.190) for not meeting physical activity recommendations; 1.212 (CI=1.057-1.391) for low confidence in eating healthy food; and 8.057 (CI=1.705-38.086) for non-Hispanic black residents. Conclusion Increased density of neighborhood fast-food outlets was associated with unhealthy lifestyles, poorer psychosocial profiles, and increased risk of obesity among older adults. PMID:19149426
Obesity and the built environment: does the density of neighborhood fast-food outlets matter?
Li, Fuzhong; Harmer, Peter; Cardinal, Bradley J; Bosworth, Mark; Johnson-Shelton, Deb
2009-01-01
Examine variation in obesity among older adults relative to the joint influences of density of neighborhood fast food outlets and residents' behavioral, psychosocial, and sociodemographic characteristics. Cross-sectional and multilevel design. Census block groups, used as a proxy for neighborhoods, within the metropolitan region's Urban Growth Boundary in Portland, Oregon. A total of 1221 residents (mean age, 65 years) recruited randomly from 120 neighborhoods (48% response rate). A geographic information system-based measure of fast food restaurant density across 120 neighborhoods was created. Residents within the sampled neighborhoods were assessed with respect to their body mass indices (BMI), frequency of visits to local fast food restaurants, fried food consumption, levels of physical activity, self-efficacy of eating fruits and vegetables, household income, and race/ethnicity. Multilevel logistic regression analyses. Significant associations were found between resident-level individual characteristics and the likelihood of being obese (BMI > or = 30) for neighborhoods with a high-density of fast food restaurants in comparison with those with a low density: odds ratios for obesity, 95% confidence intervals (CI), were 1.878 (CI, 1.006-3.496) for weekly visits to local fast food restaurants; 1.792 (CI, 1.006-3.190) for not meeting physical activity recommendations; 1.212 (CI, 1.057-1.391) for low confidence in eating healthy food; and 8.057 (CI, 1.705-38.086) for non-Hispanic black residents. Increased density of neighborhood fast food outlets was associated with unhealthy lifestyles, poorer psychosocial profiles, and increased risk of obesity among older adults.
A Chain of Modeling Tools for Gas and Aqueous Phase Chemistry
NASA Astrophysics Data System (ADS)
Audiffren, N.; Djouad, R.; Sportisse, B.
Atmospheric chemistry is characterized by large sets of chemical species and reactions, and handling the data required to define such a model is a difficult task. We present in this short article a preprocessor for diphasic models (gas phase and aqueous phase in cloud droplets) named SPACK. The main interest of SPACK is the automatic generation of lumped species related to fast equilibria. We also developed a linear tangent model using the automatic differentiation tool ODYSSEE in order to perform a sensitivity analysis of an atmospheric multi-phase mechanism based on the RADM2 kinetic scheme. Local sensitivity coefficients are computed for two different scenarios. We focus in this study on the sensitivity of the ozone/NOx/HOx system with respect to some aqueous-phase reactions, and we investigate the influence of the reduction in the photolysis rates in the area below the cloud region.
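Local sensitivity coefficients of the kind produced by a tangent linear model can be illustrated on a toy first-order reaction A -> B. Here the derivative of the product concentration with respect to the rate constant is taken by central finite differences, a cheap stand-in for the automatic differentiation the abstract describes:

```python
def integrate(k, c0=1.0, t_end=1.0, dt=1e-4):
    """Toy kinetics A -> B with rate constant k, explicit Euler stepping.
    Returns the product concentration B at t_end."""
    a, b = c0, 0.0
    t = 0.0
    while t < t_end:
        r = k * a * dt       # amount converted this step
        a -= r
        b += r
        t += dt
    return b

def sensitivity(k, eps=1e-6):
    """Local sensitivity dB/dk by central finite differences."""
    return (integrate(k + eps) - integrate(k - eps)) / (2 * eps)

# Analytically B(t) = c0*(1 - exp(-k t)), so dB/dk at k = t = 1 is exp(-1).
s = sensitivity(1.0)
```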
Easy boundary definition for EGUN
NASA Astrophysics Data System (ADS)
Becker, R.
1989-06-01
The relativistic electron optics program EGUN [1] has achieved broad distribution, and many users have asked for an easier way to input boundaries. A preprocessor for EGUN has been developed that accepts polygonal input of boundary points and offers features such as rounding off of corners, shifting and squeezing of electrodes, and simple input of slanted Neumann boundaries. This preprocessor can be used either on a PC that is linked to a mainframe running the FORTRAN version of EGUN, or in connection with the version EGNc, which also runs on a PC. In either case, direct graphic response on the PC greatly facilitates the creation of correct input files for EGUN.
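Rounding off the corners of a polygonal boundary, one of the features listed above, can be sketched with Chaikin corner cutting. This is a generic smoothing scheme chosen for illustration; the abstract does not specify which rounding method the EGUN preprocessor actually uses:

```python
def chaikin(points, iterations=2):
    """Round the corners of an open polyline by Chaikin corner cutting:
    each segment PQ is replaced by the points 1/4 and 3/4 along it,
    while the two endpoints are kept fixed."""
    pts = list(points)
    for _ in range(iterations):
        out = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(pts[-1])
        pts = out
    return pts

# One pass over a right-angle corner at (1, 0).
smooth = chaikin([(0, 0), (1, 0), (1, 1)], iterations=1)
```

Each iteration doubles the segment count and pulls the polyline away from sharp corners, approaching a smooth curve while leaving straight runs essentially unchanged.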
Weighted graph cuts without eigenvectors: a multilevel approach.
Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian
2007-11-01
A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
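The kernel k-means objective at the heart of the equivalence above can be sketched in its unweighted form: cluster assignments are computed entirely from a precomputed kernel matrix, with no eigenvector computation. The paper's cluster weighting and multilevel coarsening are omitted, and the simple deterministic initialization is our own choice for reproducibility:

```python
import numpy as np

def kernel_kmeans(K, k, iters=20):
    """Unweighted kernel k-means on a precomputed kernel matrix K.
    In feature space, the squared distance from point i to cluster c is
    K_ii - 2*mean_j K_ij + mean_{j,l} K_jl, with j, l ranging over c."""
    n = K.shape[0]
    labels = np.arange(n) % k          # simple deterministic init
    diag = np.diag(K)
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            members = labels == c
            if not members.any():
                continue               # empty cluster: leave at inf
            dist[:, c] = (diag
                          - 2 * K[:, members].mean(axis=1)
                          + K[np.ix_(members, members)].mean())
        labels = dist.argmin(axis=1)
    return labels

# Two well-separated groups under a linear kernel K = X X^T.
X = np.array([[0., 0.], [0., 1.], [1., 0.],
              [10., 10.], [10., 11.], [11., 10.]])
labels = kernel_kmeans(X @ X.T, 2)
```

With a linear kernel this reduces exactly to ordinary k-means; a nonlinear kernel in K is what lets the same loop separate clusters that are not linearly separable.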
Three-dimensional elliptic grid generation technique with application to turbomachinery cascades
NASA Technical Reports Server (NTRS)
Chen, S. C.; Schwab, J. R.
1988-01-01
Described is a numerical method for generating 3-D grids for turbomachinery computational fluid dynamic codes. The basic method is general and involves the solution of a quasi-linear elliptic partial differential equation via pointwise relaxation with a local relaxation factor. It allows specification of the grid point distribution on the boundary surfaces, the grid spacing off the boundary surfaces, and the grid orthogonality at the boundary surfaces. A geometry preprocessor constructs the grid point distributions on the boundary surfaces for general turbomachinery cascades. Representative results are shown for a C-grid and an H-grid for a turbine rotor. Two appendices serve as user's manuals for the basic solver and the geometry preprocessor.
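The simplest member of the family of elliptic generators described above is Laplace smoothing: interior grid-point coordinates are relaxed pointwise toward the average of their four neighbours while boundary points stay fixed. This sketches the relaxation idea only; the paper's method solves a quasi-linear equation with control of spacing and orthogonality at the boundaries:

```python
import numpy as np

def laplace_grid(x, y, iters=200, omega=1.0):
    """Relax interior grid coordinates toward the average of their four
    neighbours (Laplace smoothing) with relaxation factor omega.
    Boundary points are held fixed as Dirichlet data."""
    for g in (x, y):
        for _ in range(iters):
            avg = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1]
                          + g[1:-1, :-2] + g[1:-1, 2:])
            g[1:-1, 1:-1] += omega * (avg - g[1:-1, 1:-1])
    return x, y

# Unit square with one deliberately displaced interior point; relaxation
# pulls the grid back to the smooth (here uniform) solution.
n = 5
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
x[2, 2] += 0.3
x, y = laplace_grid(x, y)
```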
Williams, Julianne; Scarborough, Peter; Townsend, Nick; Matthews, Anne; Burgoine, Thomas; Mumtaz, Lorraine; Rayner, Mike
2015-01-01
Researchers and policy-makers are interested in the influence that food retailing around schools may have on child obesity risk. Most previous research comes from North America, uses data aggregated at the school-level and focuses on associations between fast food outlets and school obesity rates. This study examines associations between food retailing and BMI among a large sample of primary school students in Berkshire, England. By controlling for individual, school and home characteristics and stratifying results across the primary school years, we aimed to identify if the food environment around schools had an effect on BMI, independent of socio-economic variables. We measured the densities of fast food outlets and food stores found within schoolchildren's home and school environments using Geographic Information Systems (GIS) and data from local councils. We linked these data to measures from the 2010/11 National Child Measurement Programme and used a cross-classified multi-level approach to examine associations between food retailing and BMI z-scores. Analyses were stratified among Reception (aged 4-5) and Year 6 (aged 10-11) students to measure associations across the primary school years. Our multilevel model had three levels to account for individual (n = 16,956), home neighbourhood (n = 664) and school (n = 268) factors. After controlling for confounders, there were no significant associations between retailing near schools and student BMI, but significant positive associations between fast food outlets in home neighbourhood and BMI z-scores. Year 6 students living in areas with the highest density of fast food outlets had an average BMI z-score that was 0.12 (95% CI: 0.04, 0.20) higher than those living in areas with none. We found little evidence to suggest that food retailing around schools influences student BMI. 
There is some evidence to suggest that fast food outlet densities in a child's home neighbourhood may have an effect on BMI, particularly among girls, but more research is needed to inform effective policies targeting the effects of the retail environment on child obesity.
Williams, Julianne; Scarborough, Peter; Townsend, Nick; Matthews, Anne; Burgoine, Thomas; Mumtaz, Lorraine; Rayner, Mike
2015-01-01
Introduction Researchers and policy-makers are interested in the influence that food retailing around schools may have on child obesity risk. Most previous research comes from North America, uses data aggregated at the school-level and focuses on associations between fast food outlets and school obesity rates. This study examines associations between food retailing and BMI among a large sample of primary school students in Berkshire, England. By controlling for individual, school and home characteristics and stratifying results across the primary school years, we aimed to identify if the food environment around schools had an effect on BMI, independent of socio-economic variables. Methods We measured the densities of fast food outlets and food stores found within schoolchildren’s home and school environments using Geographic Information Systems (GIS) and data from local councils. We linked these data to measures from the 2010/11 National Child Measurement Programme and used a cross-classified multi-level approach to examine associations between food retailing and BMI z-scores. Analyses were stratified among Reception (aged 4-5) and Year 6 (aged 10-11) students to measure associations across the primary school years. Results Our multilevel model had three levels to account for individual (n = 16,956), home neighbourhood (n = 664) and school (n = 268) factors. After controlling for confounders, there were no significant associations between retailing near schools and student BMI, but significant positive associations between fast food outlets in home neighbourhood and BMI z-scores. Year 6 students living in areas with the highest density of fast food outlets had an average BMI z-score that was 0.12 (95% CI: 0.04, 0.20) higher than those living in areas with none. Discussion We found little evidence to suggest that food retailing around schools influences student BMI. 
There is some evidence to suggest that fast food outlet densities in a child’s home neighbourhood may have an effect on BMI, particularly among girls, but more research is needed to inform effective policies targeting the effects of the retail environment on child obesity. PMID:26186610
NASA Astrophysics Data System (ADS)
Hyun, Seung; Kwon, Owoong; Lee, Bom-Yi; Seol, Daehee; Park, Beomjin; Lee, Jae Yong; Lee, Ju Hyun; Kim, Yunseok; Kim, Jin Kon
2016-01-01
Multiple data writing-based multi-level non-volatile memory has gained strong attention for next-generation memory devices to quickly accommodate an extremely large number of data bits because it is capable of storing multiple data bits in a single memory cell at once. However, all previously reported devices have failed to store a large number of data bits due to the macroscale cell size and have not allowed fast access to the stored data due to slow single data writing. Here, we introduce a novel three-dimensional multi-floor cascading polymeric ferroelectric nanostructure, successfully operating as an individual cell. In one cell, each floor has its own piezoresponse and the piezoresponse of one floor can be modulated by the bias voltage applied to the other floor, which means simultaneously written data bits in both floors can be identified. This could achieve multi-level memory through a multiple data writing process. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07377d
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Fast multilevel radiative transfer
NASA Astrophysics Data System (ADS)
Paletou, Frédéric; Léger, Ludovick
2007-01-01
The vast majority of recent advances in the field of numerical radiative transfer relies on approximate operator methods, better known in astrophysics as Accelerated Lambda Iteration (ALI). A class of iterative schemes superior in terms of convergence rate, the Gauss-Seidel and Successive Overrelaxation methods, was therefore quite naturally introduced to radiative transfer by Trujillo Bueno & Fabiani Bendicho (1995) and thoroughly described for the non-LTE two-level atom case. We describe hereafter in detail how such methods can be generalized to non-LTE unpolarised radiation transfer with multilevel atomic models in one-dimensional geometry.
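ALI schemes are Jacobi-type iterations, whereas Gauss-Seidel-type sweeps use each unknown as soon as it is updated. A toy linear-system sketch (the matrix, tolerance, and sizes are illustrative, not from the paper) shows why the Gauss-Seidel sweep converges in far fewer iterations:

```python
# Toy comparison of Jacobi- vs Gauss-Seidel-type iteration counts on a
# diagonally dominant tridiagonal system. Illustrative only.

def jacobi_step(A, b, x):
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    n = len(b)
    x = x[:]
    for i in range(n):  # uses already-updated entries x[0..i-1]
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def iterate(step, A, b, tol=1e-10, max_iter=10000):
    x = [0.0] * len(b)
    for k in range(1, max_iter + 1):
        x_new = step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# 1D Laplacian-like system.
n = 20
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.1
    if i > 0:
        A[i][i - 1] = -1.0
    if i < n - 1:
        A[i][i + 1] = -1.0
b = [1.0] * n

xj, kj = iterate(jacobi_step, A, b)
xg, kg = iterate(gauss_seidel_step, A, b)
print(kj, kg)  # Gauss-Seidel needs far fewer iterations
```

For a system like this the Gauss-Seidel count is roughly half the Jacobi count, mirroring the classical relation that the Gauss-Seidel spectral radius is approximately the square of the Jacobi one.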
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization, is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, at a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations and uniformly smoothing relaxation schemes.
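The cycling between coarser and finer levels can be sketched on the 1D Poisson model problem; the grid sizes, sweep counts, and near-exact coarse solve below are illustrative choices, not Brandt's MLAT algorithm itself:

```python
# Minimal two-grid cycle for -u'' = f on [0,1] with u(0) = u(1) = 0,
# sketching the coarse/fine cycling idea. Parameters are illustrative.

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h**2
    return r

def smooth(u, f, h, sweeps):
    for _ in range(sweeps):                   # Gauss-Seidel relaxation
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h**2 * f[i])
    return u

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h, 3)                    # pre-smooth high frequencies
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    rc = [r[2 * i] for i in range(nc)]        # restrict by injection
    ec = smooth([0.0] * nc, rc, 2 * h, 500)   # near-exact coarse solve
    for i in range(nc):                       # prolong correction linearly
        u[2 * i] += ec[i]
    for i in range(nc - 1):
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    return smooth(u, f, h, 3)                 # post-smooth

n, h = 33, 1.0 / 32
f = [1.0] * n
u = [0.0] * n
for cycle in range(5):
    u = two_grid_cycle(u, f, h)
res = max(abs(x) for x in residual(u, f, h))
print(res)  # residual drops by orders of magnitude in a few cycles
```

The coarse grid absorbs the smooth error components that relaxation alone damps very slowly, which is the mechanism behind the fast, mesh-size-independent convergence described above.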
Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Detrixhe, Miles; Gibou, Frédéric
2016-10-01
The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
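In 1D the fast sweeping idea reduces to alternating causal update directions; the sketch below (grid and slowness field are illustrative, not from the paper) solves |u'| = s with two Gauss-Seidel-like sweeps:

```python
# 1D fast sweeping sketch for the eikonal equation |u'(x)| = s(x) with
# boundary condition u(0) = 0. Grid and slowness are illustrative.

def fast_sweep_1d(s, h, sweeps=2):
    n = len(s)
    u = [float('inf')] * n
    u[0] = 0.0                          # source at x = 0
    for _ in range(sweeps):
        for i in range(1, n):           # left-to-right causal sweep
            u[i] = min(u[i], u[i - 1] + h * s[i])
        for i in range(n - 2, -1, -1):  # right-to-left causal sweep
            u[i] = min(u[i], u[i + 1] + h * s[i])
    return u

h = 0.1
s = [1.0] * 11                          # unit slowness -> u(x) = distance to 0
u = fast_sweep_1d(s, h)
print(u)                                # ~ [0.0, 0.1, 0.2, ..., 1.0]
```

Each sweep direction captures the characteristics travelling that way, which is why a fixed, small number of sweeps suffices; parallelizing these inherently sequential sweeps is the difficulty the hybrid algorithm addresses.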
Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki
2012-01-01
Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
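The final grid-snapping stage can be illustrated with a greedy nearest-free-grid-point assignment; this is a simplified stand-in for the paper's approximate pattern matching, with made-up node names and coordinates:

```python
# Greedy sketch of a "snap to grid" stage: each laid-out node is moved to
# the nearest unoccupied square grid point. Simplified stand-in for the
# paper's approximate pattern matching step; inputs are invented.

def snap_to_grid(positions, spacing):
    taken = set()
    snapped = {}
    for name, (x, y) in positions.items():
        gx, gy = round(x / spacing), round(y / spacing)
        best, r = None, 0
        while best is None:             # spiral outward to a free grid point
            ring = [(gx + dx, gy + dy)
                    for dx in range(-r, r + 1)
                    for dy in range(-r, r + 1)
                    if max(abs(dx), abs(dy)) == r
                    and (gx + dx, gy + dy) not in taken]
            if ring:
                best = min(ring, key=lambda p: (p[0] * spacing - x) ** 2
                                             + (p[1] * spacing - y) ** 2)
            r += 1
        taken.add(best)
        snapped[name] = (best[0] * spacing, best[1] * spacing)
    return snapped

nodes = {"A": (0.1, 0.2), "B": (0.9, 0.1), "C": (1.1, 0.9)}
snapped = snap_to_grid(nodes, spacing=1.0)
print(snapped)
```

Because each node moves only to the nearest available grid point, relative positions (and hence most topological relationships) computed by the fast non-grid layout are largely preserved.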
Aryanto, K Y E; Broekema, A; Langenhuysen, R G A; Oudkerk, M; van Ooijen, P M A
2015-05-01
To develop and test a fast and easy rule-based web environment with optional de-identification of imaging data to facilitate data distribution within a hospital environment. A web interface was built using Hypertext Preprocessor (PHP), an open-source scripting language for web development, and Java, with SQL Server to handle the database. The system allows for the selection of patient data and for de-identifying these when necessary. Using the services provided by the RSNA Clinical Trial Processor (CTP), the selected images were pushed to the appropriate services using a protocol based on the module created for the associated task. Five pipelines, each performing a different task, were set up in the server. In a 75-month period, more than 2,000,000 images were transferred and de-identified in a proper manner, while 20,000,000 images were moved from one node to another without de-identification. While maintaining a high level of security and stability, the proposed system is easy to set up, integrates well with our clinical and research practice, and provides a fast and accurate vendor-neutral process of transferring, de-identifying, and storing DICOM images. Its ability to run different de-identification processes in parallel pipelines is a major advantage in both clinical and research settings.
FBILI method for multi-level line transfer
NASA Astrophysics Data System (ADS)
Kuzmanovska, O.; Atanacković, O.; Faurobert, M.
2017-07-01
Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such a non-linear and non-local problem be as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme characterized by a very high convergence rate without the need to complement it with additional acceleration techniques. In this paper we make the implementation of the FBILI method for multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well-known code MULTI.
Imaging Electron Spectrometer (IES) Electron Preprocessor (EPP) Design
NASA Technical Reports Server (NTRS)
Fennell, J. F.; Osborn, J. V.; Christensen, John L. (Technical Monitor)
2001-01-01
The Aerospace Corporation developed the Electron PreProcessor (EPP) to support the Imaging Electron Spectrometer (IES) that is part of the RAPID experiment on the ESA/NASA CLUSTER mission. The purpose of the EPP is to collect raw data from the IES and perform processing and data compression on it before transferring it to the RAPID microprocessor system for formatting and transmission to the CLUSTER satellite data system. The report provides a short history of the RAPID and CLUSTER programs and describes the EPP design. Four EPP units were fabricated, tested, and delivered for the original CLUSTER program. These were destroyed during a launch failure. Four more EPP units were delivered for the CLUSTER II program. These were successfully launched and are operating nominally on orbit.
Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome
2013-06-01
A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.
Multiplexed Oversampling Digitizer in 65 nm CMOS for Column-Parallel CCD Readout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grace, Carl; Walder, Jean-Pierre; von der Lippe, Henrik
2012-04-10
A digitizer designed to read out column-parallel charge-coupled devices (CCDs) used for high-speed X-ray imaging is presented. The digitizer is included as part of the High-Speed Image Preprocessor with Oversampling (HIPPO) integrated circuit. The digitizer module comprises a multiplexed, oversampling, 12-bit, 80 MS/s pipelined Analog-to-Digital Converter (ADC) and a bank of four fast-settling sample-and-hold amplifiers to instrument four analog channels. The ADC multiplexes and oversamples to reduce its area to allow integration that is pitch-matched to the columns of the CCD. Novel design techniques are used to enable oversampling and multiplexing with a reduced power penalty. The ADC exhibits 188 μV-rms noise, which is less than 1 LSB at the 12-bit level. The prototype is implemented in a commercially available 65 nm CMOS process. The digitizer will lead to a proof-of-principle 2D 10 Gigapixel/s X-ray detector.
The 28-entity IGES test file results using ComputerVision CADDS 4X
NASA Technical Reports Server (NTRS)
Kuan, Anchyi; Shah, Saurin; Smith, Kevin
1987-01-01
The investigation was based on the following steps: (1) Read the 28 Entity IGES (Initial Graphics Exchange Specification) Test File into the CAD data base with the IGES post-processor; (2) Make the modifications to the displayed geometries, which should produce the normalized front view and the drawing entity defined display; (3) Produce the drawing entity defined display of the file as it appears in the CAD system after modification to the geometry; (4) Translate the file back to IGES format using IGES pre-processor; (5) Read the IGES file produced by the pre-processor back into the CAD data base; (6) Produce another drawing entity defined display of the CAD display; and (7) Compare the plots resulting from steps 3 and 6 - they should be identical to each other.
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.
1978-01-01
The cloud-rise preprocessor and multilayer diffusion computer programs were used by NASA in predicting concentrations and dosages downwind from normal and abnormal launches of rocket vehicles. These programs incorporated: (1) the latest data for the heat content and chemistry of rocket exhaust clouds; (2) provision for the automated calculation of surface water pH due to deposition of HCl from precipitation scavenging; (3) provision for automated calculation of concentration and dosage parameters at any level within the vertical grid for which meteorological inputs have been specified; and (4) provision for execution of multiple cases of meteorological data. Procedures used to automatically calculate wind direction shear in a layer were updated.
Bernsdorf, Kamille Almer; Lau, Cathrine Juel; Andreasen, Anne Helms; Toft, Ulla; Lykke, Maja; Glümer, Charlotte
2017-11-01
Literature suggests that people living in areas with a wealth of unhealthy fast food options may show higher levels of fast food intake. Multilevel logistic regression analyses were applied to examine the association between GIS-located fast food outlets (FFOs) and self-reported fast food intake among adults (+ 16 years) in the Capital Region of Denmark (N = 48,305). Accessibility of FFOs was measured both as proximity (distance to nearest FFO) and density (number of FFOs within a 1km network buffer around home). Odds of fast food intake ≥ 1/week increased significantly with increasing FFO density and decreased significantly with increasing distance to the nearest FFO for distances ≤ 4km. For long distances (>4km), odds increased with increasing distance, although this applied only for car owners. Results suggest that Danish health promotion strategies need to consider the contribution of the built environment to unhealthy eating. Copyright © 2017 Elsevier Ltd. All rights reserved.
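The two accessibility measures can be sketched as follows; straight-line distances stand in for the study's road-network distances, and the coordinates and buffer are invented for illustration:

```python
# Sketch of the two fast-food accessibility measures: proximity (distance
# to nearest outlet) and density (outlets within a buffer around home).
# Straight-line distances stand in for network distances; data invented.
import math

def accessibility(home, outlets, buffer_m=1000.0):
    dists = [math.dist(home, o) for o in outlets]
    proximity = min(dists)                          # nearest outlet
    density = sum(1 for d in dists if d <= buffer_m)  # outlets in buffer
    return proximity, density

home = (0.0, 0.0)
outlets = [(300.0, 400.0), (600.0, 800.0), (3000.0, 0.0)]
proximity, density = accessibility(home, outlets)
print(proximity, density)  # 500.0 2
```

In the study these per-person measures become covariates in a multilevel logistic model of the odds of eating fast food at least once a week.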
Zhang, Xiaoyong; van der Lans, Ivo; Dagevos, Hans
2012-01-01
To simultaneously identify consumer segments based on individual-level consumption and community-level food retail environment data and to investigate whether the segments are associated with BMI and dietary knowledge in China. A multilevel latent class cluster model was applied to identify consumer segments based not only on their individual preferences for fast food, salty snack foods, and soft drinks and sugared fruit drinks, but also on the food retail environment at the community level. The data came from the China Health and Nutrition Survey (CHNS) conducted in 2006 and two questionnaires for adults and communities were used. A total sample of 9788 adults living in 218 communities participated in the CHNS. We successfully identified four consumer segments. These four segments were embedded in two types of food retail environment: the saturated food retail environment and the deprived food retail environment. A three-factor solution was found for consumers' dietary knowledge. The four consumer segments were highly associated with consumers' dietary knowledge and a number of sociodemographic variables. The widespread discussion about the relationships between fast-food consumption and overweight/obesity is irrelevant for Chinese segments that do not have access to fast food. Factors that are most associated with segments with a higher BMI are consumers' (incorrect) dietary knowledge, the food retail environment and sociodemographics. The results provide valuable insight for policy interventions on reducing overweight/obesity in China. This study also indicates that despite the breathtaking changes in modern China, the impact of 'obesogenic' environments should not be assessed too strictly from a 'Western' perspective.
Noise Suppression Methods for Robust Speech Processing
1981-04-01
1]. Techniques available for voice processor modification to account for noise contamination are being developed [4]. Preprocessor noise reduction...analysis window function. Principles governing discrete implementation of the transform pair are discussed, and relationships are formalized which specify
Development of a speech autocuer
NASA Astrophysics Data System (ADS)
Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; McCartney, M. L.
1980-12-01
A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.
Zhang, Qi-Jian; Miao, Shi-Feng; Li, Hua; He, Jing-Hui; Li, Na-Jun; Xu, Qing-Feng; Chen, Dong-Yun; Lu, Jian-Mei
2017-06-19
Small-molecule-based multilevel memory devices have attracted increasing attention because of their advantages, such as super-high storage density, fast reading speed, light weight, low energy consumption, and shock resistance. However, the fabrication of small-molecule-based devices always requires expensive vacuum-deposition techniques or high temperatures for spin-coating. Herein, through rational tailoring of a previous molecule, DPCNCANA (4,4'-(6,6'-bis(2-octyl-1,3-dioxo-2,3-dihydro-1H-benzo[de]isoquinolin-6-yl)-9H,9'H-[3,3'-bicarbazole]-9,9'-diyl)dibenzonitrile), a novel bat-shaped A-D-A-type (A-D-A=acceptor-donor-acceptor) symmetric framework has been successfully synthesized and can be dissolved in common solvents at room temperature. Additionally, it has a low-energy bandgap and dense intramolecular stacking in the film state. The solution-processed memory devices exhibited high-performance nonvolatile multilevel data-storage properties with low switching threshold voltages of about -1.3 and -2.7 V, which is beneficial for low power consumption. Our result should prompt the study of highly efficient solution-processed multilevel memory devices in the field of organic electronics. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Zapp, John; Hsa, Chang-Yu; Volakis, John L.
1990-01-01
An extension of a two dimensional formulation is presented for a three dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh, and the fast Fourier transform (FFT) is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. By virtue of the finite element method, the algorithm is applicable to structures of arbitrary material composition. Several improvements to the two dimensional algorithm are also described. These include: (1) modifications for terminating the mesh at circular boundaries without distorting the convolutionality of the boundary integrals; (2) the development of nonproprietary mesh generation routines for two dimensional applications; (3) the development of preprocessors for interfacing SDRC IDEAS with the main algorithm; and (4) the development of post-processing algorithms based on the public domain package GRAFIC to generate two and three dimensional gray level and color field maps.
NASA software specification and evaluation system design, part 1
NASA Technical Reports Server (NTRS)
1976-01-01
The research to develop methods for reducing the effort expended in software development and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools are discussed.
NASA Technical Reports Server (NTRS)
Lakeotes, Christopher D.
1990-01-01
DEVECT (CYBER-205 Devectorizer) is a CYBER-205 FORTRAN source-language-preprocessor computer program that reduces vector statements to standard FORTRAN. In addition, DEVECT has many other standard and optional features simplifying conversion of vector-processor programs for the CYBER 200 to other computers. Written in FORTRAN IV.
Fault diagnosis method based on FFT-RPCA-SVM for Cascaded-Multilevel Inverter.
Wang, Tianzhen; Qi, Jie; Xu, Hao; Wang, Yide; Liu, Lei; Gao, Diju
2016-01-01
Thanks to reduced switch stress, high quality of the load wave, easy packaging, and good extensibility, the cascaded H-bridge multilevel inverter is widely used in wind power systems. To guarantee stable operation of the system, a new fault diagnosis method, based on the Fast Fourier Transform (FFT), Relative Principle Component Analysis (RPCA) and Support Vector Machine (SVM), is proposed for the H-bridge multilevel inverter. To avoid the influence of load variation on fault diagnosis, the output voltages of the inverter are chosen as the fault characteristic signals. To shorten the diagnosis time and improve the diagnostic accuracy, the main features of the fault characteristic signals are extracted by FFT. To further reduce the training time of the SVM, the dimension of the feature vector is reduced by RPCA, which yields a lower-dimensional feature space. The fault classifier is constructed via SVM. An experimental prototype of the inverter was built to test the proposed method. Compared to other fault diagnosis methods, the experimental results demonstrate the high accuracy and efficiency of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
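The FFT feature-extraction stage can be sketched with a pure-Python DFT that reports the amplitudes of the first few harmonics of the output voltage; the waveform, harmonic count, and fault signature below are illustrative, and the RPCA and SVM stages are omitted:

```python
# Sketch of the FFT feature-extraction stage: amplitudes of the first few
# harmonics of an inverter output waveform form the fault feature vector.
# Pure-Python DFT; waveform and "fault" harmonic are illustrative.
import cmath
import math

def harmonic_features(signal, n_harmonics=5):
    n = len(signal)
    feats = []
    for k in range(1, n_harmonics + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        feats.append(2 * abs(coeff) / n)   # amplitude of the k-th harmonic
    return feats

n = 256
healthy = [math.sin(2 * math.pi * t / n) for t in range(n)]
faulty = [math.sin(2 * math.pi * t / n)
          + 0.4 * math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
fh = harmonic_features(healthy)
ff = harmonic_features(faulty)
print(fh)  # ~[1.0, 0, 0, 0, 0]
print(ff)  # ~[1.0, 0, 0.4, 0, 0]
```

A fault that distorts the output waveform shows up as extra harmonic content, giving the downstream classifier a compact, load-independent feature vector.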
Singla, Neeru; Srivastava, Vishal; Mehta, Dalip Singh
2018-05-01
Malaria is a life-threatening infectious blood disease of humans and other animals caused by parasitic protozoans of the genus Plasmodium, especially in developing countries. The gold-standard method for the detection of malaria is microscopic examination of chemically treated blood smears. We developed an automated optical spatial coherence tomographic system with a machine learning approach for fast identification of malaria-infected cells. In this study, 28 samples (15 healthy and 13 containing malaria-infected red blood cells at different stages) were imaged by the developed system and 13 features were extracted. We designed a multilevel ensemble-based classifier for the quantitative prediction of the different stages of the malaria cells. The proposed classifier was evaluated with repeated k-fold cross-validation and achieved a high average accuracy of 97.9% for identifying the late trophozoite stage of malaria-infected cells. Overall, our proposed system and multilevel ensemble model have substantial quantifiable potential to detect the different stages of malaria infection without staining or expert intervention. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
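The evaluation protocol can be sketched as k-fold cross-validation of a small majority-vote ensemble; the features, labels, and threshold "learners" below are synthetic stand-ins, not the paper's 13 optical features or its actual classifier:

```python
# Toy k-fold cross-validation of a majority-vote ensemble of per-feature
# threshold rules. Synthetic data; a simplified stand-in for the paper's
# multilevel ensemble classifier.
import random

def majority_vote(thresholds, x):
    votes = sum(1 for xi, t in zip(x, thresholds) if xi > t)
    return 1 if votes * 2 > len(x) else 0

def kfold_accuracy(X, y, k=5):
    n = len(X)
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        # "train" each per-feature rule: threshold at the training mean
        thresholds = [sum(X[i][j] for i in train) / len(train)
                      for j in range(len(X[0]))]
        correct = sum(majority_vote(thresholds, X[i]) == y[i] for i in fold)
        accs.append(correct / len(fold))
    return sum(accs) / k

rng = random.Random(1)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [1 if sum(x) > 1.5 else 0 for x in X]     # synthetic "infected" label
acc = kfold_accuracy(X, y)
print(acc)
```

Averaging accuracy over folds, as in the paper's repeated cross-validation, gives a less optimistic estimate than a single train/test split.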
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
ERIC Educational Resources Information Center
Computer Symbolic, Inc., Washington, DC.
A pseudo assembly language, PAL, was developed and specified for use as the lowest level in a general, multilevel programming system for the realization of cost-effective, hardware-independent Naval software. The language was developed as part of the system called FIRMS (Fast Iterative Recursive Macro System) and is sufficiently general to allow…
Reilly, Thomas E.; Harbaugh, Arlen W.
1993-01-01
Cylindrical (axisymmetric) flow to a well is an important specialized topic of ground-water hydraulics, and its analysis has been applied by many investigators to determine aquifer properties and to compute heads and flows in the vicinity of the well. A recent modification to the U.S. Geological Survey Modular Three-Dimensional Finite-Difference Ground-Water Flow Model provides the opportunity to simulate axisymmetric flow to a well. The theory involves the conceptualization of a system of concentric shells that are capable of reproducing the large variations in gradient in the vicinity of the well by decreasing their area in the direction of the well. The computer program presented serves as a preprocessor to the U.S. Geological Survey model by creating the input data file needed to implement the axisymmetric conceptualization. Data input requirements to this preprocessor are described, and a comparison with a known analytical solution indicates that the model functions appropriately.
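The concentric-shell idea can be sketched by generating geometrically spaced shell radii, so cell widths shrink toward the well where hydraulic gradients are steep; the well radius, outer radius, and shell count below are illustrative, and this is not the preprocessor's actual input format:

```python
# Sketch of a radial discretization with concentric shells whose radii
# grow geometrically from the well outward, refining the grid near the
# well. Values are illustrative, not the preprocessor's defaults.
import math

def radial_shells(r_well, r_outer, n_shells):
    factor = (r_outer / r_well) ** (1.0 / n_shells)  # geometric growth
    radii = [r_well * factor ** i for i in range(n_shells + 1)]
    # plan-view area of each annular shell
    areas = [math.pi * (radii[i + 1] ** 2 - radii[i] ** 2)
             for i in range(n_shells)]
    return radii, areas

radii, areas = radial_shells(r_well=0.1, r_outer=1000.0, n_shells=40)
print(radii[0], radii[-1])  # 0.1 ... 1000.0 (to rounding)
```

Shell areas shrink toward the well, which is how the conceptualization resolves the steep near-well gradients with a modest number of cells.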
Neural network system for purposeful behavior based on foveal visual preprocessor
NASA Astrophysics Data System (ADS)
Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.
1996-10-01
A biologically plausible model of a system with adaptive behavior in an a priori unknown environment, resistant to impairment, has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns presented as n-dimensional vectors in accordance with some associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The system has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, in spite of impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested to form a kind of visual input to the system.
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
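The dynamic-programming optimization of rate against distortion can be sketched as a knapsack-style DP that picks one candidate quantizer per coefficient to minimize total distortion within a bit budget. This is a simplified stand-in for the patented mechanism, with hypothetical inputs:

```python
def optimal_table(dist, rate, budget):
    # dist[k][q], rate[k][q]: distortion and integer bit cost of candidate
    # quantizer q for DCT coefficient k; minimize total distortion subject
    # to a total bit budget, then backtrack the chosen table
    K = len(dist)
    INF = float("inf")
    dp = [0.0] + [INF] * budget        # dp[b]: best distortion using b bits
    choice = [[None] * (budget + 1) for _ in range(K)]
    for k in range(K):
        new = [INF] * (budget + 1)
        for b in range(budget + 1):
            if dp[b] == INF:
                continue
            for q, (d, r) in enumerate(zip(dist[k], rate[k])):
                if b + r <= budget and dp[b] + d < new[b + r]:
                    new[b + r] = dp[b] + d
                    choice[k][b + r] = (q, b)
        dp = new
    b = min(range(budget + 1), key=lambda i: dp[i])
    best, picks = dp[b], [0] * K
    for k in range(K - 1, -1, -1):     # recover the per-coefficient choices
        q, b = choice[k][b]
        picks[k] = q
    return best, picks

# two coefficients, each with a coarse (cheap) and a fine (costly) quantizer
best, picks = optimal_table(dist=[[10.0, 2.0], [8.0, 1.0]],
                            rate=[[1, 3], [1, 3]], budget=4)
```

With 4 bits available, the DP spends the extra bits where they buy the largest distortion reduction (the first coefficient here).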
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
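A minimal sketch of the sampling step, drawing normal, uniform, or Rayleigh perturbations for each selected input variable. The variable names and the spec format are hypothetical, and MCPRAM's actual input-file format is not reproduced:

```python
import math
import random

def sample(spec, rng):
    # spec = (kind, a, b): normal -> (mean, sigma), uniform -> (low, high),
    # rayleigh -> (sigma, unused)
    kind, a, b = spec
    if kind == "normal":
        return rng.gauss(a, b)
    if kind == "uniform":
        return rng.uniform(a, b)
    if kind == "rayleigh":
        # inverse-CDF sampling: R = sigma * sqrt(-2 ln U), U in (0, 1]
        return a * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
    raise ValueError(kind)

def monte_carlo_inputs(specs, n_trajectories, seed=0):
    # one dict of perturbed input values per trajectory
    rng = random.Random(seed)
    return [{name: sample(s, rng) for name, s in specs.items()}
            for _ in range(n_trajectories)]

# hypothetical input variables for one trajectory study
runs = monte_carlo_inputs(
    {"launch_az": ("normal", 90.0, 0.5),
     "wind_speed": ("rayleigh", 5.0, None),
     "cd_scale": ("uniform", 0.95, 1.05)},
    n_trajectories=100)
```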
Using PAFEC as a preprocessor for COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Gray, W. H.; Baudry, T. V.
1983-01-01
Programs for Automatic Finite Element Calculations (PAFEC) is a general-purpose, three-dimensional linear and nonlinear finite element program (ref. 1). PAFEC's features include free-format input utilizing engineering keywords, powerful mesh-generating facilities, sophisticated database management procedures, and extensive data validation checks. Presented here is a description of a software interface that permits PAFEC to be used as a preprocessor for COSMIC/NASTRAN. This user-friendly software, called PAFCOS, frees the stress analyst from the laborious and error-prone procedure of creating and debugging a rigid-format COSMIC/NASTRAN bulk data deck. By interactively creating and debugging a finite element model with PAFEC, thus taking full advantage of PAFEC's free-format, engineering-keyword-oriented data structure, the analyst can drastically reduce the time spent on model generation. The PAFCOS software automatically converts a PAFEC data structure into a COSMIC/NASTRAN bulk data deck. The capabilities and limitations of the PAFCOS software are fully discussed in this report.
Definition of NASTRAN sets by use of parametric geometry
NASA Technical Reports Server (NTRS)
Baughn, Terry V.; Tiv, Mehran
1989-01-01
Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may make it difficult to identify the group of grid points desired for data recovery or for the application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.
Ensemble Pre-processor, Ensemble Post-processor (EnsPost), Hydrologic Model Output Statistics (HMOS), Ensemble Verification, and hydrologic model data assimilator (OpenDA, http://www.openda.org/joomla/index.php) capabilities to be used within the CHPS environment.
1980-08-01
the sequence threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also relate the fact that the lowest (NT/W) elements can, at worst case
Enhancements to AERMOD’s Building Downwash Algorithms based on Wind Tunnel and Embedded-LES Modeling
This presentation presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD’s building pre-processor to better represent elongated buildings in oblique wind...
An update to CMAQ's Meteorology/Chemistry Interface Processor Version 2 (MCIP2) will be released in August 2004 in conjunction with the next public release of the CMAQ model. MCIP2 is the pre-processor in the CMAQ system that is typically used to perform off-line linkage between...
A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data
NASA Astrophysics Data System (ADS)
Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing
2017-10-01
Electric power dispatching is the center of the whole power system. Over a long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different power professional systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a power grid panoramic data center, effectively meeting power dispatching big data storage needs, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can be a solid foundation for follow-up in-depth analysis of power dispatching big data.
Wu, Weihua; Chen, Shiyu; Zhai, Jiwei; Liu, Xinyi; Lai, Tianshu; Song, Sannian; Song, Zhitang
2017-10-06
Superlattice-like Ge50Te50/Ge8Sb92 (SLL GT/GS) thin film was systematically investigated for multi-level storage and ultra-fast switching phase-change memory applications. In situ resistance measurement indicates that the SLL GT/GS thin film exhibits two distinct resistance steps with elevated temperature. The thermal stability of the amorphous state and the intermediate state was evaluated with Kissinger and Arrhenius plots. The phase-structure evolution revealed that the amorphous SLL GT/GS thin film crystallized into the rhombohedral Sb phase first, then the rhombohedral GeTe phase. The microstructure, layered structure, and interface stability of the SLL GT/GS thin film were confirmed by transmission electron microscopy. The transition speed of crystallization and amorphization was measured by a picosecond laser pump-probe system. The volume variation during crystallization was obtained from x-ray reflectivity. Phase-change memory (PCM) cells based on the SLL GT/GS thin film were fabricated to verify multi-level switching under an electrical pulse as short as 30 ns. These results illustrate that the SLL GT/GS thin film has great potential in high-density and high-speed PCM applications.
NASA Astrophysics Data System (ADS)
Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca
2017-07-01
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily based on coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
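The MLMC idea, estimating the coarsest-level expectation plus expectations of differences between consecutive levels, with coupled realizations so the differences have low variance, can be sketched on a toy geometric Brownian motion discretized by Euler steps. All parameters are illustrative, and the paper's control-variate extension is omitted:

```python
import math
import random

def level_estimate(l, n_samples, rng, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    # E[P_l - P_{l-1}] for P = S_T under Euler with 2**l steps; the coarse
    # path reuses the fine path's Brownian increments (the coupling)
    n_f = 2 ** l
    dt = T / n_f
    total = 0.0
    for _ in range(n_samples):
        s_f = s_c = s0
        dw_pair = 0.0
        for step in range(n_f):
            dw = rng.gauss(0.0, math.sqrt(dt))
            s_f += mu * s_f * dt + sigma * s_f * dw
            dw_pair += dw
            if l > 0 and step % 2 == 1:   # one coarse step per two fine steps
                s_c += mu * s_c * 2 * dt + sigma * s_c * dw_pair
                dw_pair = 0.0
        total += s_f - (s_c if l > 0 else 0.0)
    return total / n_samples

def mlmc(L, n0, seed=0):
    # telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with fewer samples on the finer (lower-variance) levels
    rng = random.Random(seed)
    return sum(level_estimate(l, max(n0 // 4 ** l, 50), rng)
               for l in range(L + 1))

est = mlmc(L=3, n0=4000)   # approximates E[S_T] = exp(mu * T)
```

Because the coupled differences shrink with refinement, most samples are spent on the cheap coarse level, which is the source of the overall work reduction.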
REGIONAL OXIDANT MODEL (ROM) USER'S GUIDE, PART 1: THE ROM PREPROCESSORS
The Regional Oxidant Model (ROM) determines hourly concentrations and fates of ozone and 34 other chemical species over a scale of 1000 km x 1000 km for ozone "episodes" of up to one month's duration. The model structure, based on phenomenological concepts, consists of 3 1/2 layers...
Regional Impacts of extending inorganic and organic cloud chemistry with AQCHEM-KMT
Starting with CMAQ version 5.1, AQCHEM-KMT has been offered as a readily expandable option for cloud chemistry via application of the Kinetic PreProcessor (KPP). AQCHEM-KMT treats kinetic mass transfer between the gas and aqueous phases, ionization, chemical kinetics, droplet sc...
Improvement of the cold forming technology for parts such as longerons
NASA Astrophysics Data System (ADS)
Kashapova, L. R.; Pankratov, D. L.; Bilyalova, A. A.
2014-12-01
As a result of modeling in the LS-PrePost preprocessor of the LS-DYNA program, a range of radii of curvature for the die edge transition (27.5 <= R <= 48) was obtained which allows defect-free stamping of slots for longeron shock absorbers of KAMAZ-5460 tractors.
Structured FORTRAN Preprocessor
NASA Technical Reports Server (NTRS)
Flynn, J. A.; Lawson, C. L.; Van Snyder, W.; Tsitsivas, H. N.
1985-01-01
SFTRAN3 supports structured programming in a FORTRAN environment. The language is intended particularly to support two aspects of structured programming: nestable single-entry control structures, and modularization and top-down organization of code. Code designed and written using these SFTRAN3 facilities has fewer initial errors, is easier to understand, and is less expensive to maintain and modify.
Voice Preprocessor for Digital Voice Applications
1989-09-11
transformers are marketed for use with multitone MODEMs and are acceptable for voice applications. 3. Automatic Gain Control: A...variations of speech spectral tilt to improve the quality of the extracted speech parameters. More importantly, the only analog circuit we use is a
Provision of Information to the Research Staff.
ERIC Educational Resources Information Center
Williams, Martha E.
The Information Sciences section at Illinois Institute of Technology Research Institute (IITRI) is now operating a Computer Search Center (CSC) for handling numerous machine-readable data bases. The computer programs are generalized in the sense that they will handle any incoming data base. This is accomplished by means of a preprocessor system…
A closed-loop multi-level model of glucose homeostasis
Uluseker, Cansu; Simoni, Giulia; Dauriz, Marco; Matone, Alice
2018-01-01
Background: The pathophysiologic processes underlying the regulation of glucose homeostasis are considerably complex at both the cellular and systemic levels. A comprehensive and structured specification for the several layers of abstraction of glucose metabolism is often elusive, an issue currently solvable with the hierarchical description provided by multi-level models. In this study we propose a multi-level closed-loop model of whole-body glucose homeostasis, coupled with the molecular specifications of the insulin signaling cascade in adipocytes, under the experimental conditions of normal glucose regulation and type 2 diabetes. Methodology/Principal findings: The ordinary differential equations of the model, describing the dynamics of glucose and key regulatory hormones and their reciprocal interactions among gut, liver, muscle and adipose tissue, were designed to be embedded in a modular, hierarchical structure. The closed-loop model structure allowed self-sustained simulations to represent an ideal in silico subject that adjusts its own metabolism to the fasting and feeding states, depending on the hormonal context and invariant to circadian fluctuations. The cellular level of the model provided a seamless dynamic description of the molecular mechanisms downstream of the insulin receptor in the adipocytes by accounting for variations in the surrounding metabolic context. Conclusions/Significance: The combination of a multi-level and closed-loop modeling approach provided a fair dynamic description of the core determinants of glucose homeostasis at both cellular and systemic scales. This model architecture is intrinsically open to incorporating supplementary layers of specifications describing further individual components influencing glucose metabolism. PMID:29420588
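As a much-reduced illustration of closed-loop glucose dynamics, Bergman's classic minimal model (not the multi-level model proposed in the study) can be integrated with forward Euler. Parameter values are illustrative:

```python
def simulate(G0, minutes, dt=1.0, Gb=90.0, Ib=10.0,
             p1=0.03, p2=0.02, p3=1e-5):
    # Bergman minimal model, forward-Euler integration:
    #   dG/dt = -p1*(G - Gb) - X*G
    #   dX/dt = -p2*X + p3*(I - Ib)
    # insulin I is held at basal Ib here, so remote insulin action X
    # decays to zero and glucose G relaxes back to its basal value Gb
    G, X, I = G0, 0.0, Ib
    for _ in range(int(minutes / dt)):
        dG = -p1 * (G - Gb) - X * G
        dX = -p2 * X + p3 * (I - Ib)
        G += dt * dG
        X += dt * dX
    return G

# an elevated glucose level relaxes toward basal over ten hours
G_final = simulate(G0=300.0, minutes=600)
```

A closed-loop extension would replace the constant I with an insulin-secretion equation driven by G, closing the feedback loop the abstract describes.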
Nonlinear Analysis of Squeeze Film Dampers Applied to Gas Turbine Helicopter Engines.
1980-11-01
calculate the stability (complex roots) of a multi-level gas turbine with aerodynamic excitation. This program has been applied to the space shuttle...such phenomena as oil film whirl. This paper develops an analysis technique incorporating modal analysis and fast Fourier transform techniques to...USING A SQUEEZE FILM BEARING, by M. A. Simpson, Research Engineer, and L. E. Barrett, Research Assistant Professor, Department of Mechanical and Aerospace
Antiferromagnetic CuMnAs multi-level memory cell with microelectronic compatibility
NASA Astrophysics Data System (ADS)
Olejník, K.; Schuler, V.; Marti, X.; Novák, V.; Kašpar, Z.; Wadley, P.; Campion, R. P.; Edmonds, K. W.; Gallagher, B. L.; Garces, J.; Baumgartner, M.; Gambardella, P.; Jungwirth, T.
2017-05-01
Antiferromagnets offer a unique combination of properties including the radiation and magnetic field hardness, the absence of stray magnetic fields, and the spin-dynamics frequency scale in terahertz. Recent experiments have demonstrated that relativistic spin-orbit torques can provide the means for an efficient electric control of antiferromagnetic moments. Here we show that elementary-shape memory cells fabricated from a single-layer antiferromagnet CuMnAs deposited on a III-V or Si substrate have deterministic multi-level switching characteristics. They allow for counting and recording thousands of input pulses and responding to pulses of lengths downscaled to hundreds of picoseconds. To demonstrate the compatibility with common microelectronic circuitry, we implemented the antiferromagnetic bit cell in a standard printed circuit board managed and powered at ambient conditions by a computer via a USB interface. Our results open a path towards specialized embedded memory-logic applications and ultra-fast components based on antiferromagnets.
This paper describes the development and implementation of an extendable aqueous-phase chemistry option (AQCHEM − KMT(I)) for the Community Multiscale Air Quality (CMAQ) modeling system, version 5.1. Here, the Kinetic PreProcessor (KPP), version 2.2.3, is used t...
Project Planning and Reporting
NASA Technical Reports Server (NTRS)
1982-01-01
The Project Planning Analysis and Reporting System (PPARS) is an automated aid for monitoring and scheduling activities within a project. The PPARS system consists of the PPARS Batch Program, five preprocessor programs, and two post-processor programs. The PPARS Batch Program is a full CPM (Critical Path Method) scheduling program with resource capabilities. It can process networks with up to 10,000 activities.
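The CPM scheduling that such a program performs can be sketched as a forward pass for earliest finish times and a backward pass for slack. The task data below are hypothetical:

```python
def critical_path(durations, deps):
    # durations: {task: time}; deps: {task: [prerequisite tasks]}
    # forward pass over a topological order gives earliest finish times
    order, done = [], set()
    while len(order) < len(durations):
        for t in durations:
            if t not in done and all(d in done for d in deps.get(t, [])):
                order.append(t)
                done.add(t)
    finish = {}
    for t in order:
        start = max((finish[d] for d in deps.get(t, [])), default=0.0)
        finish[t] = start + durations[t]
    makespan = max(finish.values())
    # backward pass: latest allowed finish; zero-slack tasks are critical
    latest = {t: makespan for t in durations}
    for t in reversed(order):
        for d in deps.get(t, []):
            latest[d] = min(latest[d], latest[t] - durations[t])
    critical = [t for t in order if abs(latest[t] - finish[t]) < 1e-9]
    return makespan, critical

# A and B can start at once; C needs both; D needs C
makespan, critical = critical_path(
    {"A": 3, "B": 2, "C": 4, "D": 1},
    {"C": ["A", "B"], "D": ["C"]})
```

Task B has one unit of slack, so the critical path is A, C, D with a makespan of 8.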
Transient Analysis Generator /TAG/ simulates behavior of large class of electrical networks
NASA Technical Reports Server (NTRS)
Thomas, W. J.
1967-01-01
Transient Analysis Generator program simulates both transient and dc steady-state behavior of a large class of electrical networks. It generates a special analysis program for each circuit described in an easily understood and manipulated programming language. A generator or preprocessor and a simulation system make up the TAG system.
REGIONAL OXIDANT MODEL (ROM) USER'S GUIDE, PART 2: THE ROM PREPROCESSOR NETWORK
The Regional Oxidant Model (ROM) determines hourly concentrations and fates of ozone and 34 other chemical species over a scale of 1000 km x 1000 km for ozone "episodes" of up to one month's duration. The model structure, based on phenomenological concepts, consists of 3 1/2 layers...
The Psychometric Toolbox: An Excel Package for Use in Measurement and Psychometrics Courses
ERIC Educational Resources Information Center
Ferrando, Pere J.; Masip-Cabrera, Antoni; Navarro-González, David; Lorenzo-Seva, Urbano
2017-01-01
The Psychometric Toolbox (PT) is a user-friendly, non-commercial package mainly intended to be used for instructional purposes in introductory courses of educational and psychological measurement, psychometrics and statistics. The PT package is organized in six separate modules or sub-programs: Data preprocessor (descriptive analyses and data…
NASA Technical Reports Server (NTRS)
Sainsbury-Carter, J. B.; Conaway, J. H.
1973-01-01
The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6- to 10-week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard-copy plots of the model labeled with GRID or element IDs are also available. General-purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means of generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and, unlike the derivatives obtained with finite-differencing techniques, do not depend on the selection of step size.
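The chain-rule mechanics that AD tools such as ADIFOR implement via source transformation can be illustrated, in miniature, by forward-mode AD with operator overloading (a different mechanism, but the same chain rule, and likewise exact to machine accuracy with no step size):

```python
class Dual:
    # forward-mode AD value: carries f and df/dx together, applying
    # the chain rule operation by operation
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,                    # product rule
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the input with dot = 1 and read the derivative off the output
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3*x) at x = 2  ->  2*2 + 3 = 7
slope = derivative(lambda x: x * x + 3 * x, 2.0)
```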
An investigation for the development of an integrated optical data preprocessor
NASA Technical Reports Server (NTRS)
Verber, C. M.; Vahey, D. W.; Kenan, R. P.; Wood, V. E.; Hartman, N. F.; Chapman, C. M.
1978-01-01
The successful fabrication and demonstration of an integrated optical circuit designed to perform a parallel processing operation by utilizing holographic subtraction to simultaneously compare N analog signal voltages with N predetermined reference voltages is summarized. The device alleviates transmission, storage and processing loads of satellite data systems by performing, at the sensor site, some preprocessing of data taken by remote sensors. Major accomplishments in the fabrication of integrated optics components include: (1) fabrication of the first LiNbO3 waveguide geodesic lens; (2) development of techniques for polishing TIR mirrors on LiNbO3 waveguides; (3) fabrication of high efficiency metal-over-photoresist gratings for waveguide beam splitters; (4) demonstration of high S/N holographic subtraction using waveguide holograms; and (5) development of alignment techniques for fabrication of integrated optics circuits. Important developments made in integrated optics are the discovery and suggested use of holographic self-subtraction in LiNbO3, development of a mathematical description of the operating modes of the preprocessor, and the development of theories for diffraction efficiency and beam quality of two dimensional beam defined gratings.
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment or an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
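One way to encode a number as a bit string "in an approximately continuous way" is a thermometer code, where nearby values share most bit positions. The five-number parametrization below (endpoints plus curvature) is an illustrative choice, not Jaeckel's exact representation:

```python
def thermometer(v, bits, lo, hi):
    # encode a scalar in [lo, hi] as a bit string such that nearby
    # values differ in only a few bit positions
    t = min(max((v - lo) / (hi - lo), 0.0), 1.0)
    k = round(t * bits)
    return [1] * k + [0] * (bits - k)

def encode_segment(x0, y0, x1, y1, curvature, bits=16):
    # five-number description of a segment/arc (endpoints + curvature,
    # a hypothetical choice) packed into one bit string
    code = []
    for v, lo, hi in [(x0, 0, 1), (y0, 0, 1), (x1, 0, 1), (y1, 0, 1),
                      (curvature, -10, 10)]:
        code += thermometer(v, bits, lo, hi)
    return code

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

a = encode_segment(0.2, 0.3, 0.8, 0.9, 0.0)
b = encode_segment(0.21, 0.3, 0.8, 0.9, 0.1)   # a very similar arc
```

Similar arcs land at nearby codewords (small Hamming distance), which is the property an SDM needs from its input encoding.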
A Preprocessor for Modeling Nonpoint Sources in Fractured Media using MODFLOW and MT3D
NASA Astrophysics Data System (ADS)
Mun, Y.; Uchrin, C. G.
2002-05-01
There are a multitude of fractures in the geological structure of fractured media which act as conduits for subsurface fluid flow. The hydraulic properties of this flow are very heterogeneous even within a single unit, and this heterogeneity is very localized. As a result, modeling flow in fractured media is difficult. There are two major approaches to simulating flow and transport in fractured media: the discrete fracture approach and the continuum approach. Precise characteristics such as geometry are required to use the discrete fracture approach; it is, however, difficult to determine the fluid flow through the fractures because of inaccessibility. In the continuum approach, although head distributions can be matched to well data, chemical concentration distributions are hard to match to well-sample concentration observations, because some aquifers are dominated by advective transport and others are likely to serve as reservoirs for immobile solutes. The MODFLOW preprocessor described in this paper has been developed and applied to the Cranberry Lake system in northwestern New Jersey. Cranberry Lake has exhibited eutrophic characteristics for some time owing to nonpoint sources including surface water runoff, leaching from local septic systems, and direct deposition. It has been estimated that 70% of the nutrient loading to the lake flows through fractured media from septic systems. The preprocessor presented in this paper utilizes percolation theory, which is concerned with the existence of 'open paths'. The percolation threshold of a body-centered cubic lattice (3D), a square lattice (2D), and several other percolation numbers are applied to make the model system represent the fractured media. The distribution of hydraulic head within groundwater is simulated by MODFLOW, and the advection-dispersion equation of nitrate transport is solved by MT3D. This study also simulates boron transport as an indicator.
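The notion of percolation through "open paths" can be sketched on a 2D square lattice: occupy sites with probability p and test whether an open path spans from top to bottom. Parameters are illustrative, and the preprocessor's 3D body-centered lattice is not reproduced:

```python
import random
from collections import deque

def spans(grid):
    # BFS from open top-row sites; True if an open path reaches the
    # bottom row (site percolation on a square lattice)
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    queue = deque((0, j) for j in range(n) if grid[0][j])
    for _, j in list(queue):
        seen[0][j] = True
    while queue:
        i, j = queue.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and not seen[a][b]:
                seen[a][b] = True
                queue.append((a, b))
    return False

def spanning_probability(n, p, trials, seed=0):
    # Monte Carlo estimate of the probability that an n-by-n lattice
    # with site-occupation probability p percolates top to bottom
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials
```

Well below the square-lattice site threshold (about 0.593) spanning is rare; well above it, spanning is almost certain, which is the sharp transition percolation theory exploits.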
Fast Multilevel Solvers for a Class of Discrete Fourth Order Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Bin; Chen, Luoping; Hu, Xiaozhe
2016-03-05
In this paper, we study fast iterative solvers for the solution of fourth order parabolic equations discretized by mixed finite element methods. We propose to use a consistent mass matrix in the discretization and a lumped mass matrix to construct efficient preconditioners. We provide eigenvalue analysis for the preconditioned system and estimate the convergence rate of the preconditioned GMRes method. Furthermore, we show that these preconditioners only need to be solved inexactly by optimal multigrid algorithms. Our numerical examples indicate that the proposed preconditioners are very efficient and robust with respect to both discretization parameters and diffusion coefficients. We also investigate the performance of multigrid algorithms with either collective smoothers or distributive smoothers when solving the preconditioner systems.
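The paper's preconditioners are applied inside GMRes and solved by multigrid; as a minimal illustration of the same principle, that a cheap, spectrally close operator accelerates a Krylov iteration, here is diagonally preconditioned conjugate gradients on a small SPD system (a simpler stand-in, not the paper's method):

```python
def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    # preconditioned conjugate gradients for SPD A with a diagonal
    # preconditioner: cheap to apply, spectrally close to A, in the
    # same spirit as using a lumped mass matrix in place of the
    # consistent one
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b, M_inv_diag=[1 / 4.0, 1 / 3.0, 1 / 2.0])
```

In the paper's setting the "inexact" application of the preconditioner (a few multigrid cycles instead of an exact solve) plays the role of the diagonal inverse here.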
NASA Astrophysics Data System (ADS)
Jorba, O.; Pérez, C.; Karsten, K.; Janjic, Z.; Dabdub, D.; Baldasano, J. M.
2009-09-01
This contribution presents the ongoing developments of a new fully on-line chemical weather prediction system for meso to global scale applications. The modeling system consists of a mineral dust module and a gas-phase chemistry module coupled on-line to a unified global-regional atmospheric driver. This approach allows solving small scale processes and their interactions at local to global scales. Its unified environment maintains the consistency of all the physico-chemical processes involved. The atmospheric driver is the NCEP/NMMB numerical weather prediction model (Janjic and Black, 2007) developed at National Centers for Environmental Prediction (NCEP). It represents an evolution of the operational WRF-NMME model extending from meso to global scales. Its unified non-hydrostatic dynamical core supports regional and global simulations. The Barcelona Supercomputing Center is currently designing and implementing a chemistry transport model coupled online with the new global/regional NMMB. The new modeling system is intended to be a powerful tool for research and to provide efficient global and regional chemical weather forecasts at sub-synoptic and mesoscale resolutions. The online coupling of the chemistry follows the approach similar to that of the mineral dust module already coupled to the atmospheric driver, NMMB/BSC-DUST (Pérez et al., 2008). Chemical species are advected and mixed at the corresponding time steps of the meteorological tracers using the same numerical scheme. Advection is eulerian, positive definite and monotone. The chemical mechanism and chemistry solver is based on the Kinetic PreProcessor KPP (Damian et al., 2002) package with the main purpose of maintaining a wide flexibility when configuring the model. Such approach will allow using a simplified chemical mechanism for global applications or a more complete mechanism for high-resolution local or regional studies. 
Moreover, it will permit the implementation of a specific configuration for forecasting applications in regional or global domains. An emission process allows the coupling of different emission inventory sources such as RETRO, EDGAR and GEIA for the global domain, EMEP for Europe and HERMES for Spain. The photolysis scheme is based on the Fast-J scheme, coupled with the physics of each model layer (e.g., aerosols, clouds, absorbers such as ozone), and it considers grid-scale clouds from the atmospheric driver. The dry deposition scheme follows the deposition velocity analogy for gases, enabling the calculation of deposition fluxes from airborne concentrations. No cloud-chemistry processes are included in the system yet (no wet deposition, scavenging or aqueous chemistry). The modeling system developments will be presented and first results of the gas-phase chemistry at global scale will be discussed. REFERENCES: Janjic, Z.I., and Black, T.L., 2007. An ESMF unified model for a broad range of spatial and temporal scales, Geophysical Research Abstracts, 9, 05025. Pérez, C., Haustein, K., Janjic, Z.I., Jorba, O., Baldasano, J.M., Black, T.L., and Nickovic, S., 2008. An online dust model within the meso to global NMMB: current progress and plans. AGU Fall Meeting, San Francisco, A41K-03, 2008. Damian, V., Sandu, A., Damian, M., Potra, F., and Carmichael, G.R., 2002. The kinetic preprocessor KPP - A software environment for solving chemical kinetics. Comp. Chem. Eng., 26, 1567-1579. Sandu, A., and Sander, R., 2006. Technical note: Simulating chemical systems in Fortran90 and Matlab with the Kinetic PreProcessor KPP-2.1. Atmos. Chem. Phys., 6, 187-195.
Anderson, David F; Yuan, Chaojie
2018-04-18
A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.
Body mass index, neighborhood fast food and restaurant concentration, and car ownership.
Inagami, Sanae; Cohen, Deborah A; Brown, Arleen F; Asch, Steven M
2009-09-01
Eating away from home and particularly fast food consumption have been shown to contribute to weight gain. Increased geographic access to fast food outlets and other restaurants may contribute to higher levels of obesity, especially in individuals who rely largely on the local environment for their food purchases. We examined whether fast food and restaurant concentrations are associated with body mass index and whether car ownership might moderate this association. We linked the 2000 US Census data and information on locations of fast food and other restaurants with the Los Angeles Family and Neighborhood Study database, which consists of 2,156 adults sampled from 63 neighborhoods in Los Angeles County. Multilevel modeling was used to estimate associations between body mass index (BMI), fast food and restaurant concentration, and car ownership after adjustment for individual-level factors and socioeconomic characteristics of residential neighborhoods. A high concentration of local restaurants is associated with BMI. Car owners have higher BMIs than non-car owners; however, individuals who do not own cars and reside in areas with a high concentration of fast food outlets have higher BMIs than non-car owners who live in areas with no fast food outlets, approximately 12 lb more (p = 0.02) for an individual with a height of 5 ft. 5 in. Higher restaurant density is associated with higher BMI among local residents. The local fast food environment has a stronger association with BMI for local residents who do not have access to cars.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei
2013-04-01
This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
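The half-space fundamental solution referred to above can be sketched as the free-space Helmholtz kernel plus the contribution of an image source mirrored across the plane. This is a minimal illustration assuming a boundary plane at z = 0, not the modified kernel derived in the paper.

```python
import numpy as np

def g_free(x, y, k):
    """Free-space Helmholtz fundamental solution e^{ikr} / (4*pi*r)."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def g_half(x, y, k, sign=+1):
    """Half-space fundamental solution over the plane z = 0: the free-space
    kernel plus (rigid boundary, sign=+1) or minus (soft boundary, sign=-1)
    the contribution of the image source mirrored across the plane."""
    y_img = (y[0], y[1], -y[2])
    return g_free(x, y, k) + sign * g_free(x, y_img, k)

x = (0.3, -0.2, 0.5)
y = (1.0, 0.4, 0.8)
k = 2.0
print(g_half(x, y, k))
```

Two sanity checks follow directly from the image construction: the kernel is reciprocal in its two arguments, and the soft-boundary variant vanishes for observation points on the plane z = 0.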
FY 1992-1993 RDT&E Descriptive Summaries: DARPA
1991-02-01
combining natural language and user workflow model information. * Determine effectiveness of auditory models as preprocessors for robust speech...for indexing and retrieving design knowledge. * Evaluate ability of message understanding systems to extract crisis-situation data from news wires...energy effects, underwater vehicles, neutrino detection, speech, tailored nuclear weapons, hypervelocity, nanosecond timing, and MAD/RPV. FY 1991 Planned
This paper describes the development and implementation of an extendable aqueous-phase chemistry option (AQCHEM − KMT(I)) for the Community Multiscale Air Quality (CMAQ) modeling system, version 5.1. Here, the Kinetic PreProcessor (KPP), version 2.2.3, is used to generate a Rosen...
UNIX as an environment for producing numerical software
NASA Technical Reports Server (NTRS)
Schryer, N. L.
1978-01-01
The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer is used to illustrate these tools at work.
Sensor Agent Processing Software (SAPS)
2004-05-01
buildings, sewers, and tunnels. The time scale governs many aspects of tactical sensing. In high intensity combat situations forces move within...
Figure 9-2: BAE Systems Sitex00 High Bandwidth... [figure residue: a signal chain of Subscribers, a Preprocessor with xin[256]/xout[256] buffers, a Data File in Memory, a Switch, and a High Pass Filter (IIR)]
The methods described in the report can be used with the modified N.R.C. version of the U.S.G.S. Solute Transport Model to predict the concentration of chemical parameters in a contaminant plume. The two volume report contains program documentation and user's manual. The program ...
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
Multi-Scale Characterization of Orthotropic Microstructures
2008-04-01
D. Valiveti, S. J. Harris, J. Boileau, A domain partitioning based pre-processor for multi-scale modelling of cast aluminium alloys, Modelling and...SUPPLEMENTARY NOTES: Journal article submitted to Modeling and Simulation in Materials Science and Engineering. PAO Case Number: WPAFB 08-3362...element for characterization or simulation to avoid misleading predictions of macroscopic deformation, fracture, or transport behavior. Likewise
Minitrack tracking function description, volume 2
NASA Technical Reports Server (NTRS)
Englar, T. S.; Mango, S. A.; Roettcher, C. A.; Watters, D. L.
1973-01-01
The minitrack tracking function is described and specific operations are identified. The subjects discussed are: (1) preprocessor listing, (2) minitrack hardware, (3) system calibration, (4) quadratic listing, and (5) quadratic flow diagram. Detailed information is provided on the construction of the tracking system and its operation. The calibration procedures are supported by mathematical models to show the application of the computer programs.
Lee, Geunho; Lee, Hyun Beom; Jung, Byung Hwa; Nam, Hojung
2017-07-01
Mass spectrometry (MS) data are used to analyze biological phenomena based on chemical species. However, these data often contain unexpected duplicate records and missing values due to technical or biological factors. These 'dirty data' problems increase the difficulty of performing MS analyses because they lead to performance degradation when statistical or machine-learning tests are applied to the data. Thus, we have developed the missing values preprocessor (mvp), an open-source software package for preprocessing data that might include duplicate records and missing values. mvp uses the property of MS data in which identical chemical species present the same or similar values for key identifiers, such as the mass-to-charge ratio and intensity signal, and forms cliques via graph theory to process dirty data. We evaluated the validity of the mvp process via quantitative and qualitative analyses and compared the results from a statistical test that analyzed the original and mvp-applied data. This analysis showed that using mvp reduces problems associated with duplicate records and missing values. We also examined the effects of using unprocessed data in statistical tests and the improved statistical test results obtained with data preprocessed using mvp.
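A much-simplified sketch of the grouping step might look like the following. It uses connected components of a closeness graph rather than true clique detection, and the tolerance and field names are hypothetical, not taken from mvp.

```python
import numpy as np

def merge_duplicates(mz, intensity, mz_tol=0.01):
    """Group records whose mass-to-charge values agree within mz_tol
    (connected components of the closeness graph, a simplification of the
    clique-based grouping described in the abstract), then merge each group:
    mean m/z, and mean intensity ignoring missing (NaN) values."""
    n = len(mz)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            if abs(mz[i] - mz[j]) <= mz_tol:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    merged = []
    for idx in groups.values():
        g_mz = float(np.mean([mz[i] for i in idx]))
        vals = [intensity[i] for i in idx if not np.isnan(intensity[i])]
        g_int = float(np.mean(vals)) if vals else float("nan")
        merged.append((g_mz, g_int))
    return sorted(merged)

records_mz = [100.001, 100.002, 250.500, 250.501, 300.000]
records_int = [5.0, float("nan"), 7.0, 9.0, 4.0]
print(merge_duplicates(records_mz, records_int))
```

Here the two near-identical m/z pairs collapse into single records, and the missing intensity in the first pair is covered by the surviving measurement from its group.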
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n), nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^(4/3)). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
Chia, Jia Li Pauline; Fuller-Tyszkiewicz, Matthew; Buck, Kimberly; Chamari, Karim; Richardson, Ben; Krug, Isabel
2018-04-23
Dietary restriction contributes to disordered eating (DE) behaviors and associated cognitions. However, it is unclear how these outcomes are impacted by dietary restriction for religious purposes, such as fasting observed by Muslims during Ramadan. Using ecological momentary assessment, this study assessed the impact of Ramadan fasting on DE behaviors and correlates. Muslim participants fasting during Ramadan (n = 28) and a control group of non-fasting participants (n = 74) completed baseline measures assessing demographic characteristics and eating pathology. A mobile phone application then prompted participants six times per day for seven days to self-report on dietary restriction efforts, body satisfaction, temptation to eat unhealthily, feelings of guilt or shame following food, and DE behaviors including bingeing, vomiting, and other purging behaviors (use of laxatives, diuretics, or diet pills). After controlling for eating pathology, multilevel modeling indicated that, as expected, the Ramadan fasting group spent significantly more time restricting food intake than the non-fasting group. The Ramadan fasting group also experienced significantly greater temptation to eat unhealthily than their non-fasting counterparts. However, this difference disappeared once models were adjusted for differences in time spent restricting food intake. There were no other significant differences between the groups on any DE variables. These findings suggest that while dietary restriction for health or appearance-related reasons is a known contributor to DE, dietary restriction for religious purposes, such as that observed during the practice of Ramadan, may not confer increased risk of DE symptoms. Copyright © 2018 Elsevier Ltd. All rights reserved.
Association between fast food purchasing and the local food environment.
Thornton, Lukar E; Kavanagh, A M
2012-12-03
In this study, an instrument was created to measure the healthy and unhealthy characteristics of food environments and investigate associations between the whole of the food environment and fast food consumption. In consultation with other academic researchers in this field, food stores were categorised to either healthy or unhealthy and weighted (between +10 and -10) by their likely contribution to healthy/unhealthy eating practices. A healthy and unhealthy food environment score (FES) was created using these weightings. Using a cross-sectional study design, multilevel multinomial regression was used to estimate the effects of the whole food environment on the fast food purchasing habits of 2547 individuals. Respondents in areas with the highest tertile of the healthy FES had a lower likelihood of purchasing fast food both infrequently and frequently compared with respondents who never purchased, however only infrequent purchasing remained significant when simultaneously modelled with the unhealthy FES (odds ratio (OR) 0.52; 95% confidence interval (CI) 0.32-0.83). Although a lower likelihood of frequent fast food purchasing was also associated with living in the highest tertile of the unhealthy FES, no association remained once the healthy FES was included in the models. In our binary models, respondents living in areas with a higher unhealthy FES than healthy FES were more likely to purchase fast food infrequently (OR 1.35; 95% CI 1.00-1.82) however no association was found for frequent purchasing. Our study provides some evidence to suggest that healthier food environments may discourage fast food purchasing.
Xu, Hongwei; Short, Susan E; Liu, Tao
2013-01-01
Background Mixed findings have been reported on the association between Western fast-food restaurants and body weight status. Results vary across study contexts and are sensitive to the samples, measures and methods used. Most studies have failed to examine the temporally dynamic associations between community exposure to fast-food restaurants and weight changes. Methods Bayesian hierarchical regressions are used to model changes in body mass index, waist-to-height ratio (WHtR) and waist-to-hip ratio (WHpR) as a function of changes in Western fast-food restaurants in 216 communities for more than 9000 Chinese adults followed up multiple times between 2000 and 2009. Results Number of Western fast-food restaurants is positively associated with subsequent increases in WHtR and WHpR among rural population. More fast-food restaurants are positively associated with a future increase in WHpR for urban women. Increased availability of fast food between two waves is related to increased WHtR for urban men over the same period. A past increase in number of fast-food restaurants is associated with subsequent increases in WHtR and WHpR for rural population. Conclusions The associations between community exposure to Western fast food and weight changes are temporally dynamic rather than static. Improved measures of exposure to community environment are needed to achieve more precise estimates and better understanding of these relationships. In light of the findings in this study and China’s rapid economic growth, further investigation and increased public health monitoring is warranted since Western fast food is likely to be more accessible and affordable in the near future. PMID:22923769
NASA Technical Reports Server (NTRS)
Hirt, E. F.; Fox, G. L.
1982-01-01
Two specific NASTRAN preprocessors and postprocessors are examined. A postprocessor for dynamic analysis and a graphical interactive package for model generation and review of results are presented. A computer program that provides response spectrum analysis capability based on data from a NASTRAN finite element model is described, and the GIFTS system, a graphic processor to augment NASTRAN, is introduced.
DefEX: Hands-On Cyber Defense Exercise for Undergraduate Students
2011-07-01
Injection, and 4) File Upload. Next, the students patched the associated flawed Perl and PHP Hypertext Preprocessor (PHP) code. Finally, students...underlying script. The Zora XSS vulnerability existed in a PHP file that echoed unfiltered user input back to the screen. To eliminate the...vulnerability, students filtered the input using the PHP htmlentities function and retested the code. The htmlentities function translates certain ambiguous
Field spectrometer (S191H) preprocessor tape quality test program design document
NASA Technical Reports Server (NTRS)
Campbell, H. M.
1976-01-01
Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historical and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and the wavelength calibration period, and the results are printed out and recorded on a historical file tape.
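The tolerance-comparison step described above can be sketched generically; the parameter names and limits below are illustrative and are not taken from QA191H.

```python
def check_tolerances(samples, limits):
    """Compare each parameter sample against its (low, high) tolerance
    limits and report out-of-tolerance parameters as
    name -> (value, low, high). Names here are illustrative only."""
    failures = {}
    for name, value in samples.items():
        low, high = limits[name]
        if not (low <= value <= high):
            failures[name] = (value, low, high)
    return failures

limits = {"detector_temp_K": (180.0, 220.0), "lamp_current_mA": (40.0, 60.0)}
samples = {"detector_temp_K": 231.5, "lamp_current_mA": 52.0}
print(check_tolerances(samples, limits))
```

Only the out-of-tolerance parameter is flagged, mirroring the kind of printed exception report such a quality-assurance pass produces.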
NASA Astrophysics Data System (ADS)
Vara Vela, A. L.; Muñoz, A.; Lomas, A., Sr.; González, C. M.; Calderon, M. G.; Andrade, M. D. F.
2017-12-01
The Weather Research and Forecasting with Chemistry (WRF-Chem) community model has been widely used for the study of pollutant transport and the formation of secondary pollutants, as well as for the assessment of air quality policy implementation. A key factor in improving WRF-Chem air quality simulations over urban areas is the representation of anthropogenic emission sources. Several tools are available to assist users in creating their own emissions based on global emissions information (e.g. anthro_emiss, prep_chem_src); however, there is at present no single tool that will construct local emissions input datasets for any particular domain. Because the official emissions pre-processor (emiss_v03) is designed to work with domains located over North America, this work presents the Another Assimilation System for WRF-Chem (AAS4WRF), an NCL-based mass-conserving emissions pre-processor designed to create WRF-Chem-ready emissions files from local inventories on a lat/lon projection. AAS4WRF is appropriate for scaling emission rates from both surface and elevated sources, providing users an alternative way to assimilate their emissions into WRF-Chem. Since it was first successfully tested for the city of Lima, Peru in 2014 (managed by SENAMHI, the National Weather Service of the country), several air quality modelling studies have applied this utility to convert their emissions to the form required by WRF-Chem. Two case studies performed in the metropolitan areas of Sao Paulo and Manizales, in Brazil and Colombia respectively, are presented here in order to analyse the influence of using local or global emission inventories on the representation of regulated air pollutants such as O3 and PM2.5. Although AAS4WRF works with local emissions information at the moment, further work is being conducted to make it compatible with global/regional emissions data file formats. The tool is freely available upon request to the corresponding author.
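The mass-conserving property can be illustrated with a toy regridding step: splitting each coarse cell's emission total equally among its sub-cells preserves the domain total. This is only a uniform-refinement sketch, not the lat/lon remapping performed by AAS4WRF.

```python
import numpy as np

def conservative_refine(emissions, factor):
    """Redistribute per-cell emission totals (e.g. kg per cell) from a
    coarse grid onto a grid refined by `factor` in each direction,
    splitting each cell's mass equally among its sub-cells so the
    domain total is conserved."""
    fine = np.repeat(np.repeat(emissions, factor, axis=0), factor, axis=1)
    return fine / (factor * factor)

coarse = np.array([[4.0, 8.0], [0.0, 2.0]])
fine = conservative_refine(coarse, 2)
print(fine.sum(), coarse.sum())  # totals match: mass is conserved
```

A real pre-processor must additionally handle non-nested source and target cells (weighting by overlap area), but the conservation check is the same: the summed mass before and after remapping must agree.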
Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This paper employs the Multilevel Fast Multipole Method (MLFMM) feature of a commercial electromagnetic tool to model the fairing electromagnetic environment in the presence of an internal transmitter. This work is an extension of the perfect electric conductor model that was used to represent the bare aluminum internal fairing cavity. This fairing model includes typical acoustic blanketing commonly used in vehicle fairings. Representative material models within FEKO were successfully used to simulate the test case.
Svastisalee, Chalida; Pagh Pedersen, Trine; Schipperijn, Jasper; Jørgensen, Sanne Ellegaard; Holstein, Bjørn E; Krølner, Rikke
2016-02-01
We examined associations between fast-food intake and perceived and objective fast-food outlet exposure. Information from the Health Behaviours in School-aged Children Study was linked to fast-food outlets in seventy-five school neighbourhoods. We used multivariate multilevel logistic regression analyses to examine associations between at least weekly fast-food intake and perceived and objective fast-food outlet measures. Data represent 4642 adolescents (aged 11-15 years) in Denmark. Boys reporting two or more fast-food outlets had 34% higher odds of consuming fast food at least weekly. We detected higher odds of at least weekly fast-food intake among 15-year-old 9th graders (OR_all = 1.74; 95% CI 1.40, 2.18; OR_boys = 2.20; 95% CI 1.66, 2.91; OR_girls = 1.41; 95% CI 1.03, 1.92), Danish speakers (OR_all = 2.32; 95% CI 1.68, 3.19; OR_boys = 2.58; 95% CI 1.69, 3.93; OR_girls = 2.37; 95% CI 1.46, 3.84) and those travelling 15 min or less to school (OR_all = 1.21; 95% CI 1.00, 1.46; OR_girls = 1.44; 95% CI 1.08, 1.93) compared with 11-year-old 5th graders, non-Danish speakers and those with longer travel times. Boys from middle- (OR = 1.28; 95% CI 1.00, 1.65) and girls from low-income families (OR = 1.46; 95% CI 1.05, 2.04) had higher odds of at least weekly fast-food intake compared with those from high-income backgrounds. Girls attending schools with canteens (OR = 1.47; 95% CI 1.00, 2.15) had higher odds of at least weekly fast-food intake than girls at schools without canteens. The present study demonstrates that perceived food outlets may influence fast-food intake in boys, while proximity influences intake in girls. Public health planning could target food environments with an emphasis on a better understanding of how adolescents use local resources.
DISPLAY3D. A Graphics Preprocessor for CHIEF
1990-12-27
graphics devices, the user may write a graphics program that can read DISPLAY3D output files, or use one of the commercial plotting packages...[Fortran source residue: COMMON/NBPRTC and COMMON/NBPRTS declarations, CHARACTER*3 SYMTPT, DIMENSION statements, and a filename prompt: WRITE (6,1) 'Enter filename used in CID or]
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.
1987-01-01
A methodology for writing parallel programs for shared memory multiprocessors has been formalized as an extension to the Fortran language and implemented as a macro preprocessor. The extended language is known as the Force, and this manual describes how to write Force programs and execute them on the Flexible Computer Corporation Flex/32, the Encore Multimax and the Sequent Balance computers. The parallel extension macros are described in detail, but knowledge of Fortran is assumed.
NASA Technical Reports Server (NTRS)
Mcmillan, J. D.
1976-01-01
A description of the input and output files and the data control cards for the altimeter residual computation (ARC) computer program is given. The program acts as the final altimeter preprocessor before the data is reformatted for external users. It calculates all parameters necessary for the computation of the altimeter observation residuals and the sea surface height. Mathematical models used for calculating tropospheric refraction, geoid height, tide height, ephemeris, and orbit geometry are described.
Linear quadratic Gaussian and feedforward controllers for the DSS-13 antenna
NASA Technical Reports Server (NTRS)
Gawronski, W. K.; Racho, C. S.; Mellstrom, J. A.
1994-01-01
The controller development and the tracking performance evaluation for the DSS-13 antenna are presented. A trajectory preprocessor, linear quadratic Gaussian (LQG) controller, feedforward controller, and their combination were designed, built, analyzed, and tested. The antenna exhibits nonlinear behavior when the input to the antenna and/or the derivative of this input exceeds the imposed limits; for slewing and acquisition commands, these limits are typically violated. A trajectory preprocessor was designed to ensure that the antenna behaves linearly, thus preventing nonlinear limit cycling. The estimator model for the LQG controller was identified from data obtained in the field test. Based on an LQG balanced representation, a reduced-order LQG controller was obtained. The feedforward controller and the combination of the LQG and feedforward controllers were also investigated. The performance of the controllers was evaluated with respect to the tracking errors (due to following a trajectory) and the disturbance errors (due to the disturbances acting on the antenna). The LQG controller has good disturbance rejection properties and satisfactory tracking errors. The feedforward controller has small tracking errors but poor disturbance rejection properties. The combined LQG and feedforward controller exhibits small tracking errors as well as good disturbance rejection properties. However, the cost of this performance is the complexity of the controller.
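The role of such a trajectory preprocessor can be sketched as a rate-and-acceleration limiter applied to the commanded position sequence; the limits and time step below are illustrative, not the DSS-13 values.

```python
def preprocess_trajectory(cmd, v_max, a_max, dt):
    """Shape a commanded position sequence so that per-step velocity and
    acceleration stay inside |v| <= v_max and |a| <= a_max, keeping the
    downstream (linear) controller out of its nonlinear limiting regime.
    Limits and time step are illustrative."""
    out = [cmd[0]]
    v = 0.0
    for target in cmd[1:]:
        # velocity needed to reach the target this step, then clamp
        # the acceleration and the rate
        v_want = (target - out[-1]) / dt
        dv = max(-a_max * dt, min(a_max * dt, v_want - v))
        v = max(-v_max, min(v_max, v + dv))
        out.append(out[-1] + v * dt)
    return out

cmd = [0.0] * 3 + [10.0] * 20          # a step command (slew request)
shaped = preprocess_trajectory(cmd, v_max=2.0, a_max=4.0, dt=0.5)
```

The raw step command would violate both limits; the shaped trajectory ramps up, slews at the velocity limit, and settles on the target without overshoot, so the plant input and its derivative stay within bounds.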
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class classification SVMs (Support Vector Machines), based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary-tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the samples corresponding to each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are first extracted from those clusters that contain only one type of sample. For those that contain two types of samples, cluster numbers for their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample sets formed by the integration of the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce sample numbers and effectively improve learning efficiency.
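The sample-reduction step, replacing each cluster of training samples with its central point, can be sketched with a plain k-means pass. This shows only the reduction idea, not the paper's binary-tree construction, and all parameters are illustrative.

```python
import numpy as np

def kmeans_centroids(X, k, iters=20, seed=0):
    """Plain k-means: return k centroids that summarize the samples in X,
    so a sub-classifier can train on k points instead of len(X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(1)
# two well-separated sample clouds standing in for two classes
X = np.vstack([rng.normal(0.0, 0.3, (200, 2)), rng.normal(5.0, 0.3, (200, 2))])
reduced = kmeans_centroids(X, k=8)
print(X.shape, reduced.shape)  # 400 samples summarized by 8 centroids
```

Training each node's SVM on such centroids instead of the full sample set is what drives the reported reduction in sample numbers and learning time.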
NASA Technical Reports Server (NTRS)
Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.
1995-01-01
In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.
Barriers to avoiding fast-food consumption in an environment supportive of unhealthy eating.
Thornton, Lukar E; Jeffery, Robert W; Crawford, David A
2013-12-01
To investigate factors (ability, motivation and the environment) that act as barriers to limiting fast-food consumption in women who live in an environment that is supportive of poor eating habits. Cross-sectional study using self-reports of individual-level data and objectively measured environmental data. Multilevel logistic regression was used to assess factors associated with frequency of fast-food consumption. Socio-economically disadvantaged areas in metropolitan Melbourne, Australia. Women (n 932) from thirty-two socio-economically disadvantaged neighbourhoods living within 3 km of six or more fast-food restaurants. Women were randomly sampled in 2007–2008 as part of baseline data collection for the Resilience for Eating and Activity Despite Inequality (READI) study. Consuming low amounts of fast food was less likely in women with lower perceived ability to shop for and cook healthy foods, lower frequency of family dining, lower family support for healthy eating, more women acquaintances who eat fast food regularly and who lived further from the nearest supermarket. When modelled with the other significant factors, a lower perceived shopping ability, mid levels of family support and living further from the nearest supermarket remained significant. Among those who did not perceive fruits and vegetables to be of high quality, less frequent fast-food consumption was further reduced for those with the lowest confidence in their shopping ability. Interventions designed to improve women's ability and opportunities to shop for healthy foods may be of value in making those who live in high-risk environments better able to eat healthily.
Thornton, Lukar E; Olstad, Dana Lee; Cerin, Ester; Ball, Kylie
2017-01-01
Objectives The residential neighbourhood fast-food environment has the potential to lead to increased levels of obesity by providing opportunities for residents to consume energy-dense products. This longitudinal study aimed to examine whether change in body mass index (BMI) differed dependent on major chain fast-food outlet availability among women residing in disadvantaged neighbourhoods. Setting Eighty disadvantaged neighbourhoods in Victoria, Australia. Participants Sample of 882 women aged 18–46 years at baseline (wave I: 2007/2008) who remained at the same residential location at all three waves (wave II: 2010/2011; wave III: 2012/2013) of the Resilience for Eating and Activity Despite Inequality study. Primary outcome BMI based on self-reported height and weight at each wave. Results There was no evidence of an interaction between time and the number of major chain fast-food outlets within 2 (p=0.88), 3 (p=0.66) or 5 km (p=0.24) in the multilevel models of BMI. Furthermore, there was no evidence of an interaction between time and change in availability at any distance and BMI. Conclusions Change in BMI was not found to differ by residential major chain fast-food outlet availability among Victorian women residing in disadvantaged neighbourhoods. It may be that exposure to fast-food outlets around other locations regularly visited influence change in BMI. Future research needs to consider what environments are the key sources for accessing and consuming fast food and how these relate to BMI and obesity risk. PMID:29042381
Association between fast food purchasing and the local food environment
Thornton, Lukar E; Kavanagh, A M
2012-01-01
Objective: In this study, an instrument was created to measure the healthy and unhealthy characteristics of food environments and investigate associations between the whole of the food environment and fast food consumption. Design and subjects: In consultation with other academic researchers in this field, food stores were categorised as either healthy or unhealthy and weighted (between +10 and −10) by their likely contribution to healthy/unhealthy eating practices. A healthy and an unhealthy food environment score (FES) were created using these weightings. Using a cross-sectional study design, multilevel multinomial regression was used to estimate the effects of the whole food environment on the fast food purchasing habits of 2547 individuals. Results: Respondents in areas in the highest tertile of the healthy FES had a lower likelihood of purchasing fast food both infrequently and frequently compared with respondents who never purchased; however, only infrequent purchasing remained significant when simultaneously modelled with the unhealthy FES (odds ratio (OR) 0.52; 95% confidence interval (CI) 0.32–0.83). Although a lower likelihood of frequent fast food purchasing was also associated with living in the highest tertile of the unhealthy FES, no association remained once the healthy FES was included in the models. In our binary models, respondents living in areas with a higher unhealthy FES than healthy FES were more likely to purchase fast food infrequently (OR 1.35; 95% CI 1.00–1.82); however, no association was found for frequent purchasing. Conclusion: Our study provides some evidence to suggest that healthier food environments may discourage fast food purchasing. PMID:23208414
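The healthy/unhealthy FES construction can be sketched as follows. The store categories and weights below are hypothetical stand-ins on the +10 to −10 scale the abstract describes; the paper's actual weighting scheme is only summarized above.

```python
# Hypothetical store weights on the +10 (healthy) to -10 (unhealthy) scale
# described in the abstract; the real weights are not reproduced here.
weights = {"supermarket": 8, "greengrocer": 10,
           "fast_food": -10, "convenience_store": -6}

def food_environment_scores(store_counts):
    """Split an area's stores into a healthy and an unhealthy FES:
    positive weights accumulate into the healthy score, negative
    weights (sign-flipped) into the unhealthy score."""
    healthy = sum(weights[s] * n for s, n in store_counts.items()
                  if weights[s] > 0)
    unhealthy = sum(-weights[s] * n for s, n in store_counts.items()
                    if weights[s] < 0)
    return healthy, unhealthy

# Example area: 2 supermarkets, 1 greengrocer, 5 fast-food outlets,
# 3 convenience stores.
h, u = food_environment_scores({"supermarket": 2, "greengrocer": 1,
                                "fast_food": 5, "convenience_store": 3})
```

The two scores can then enter a regression jointly, as in the paper's simultaneous modelling of healthy and unhealthy FES.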
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those without absorption. Simple shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangular patches to model the whole surface of the particle, hence computational resource needs increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation method (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita
2018-03-01
In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of tumors from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost the computational speed, a mutation-based particle swarm optimization is used to select the most optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and on clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for the three datasets. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice by an amount equal to 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively, in contrast to the fuzzy Tsallis approach.
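A minimal sketch of the plain Tsallis-entropy thresholding criterion underlying such methods (the paper's intuitionistic-fuzzy extension and mutation-based PSO search are omitted; an exhaustive search over a single threshold is used instead):

```python
import numpy as np

def tsallis_threshold(image, q=0.8):
    """Bilevel Tsallis-entropy threshold: split the histogram at t,
    normalise each class, and maximise the pseudo-additive sum of the
    two class entropies S_q = (1 - sum p^q) / (q - 1)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, 256):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)
        s = sa + sb + (1 - q) * sa * sb   # pseudo-additive combination
        if s > best_s:
            best_t, best_s = t, s
    return best_t

# Synthetic bimodal "image": dark tissue near 60, bright tumor near 190.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 1000),
                      rng.normal(190, 10, 1000)]).clip(0, 255)
t = tsallis_threshold(img)
```

For multilevel segmentation the same criterion is evaluated over threshold combinations, which is where a PSO-style search replaces the exhaustive loop.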
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
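FAC itself targets adaptive local patches on parallel machines; as a minimal serial illustration of the coarse-grid/fine-grid interplay it builds on, here is a classical two-grid correction scheme for a 1-D Poisson problem (a sketch of the underlying multilevel idea, not the FAC or AFAC algorithm itself):

```python
import numpy as np

def poisson_matrix(n, h):
    """1-D Laplacian for -u'' on n interior points, Dirichlet boundaries."""
    return (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, u, f, sweeps=3, w=2 / 3):
    """Weighted Jacobi smoothing."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + w * (f - A @ u) / D
    return u

def two_grid(n_fine=63, cycles=10):
    h = 1.0 / (n_fine + 1)
    x = np.linspace(h, 1 - h, n_fine)
    A = poisson_matrix(n_fine, h)
    f = np.pi**2 * np.sin(np.pi * x)          # -u'' = f has u = sin(pi x)
    u = np.zeros(n_fine)
    nc = (n_fine - 1) // 2                    # coarse grid (every other point)
    Ac = poisson_matrix(nc, 2 * h)
    for _ in range(cycles):
        u = jacobi(A, u, f)                   # pre-smooth
        r = f - A @ u
        rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])  # full weighting
        ec = np.linalg.solve(Ac, rc)          # exact coarse-grid solve
        e = np.zeros(n_fine)                  # linear interpolation back
        e[1::2] = ec
        e[0::2] = 0.5 * (np.concatenate([[0.0], ec])
                         + np.concatenate([ec, [0.0]]))
        u = jacobi(A, u + e, f)               # correct, then post-smooth
    return np.max(np.abs(u - np.sin(np.pi * x)))

err = two_grid()  # remaining error is dominated by discretization error
```

AFAC's contribution, per the abstract, is letting such levels be processed asynchronously rather than in the strict coarse-to-fine sequence shown here.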
Duran, Ana Clara; Diez Roux, Ana V; do Rosario DO Latorre, Maria; Jaime, Patricia C
2013-01-01
Differential access to healthy foods has been hypothesized to contribute to health disparities, but evidence from low and middle-income countries is still scarce. This study examines whether the access of healthy foods varies across store types and neighborhoods of different socioeconomic statuses (SES) in a large Brazilian city. A cross-sectional study was conducted in 2010–2011 across 52 census tracts. Healthy food access was measured by a comprehensive in-store data collection, summarized into two indexes developed for retail food stores (HFSI) and restaurants (HMRI). Descriptive analyses and multilevel models were used to examine associations of store type and neighborhood SES with healthy food access. Fast food restaurants were more likely to be located in low SES neighborhoods whereas supermarkets and full service restaurants were more likely to be found in higher SES neighborhoods. Multilevel analyses showed that both store type and neighborhood SES were independently associated with in-store food measures. We found differences in the availability of healthy food stores and restaurants in Sao Paulo city favoring middle and high SES neighborhoods. PMID:23747923
Pre- and postprocessing for reservoir simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, W.L.; Ingalls, L.J.; Prasad, S.J.
1991-05-01
This paper describes the functionality and underlying programming paradigms of Shell's simulator-related reservoir-engineering graphics system. This system includes the simulation postprocessing programs Reservoir Display System (RDS) and Fast Reservoir Engineering Displays (FRED), a hypertext-like on-line documentation system (DOC), and a simulator input preprocessor (SIMPLSIM). RDS creates displays of reservoir simulation results. These displays represent the areal or cross-section distribution of computed reservoir parameters, such as pressure, phase saturation, or temperature. Generation of these images at real-time animation rates is discussed. FRED facilitates the creation of plot files from reservoir simulation output. The use of dynamic memory allocation, asynchronous I/O, a table-driven screen manager, and mixed-language (FORTRAN and C) programming are detailed. DOC is used to create and access on-line documentation for the pre- and postprocessing programs and the reservoir simulators. DOC can be run by itself or can be accessed from within any other graphics or nongraphics application program. DOC includes a text editor, which is the basis for a reservoir simulation tutorial and greatly simplifies the preparation of simulator input. The use of sharable images, graphics, and the documentation file network are described. Finally, SIMPLSIM is a suite of programs that uses interactive graphics in the preparation of reservoir description data for input into reservoir simulators. The SIMPLSIM user-interface manager (UIM) and its graphic interface for reservoir description are discussed.
Stress and deformation modeling of multiple rotary combustion engine trochoid housings
NASA Technical Reports Server (NTRS)
Lychuk, W. M.; Bradley, S. A.; Vilmann, C. R.; Passerello, C. E.; Lee, C.-M.
1986-01-01
This paper documents the development of the capability to produce finite element models of alternate trochoid housing configurations. The effort needed to produce these models is greatly reduced by the use of a newly developed specialized finite element preprocessor which is described. The results of static stress comparisons conducted on a Mazda trochoid housing are presented. Planned future development of this modeling capability to operational situations is also presented.
Extensions and Adjuncts to the BRL-COMGEOM Program
1974-08-01
Keywords: MAGIC Code, GIFT Code, Computer Simulation, Target Description, Geometric Modeling Techniques, Vulnerability Analysis. Recoverable table-of-contents fragments: "... Arbitrary Quadric Surface"; "III. BRITL: A Geometry Preprocessor Program for Input to the GIFT System; A. Introduction; B. ..." The report concerns the BRL-GIFT code. The tasks completed under this contract and described in the report are: A. The addition to the list of available body types ...
The FORCE - A highly portable parallel programming language
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger
1989-01-01
This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.
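A toy illustration of the two-level macro idea (the macro names and expansions below are invented for illustration, not the actual FORCE macros): machine-dependent primitives are defined once per target machine, and portable high-level constructs expand into those primitives.

```python
import re

# Level 1: machine-dependent primitives for two hypothetical targets.
LOW_LEVEL = {
    "cray": {"LOCK": "SETSEM(%s)", "UNLOCK": "CLRSEM(%s)"},
    "sequent": {"LOCK": "s_lock(&%s)", "UNLOCK": "s_unlock(&%s)"},
}
# Level 2: machine-independent constructs written in terms of level 1.
HIGH_LEVEL = {
    "CRITICAL": "LOCK(%s)",
    "ENDCRITICAL": "UNLOCK(%s)",
}

def expand(line, machine):
    """Two-pass expansion: portable construct -> primitive -> native code."""
    for name, body in HIGH_LEVEL.items():
        line = re.sub(rf"\b{name}\((\w+)\)",
                      lambda m: body % m.group(1), line)
    for name, body in LOW_LEVEL[machine].items():
        line = re.sub(rf"\b{name}\((\w+)\)",
                      lambda m: body % m.group(1), line)
    return line

out = expand("CRITICAL(mutex1)", "sequent")
```

The same portable source line expands to different native code per target, which is the portability mechanism the abstract describes.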
The FORCE: A highly portable parallel programming language
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger
1989-01-01
Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.
Learning Asset Technology Integration Support Tool Design Document
2010-05-11
... language known as Hypertext Preprocessor (PHP) and by MySQL, a relational database management system that can also be used for content management. ... Requirements: The LATIST tool will be implemented utilizing a WordPress platform with MySQL as the database. Also, the LATIST system must effectively work ... MySQL. When designing the LATIST system there are several considerations which must be accounted for in the working prototype. These include: • DAU ...
Lamb, Karen E; Thornton, Lukar E; Olstad, Dana Lee; Cerin, Ester; Ball, Kylie
2017-10-16
The residential neighbourhood fast-food environment has the potential to lead to increased levels of obesity by providing opportunities for residents to consume energy-dense products. This longitudinal study aimed to examine whether change in body mass index (BMI) differed depending on major chain fast-food outlet availability among women residing in disadvantaged neighbourhoods. Eighty disadvantaged neighbourhoods in Victoria, Australia. Sample of 882 women aged 18-46 years at baseline (wave I: 2007/2008) who remained at the same residential location at all three waves (wave II: 2010/2011; wave III: 2012/2013) of the Resilience for Eating and Activity Despite Inequality study. BMI based on self-reported height and weight at each wave. There was no evidence of an interaction between time and the number of major chain fast-food outlets within 2 (p=0.88), 3 (p=0.66) or 5 km (p=0.24) in the multilevel models of BMI. Furthermore, there was no evidence of an interaction between time and change in availability at any distance and BMI. Change in BMI was not found to differ by residential major chain fast-food outlet availability among Victorian women residing in disadvantaged neighbourhoods. It may be that exposure to fast-food outlets around other locations regularly visited influences change in BMI. Future research needs to consider what environments are the key sources for accessing and consuming fast food and how these relate to BMI and obesity risk. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Using Voice Recognition Equipment to Run the Warfare Environmental Simulator (WES),
1981-03-01
... simulations and models are often used. War games are a type of simulation frequently used by the military to evaluate C3 effectiveness. Through the use of a ... to 162 words or short phrases (Appendix B). B. Equipment Used; 1. Hardware Description [13]: For the experiment a Threshold Model T600 discrete ... Model T600 terminal used in this experiment consists of an analog speech preprocessor, microcomputer, CRT/keyboard unit, magnetic tape cartridge unit ...
Power Sources Focus Group - Evaluation of Plasma Gasification for Waste-to-Energy Conversion
2012-09-21
... including paper, wood, plastic, food and agricultural waste. The system uses a shredder, dryer, and pelletizing preprocessor to fuel an in-house ... limited information available, this paper does not attempt to determine the best way to use plasma in a gasifier. Instead, this paper makes general ... Gasification: Plasma gasification for the purposes of this paper includes any WTE system using plasma as part of the generation of syngas and/or cleanup ...
SMART: Security Measurements and Assuring Reliability Through Metrics Technology
2009-11-01
... analyzing C/C++ programs. C++ is a terrible language for tool vendors to handle. There are only a handful of people in the world capable of writing an ... input sanitizing, etc. These features aid in porting httpd to new platforms. For systems written in C/C++, it appears that the use of preprocessor ...
State-of-the-art software for window energy-efficiency rating and labeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arasteh, D.; Finlayson, E.; Huang, J.
1998-07-01
Measuring the thermal performance of windows in typical residential buildings is an expensive proposition. Not only is laboratory testing expensive, but each window manufacturer typically offers hundreds of individual products, each of which has different thermal performance properties. With over a thousand window manufacturers nationally, a testing-based rating system would be prohibitively expensive to the industry and to consumers. Beginning in the early 1990s, simulation software began to be used as part of a national program for rating window U-values. The rating program has since been expanded to include Solar Heat Gain Coefficients and is now being extended to annual energy performance. This paper describes four software packages available to the public from Lawrence Berkeley National Laboratory (LBNL). These software packages are used to evaluate window thermal performance: RESFEN (for evaluating annual energy costs), WINDOW (for calculating a product's thermal performance properties), THERM (a preprocessor for WINDOW that determines two-dimensional heat-transfer effects), and Optics (a preprocessor for WINDOW's glass database). Software not only offers a less expensive means than testing to evaluate window performance, it can also be used during the design process to help manufacturers produce windows that will meet target specifications. In addition, software can show small improvements in window performance that might not be detected in actual testing because of large uncertainties in test procedures.
A User's Manual for ROTTILT Solver: Tiltrotor Fountain Flow Field Prediction
NASA Technical Reports Server (NTRS)
Tadghighi, Hormoz; Rajagopalan, R. Ganesh
1999-01-01
A CFD solver has been developed to provide the time-averaged details of the fountain flow typical for tiltrotor aircraft in hover. This Navier-Stokes solver, designated ROTTILT, assumes the 3-D fountain flowfield to be steady and incompressible. The theoretical background is described in this manual. In order to enable the rotor trim solution in the presence of tiltrotor aircraft components such as wing, nacelle, and fuselage, the solver is coupled with a set of trim routines which are highly efficient in CPU and suitable for CFD analysis. The Cartesian grid technique utilized provides the user with a unique capability for insertion or elimination of any components of the bodies considered for a given tiltrotor aircraft configuration. The flowfield associated with either a semi-span or full-span configuration can be computed through user options in the ROTTILT input file. Full details associated with the numerical solution implemented in ROTTILT, along with its assumptions, are presented. A description of input surface mesh topology is provided in the appendices along with a listing of all preprocessor programs. Input variable definitions and default values are provided for the V22 aircraft. Limited predicted results using the coupled ROTTILT/WOPWOP program for the V22 in hover are presented and compared with measurements. To visualize the V22 aircraft and the predictions, a preprocessor graphics program, GNU-PLOT3D, was used. This program is described and example graphic results are presented.
SeqTrim: a high-throughput pipeline for pre-processing any type of sequence read
2010-01-01
Background High-throughput automated sequencing has enabled an exponential growth rate of sequencing data. This requires increasing sequence quality and reliability in order to avoid database contamination with artefactual sequences. The arrival of pyrosequencing exacerbates this problem and necessitates customisable pre-processing algorithms. Results SeqTrim has been implemented both as a Web and as a standalone command line application. Already-published and newly-designed algorithms have been included to identify sequence inserts, to remove low quality, vector, adaptor, low complexity and contaminant sequences, and to detect chimeric reads. The availability of several input and output formats allows its inclusion in sequence processing workflows. Due to its specific algorithms, SeqTrim outperforms other pre-processors implemented as Web services or standalone applications. It performs equally well with sequences from EST libraries, SSH libraries, genomic DNA libraries and pyrosequencing reads and does not lead to over-trimming. Conclusions SeqTrim is an efficient pipeline designed for pre-processing of any type of sequence read, including next-generation sequencing. It is easily configurable and provides a friendly interface that allows users to know what happened with sequences at every pre-processing stage, and to verify pre-processing of an individual sequence if desired. The recommended pipeline reveals more information about each sequence than previously described pre-processors and can discard more sequencing or experimental artefacts. PMID:20089148
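One representative pre-processing step in such pipelines, end-trimming by base quality, can be sketched as follows (a generic illustration of the idea, not SeqTrim's actual algorithm; the read and quality values are made up):

```python
def quality_trim(seq, phred, cutoff=20):
    """Trim low-quality bases from both read ends: advance from each end
    while the Phred quality stays below the cutoff."""
    start, end = 0, len(seq)
    while start < end and phred[start] < cutoff:
        start += 1
    while end > start and phred[end - 1] < cutoff:
        end -= 1
    return seq[start:end]

# Hypothetical read: uncalled/low-quality bases at both ends.
read = "NNACGTACGTAA"
quals = [2, 3, 30, 32, 35, 31, 30, 33, 34, 30, 8, 4]
trimmed = quality_trim(read, quals)
```

Real pipelines combine many such passes (vector, adaptor, low-complexity and contaminant removal) and report what happened to each read at every stage, as the abstract notes.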
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
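The two-stage structure can be sketched as a delta-predicting preprocessor feeding a Rice/Golomb entropy coder (a generic illustration of the predictive-preprocessor-plus-adaptive-coder idea; the flight coders' exact mappings and adaptation rules are not reproduced here):

```python
def preprocess(samples):
    """Predictive preprocessing: delta-predict each sample from the
    previous one, then zigzag-map signed residuals to non-negative
    integers (the 'standard form' a Rice coder expects)."""
    out, prev = [], 0
    for s in samples:
        d = s - prev
        out.append(2 * d if d >= 0 else -2 * d - 1)
        prev = s
    return out

def rice_encode(n, k):
    """Rice code with parameter k: unary quotient, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")
    return (q << k) | int(bits[q + 1:q + 1 + k] or "0", 2)

residuals = preprocess([100, 102, 101, 105])
code = [rice_encode(r, 2) for r in residuals]
```

Slowly varying data yields small residuals and hence short codewords; an adaptive coder additionally picks k per block to track local statistics.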
Shieh, Bernard; Sabra, Karim G; Degertekin, F Levent
2016-11-01
A boundary element model provides great flexibility for the simulation of membrane-type micromachined ultrasonic transducers (MUTs) in terms of membrane shape, actuating mechanism, and array layout. Acoustic crosstalk is accounted for through a mutual impedance matrix that captures the primary crosstalk mechanism of dispersive guided modes generated at the fluid-solid interface. However, finding the solution to the fully populated boundary element matrix equation using standard techniques requires computation time and memory usage that scale by the cube and by the square of the number of nodes, respectively, limiting simulation to a small number of membranes. We implement a solver with improved speed and efficiency through the application of a multilevel fast multipole algorithm (FMA). By approximating the fields of collections of nodes using multipole expansions of the free-space Green's function, an FMA solver can enable the simulation of hundreds of thousands of nodes while incurring an approximation error that is controllable. Convergence is drastically improved using a problem-specific block-diagonal preconditioner. We demonstrate the solver's capabilities by simulating a 32-element 7-MHz 1-D capacitive MUT (CMUT) phased array with 2880 membranes. The array is simulated using 233280 nodes for a very wide frequency band up to 50 MHz. For a simulation with 15210 nodes, the FMA solver performed ten times faster and used 32 times less memory than a standard solver based on LU decomposition. We investigate the effects of mesh density and phasing on the predicted array response and find that it is necessary to use about seven nodes over the width of the membrane to observe convergence of the solution, even below the first membrane resonance frequency, due to the influence of higher order membrane modes.
NASA Astrophysics Data System (ADS)
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains since it concerns many research fields. In this paper, we report our new progress in the numerical computation of the scattering diagrams. Our algorithm permits the calculation of scattering by a particle of size as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as the Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal particles and Chebyshev particles is taken as an example. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on the distributed memory computer platform using the hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effect of aspect ratios, as well as the half-cone angle of the incident zero-order Bessel beam and the off-axis distance, on scattered intensity is studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also presented to show the capability of the method for computing scattering by arbitrarily shaped particles.
Changes in the food environment over time: examining 40 years of data in the Framingham Heart Study.
James, Peter; Seward, Michael W; James O'Malley, A; Subramanian, S V; Block, Jason P
2017-06-24
Research has explored associations between diet, body weight, and the food environment; however, few studies have examined historical trends in food environments. In the Framingham Heart Study Offspring (N = 3321) and Omni (N = 447) cohorts, we created food environment metrics in four Massachusetts towns utilizing geocoded residential, workplace, and food establishment addresses from 1971 to 2008. We created multilevel models adjusted for age, sex, education, and census tract poverty to examine trends in home, workplace, and commuting food environments. Proximity to and density of supermarkets, fast-food, full service restaurants, convenience stores, and bakeries increased over time for residential, workplace, and commuting environments; exposure to grocery stores decreased. The greatest increase in access was for supermarkets, with residential distance to the closest supermarket 1406 m closer (95% CI 1303 m, 1508 m) by 2005-2008 than in 1971-1975. Although poorer census tracts had higher access to fast-food restaurants consistently across follow-up, this disparity dissipated over time, due to larger increases in proximity to fast-food in wealthier neighborhoods. Access to most food establishment types increased over time, with similar trends across home, workplace, and commuter environments.
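The proximity and density exposure metrics used in such studies reduce to simple geometric computations. A sketch on made-up planar coordinates in metres (real analyses use geocoded addresses and often network rather than straight-line distances):

```python
import numpy as np

# Hypothetical coordinates (metres): two homes, three food outlets.
homes = np.array([[0.0, 0.0], [1000.0, 500.0]])
outlets = np.array([[300.0, 400.0], [2000.0, 0.0], [900.0, 600.0]])

# Pairwise Euclidean distances, shape (n_homes, n_outlets).
d = np.linalg.norm(homes[:, None, :] - outlets[None, :, :], axis=2)

proximity = d.min(axis=1)            # distance to the nearest outlet
density = (d <= 800.0).sum(axis=1)   # outlet count within an 800 m buffer
```

Tracking these metrics at successive time points for home, workplace, and commute locations gives the kind of longitudinal exposure trends the study reports.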
A Comparison Study and Software Implementation of NORDA Ocean Models.
1980-10-08
A Comparison Study and Software Implementation of NORDA Ocean Models (report cover fragments: JAYCOR, 300 Unicorn Park Drive; contract N00-79-741; standard report-documentation-page fields). ... before another execution of the energetics program, move them back to disk. Note that the outputs of the preprocessor reside on disk; they should not be ...
Yang, L. H.; Brooks III, E. D.; Belak, J.
1992-01-01
A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells and the storage for that domain is private to the processor. Within this scheme, the interdomain (i.e., interprocessor) communication is minimized.
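The linked-cell idea, binning atoms into geometric cells so that neighbor searches scan only adjacent cells rather than all pairs, can be sketched serially as follows (the domain-decomposition and PCP-specific parts are omitted; non-periodic boundaries are assumed for simplicity):

```python
import numpy as np
from collections import defaultdict

def linked_cell_pairs(pos, box, cutoff):
    """All atom pairs closer than cutoff, found via a linked-cell binning:
    cell size >= cutoff guarantees neighbors lie in adjacent cells."""
    ncell = max(1, int(box // cutoff))
    size = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // size).astype(int).clip(0, ncell - 1))].append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for i in members:
                        for j in cells.get((cx + dx, cy + dy, cz + dz), []):
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                                pairs.add((i, j))
    return pairs

rng = np.random.default_rng(2)
pos = rng.uniform(0, 10.0, size=(200, 3))
pairs = linked_cell_pairs(pos, 10.0, 1.5)

# Brute-force check over all pairs, for comparison.
brute = {(i, j) for i in range(200) for j in range(i + 1, 200)
         if np.linalg.norm(pos[i] - pos[j]) < 1.5}
```

At fixed density this scales roughly linearly in atom count, versus quadratically for the brute-force scan; in the paper's scheme each processor owns a private block of such cells.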
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented. Source code is provided in a file containing the routines which allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
NASA Technical Reports Server (NTRS)
Chen, W. T.
1972-01-01
Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (PCG), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low-pass filtering of the resultant envelope.
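The envelope-extraction chain described, full-wave rectification followed by low-pass filtering, can be sketched with a moving-average filter standing in for the low-pass stage (the filter choice, window length, and test signal are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def heart_sound_envelope(pcg, fs, window_s=0.02):
    """Full-wave rectify the PCG signal, then smooth with a
    moving-average low-pass filter to extract the envelope."""
    rectified = np.abs(pcg)                    # full-wave rectification
    n = max(1, int(window_s * fs))             # ~20 ms smoothing window
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

# Synthetic heart-sound burst: a 150 Hz tone under a Gaussian window
# centred at t = 0.3 s, sampled at 4 kHz.
fs = 4000
t = np.arange(0, 1.0, 1 / fs)
burst = np.exp(-((t - 0.3) ** 2) / 0.001)
pcg = burst * np.sin(2 * np.pi * 150 * t)
env = heart_sound_envelope(pcg, fs)
```

The envelope varies on the timescale of the burst rather than the 150 Hz carrier, which is the bandwidth reduction the abstract describes.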
Hamano, Tsuyoshi; Li, Xinjun; Sundquist, Jan; Sundquist, Kristina
2017-01-01
The aim of this 6-year follow-up study was to examine whether neighbourhood accessibility to fast-food outlets was associated with diagnosed childhood obesity, after adjustment for neighbourhood- and individual-level socio-demographic factors. This 6-year follow-up study comprised 484,677 boys and 459,810 girls aged 0-14 years in Sweden. The follow-up period ran from January 1, 2005, until hospitalisation/out-patient treatment for obesity, death, emigration or the end of the study period on December 31, 2010. Multilevel logistic regression models (individual-level factors at the first level and neighbourhood-level factors at the second level) were used to calculate odds ratios (ORs) with 95% confidence intervals (95% CIs). We identified 6,968 obesity cases (3,878 boys and 3,090 girls) during the follow-up period. Higher odds of childhood obesity were observed for those living in neighbourhoods with accessibility to fast-food outlets (OR = 1.14, 95% CI = 1.07-1.22), and this remained significant after adjustments (OR = 1.06, 95% CI = 1.00-1.13). This prospective nationwide study showed that neighbourhood accessibility to fast-food outlets was independently associated with increased odds of diagnosed childhood obesity. This finding implies that residential environments should be considered when developing health promotion programmes. © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.
Duran, Ana Clara; Diez Roux, Ana V; Latorre, Maria do Rosario D O; Jaime, Patricia Constante
2013-09-01
Differential access to healthy foods has been hypothesized to contribute to health disparities, but evidence from low and middle-income countries is still scarce. This study examines whether the access of healthy foods varies across store types and neighborhoods of different socioeconomic statuses (SES) in a large Brazilian city. A cross-sectional study was conducted in 2010-2011 across 52 census tracts. Healthy food access was measured by a comprehensive in-store data collection, summarized into two indexes developed for retail food stores (HFSI) and restaurants (HMRI). Descriptive analyses and multilevel models were used to examine associations of store type and neighborhood SES with healthy food access. Fast food restaurants were more likely to be located in low SES neighborhoods whereas supermarkets and full service restaurants were more likely to be found in higher SES neighborhoods. Multilevel analyses showed that both store type and neighborhood SES were independently associated with in-store food measures. We found differences in the availability of healthy food stores and restaurants in Sao Paulo city favoring middle and high SES neighborhoods. © 2013 Elsevier Ltd. All rights reserved.
Forbidden coherent transfer observed between two realizations of quasiharmonic spin systems
NASA Astrophysics Data System (ADS)
Bertaina, S.; Yue, G.; Dutoit, C.-E.; Chiorescu, I.
2017-07-01
The multilevel system
Generalized speed and cost rate in transitionless quantum driving
NASA Astrophysics Data System (ADS)
Xu, Zhen-Yu; You, Wen-Long; Dong, Yu-Li; Zhang, Chengjie; Yang, W. L.
2018-03-01
Transitionless quantum driving, also known as counterdiabatic driving, is a unique shortcut-to-adiabaticity technique, enabling a fast-forward evolution to the same target quantum states as in the adiabatic case. However, as nothing is free, the fast evolution is obtained at the cost of stronger driving fields. Here, given that the system is initially prepared in an equilibrium state, we construct relations between the dynamical evolution speed and the cost rate of transitionless quantum driving in two scenarios: one that preserves the transitionless evolution for a single energy eigenstate (individual driving), and the other that maintains all energy eigenstates evolving transitionlessly (collective driving). Remarkably, we find that individual driving may cost as much as collective driving, in contrast to the common belief that individual driving is more economical than collective driving in multilevel systems. We then present a potentially practical proposal to demonstrate the above phenomena in a three-level Landau-Zener model using the electronic spin system of a single nitrogen-vacancy center in diamond.
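For the simplest two-level Landau-Zener sweep, H(t) = (Δ/2)σx + (vt/2)σz, the counterdiabatic correction is a σy drive whose amplitude is fixed by the rate of change of the instantaneous mixing angle: Ω(t) = vΔ / [2(Δ² + v²t²)], with ħ = 1. A numerical sketch of that textbook two-level case (not the three-level NV scheme of the abstract):

```python
def cd_amplitude(t, v, delta):
    """Counterdiabatic drive amplitude Omega(t) for the two-level
    Landau-Zener model H(t) = (delta/2) sx + (v t / 2) sz, hbar = 1.
    Derived from the rate of change of the mixing angle
    theta(t) = arctan(delta / (v t))."""
    return v * delta / (2.0 * (delta ** 2 + (v * t) ** 2))

# Sample the drive across the sweep; the correction is strongest (and the
# "cost" highest) at the avoided crossing t = 0, where Omega = v / (2 delta).
ts = [i * 0.01 for i in range(-1000, 1001)]
omega = [cd_amplitude(t, 1.0, 0.5) for t in ts]
peak = max(omega)
```

The Lorentzian profile makes the cost-speed trade-off explicit: a faster sweep (larger v) needs a proportionally stronger peak field.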
Tests with beam setup of the TileCal phase-II upgrade electronics
NASA Astrophysics Data System (ADS)
Hlaluku, Dingane Reward
2017-09-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC, which will have an average luminosity 5-7 times larger than the nominal Run-2 value. The ATLAS Tile calorimeter plans to introduce a new readout architecture by completely replacing the back-end and front-end electronics for the High Luminosity LHC. The photomultiplier signals will be fully digitized and transferred for every bunch crossing to the off-detector Tile PreProcessor. The Tile PreProcessor will further provide preprocessed digital data to the first level of trigger with improved spatial granularity and energy resolution, in contrast to the current analog trigger signals. A single super-drawer module commissioned with the phase-II upgrade electronics is to be inserted into the real detector to evaluate and qualify the new readout and trigger concepts in the overall ATLAS data acquisition system. This new super-drawer, the so-called hybrid Demonstrator, must provide analog trigger signals for backward compatibility with the current system. The Demonstrator drawer has been inserted into a Tile calorimeter module prototype to evaluate its performance in the lab. In parallel, one more module has been instrumented with two other front-end electronics options based on custom ASICs (QIE and FATALIC) which are under evaluation. These two modules, together with three other modules composed of the current system electronics, were exposed to different particles and energies in three test-beam campaigns during 2015 and 2016.
Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.
2013-01-01
Objective: To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design: Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants: Seven- through eleven-year-old, overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions: Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after-school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure: Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions: The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families.
PMID:24028942
Robinson, Thomas N; Matheson, Donna; Desai, Manisha; Wilson, Darrell M; Weintraub, Dana L; Haskell, William L; McClain, Arianna; McClure, Samuel; Banda, Jorge A; Sanders, Lee M; Haydel, K Farish; Killen, Joel D
2013-11-01
To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Seven through eleven year old, overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families. © 2013 Elsevier Inc. All rights reserved.
Attard, S M; Herring, A H; Mayer-Davis, E J; Popkin, B M; Meigs, J B; Gordon-Larsen, P
2012-12-01
The purpose of this study was to examine the association between urbanisation-related factors and diabetes prevalence in China. Anthropometry, fasting blood glucose (FBG) and community-level data were collected for 7,741 adults (18-90 years) across 217 communities and nine provinces in the 2009 China Health and Nutrition Survey to examine diabetes (FBG ≥7.0 mmol/l or doctor diagnosis). Sex-stratified multilevel models, clustered at the community and province levels and controlling for individual-level age and household income, were used to examine the association between diabetes and: (1) a multicomponent urbanisation measure reflecting overall modernisation and (2) 12 separate components of urbanisation (e.g., population density, employment, markets, infrastructure and social factors). Prevalent diabetes was higher in more-urbanised (men 12%; women 9%) vs less-urbanised (men 6%; women 5%) areas. In sex-stratified multilevel models adjusting for residential community and province, age and household income, there was a twofold higher diabetes prevalence in urban vs rural areas (men OR 2.02, 95% CI 1.47, 2.78; women OR 1.94, 95% CI 1.35, 2.79). All urbanisation components were positively associated with diabetes, with variation across components (e.g. men, economic and income diversity, OR 1.42, 95% CI 1.20, 1.66; women, transportation infrastructure, OR 1.18, 95% CI 1.06, 1.32). Community-level variation in diabetes was comparatively greater for women (intraclass correlation [ICC] 0.03-0.05) vs men (ICC ≤0.01); province-level variation was greater for men (men 0.03-0.04; women 0.02). Diabetes prevention and treatment efforts are needed particularly in urbanised areas of China. Community economic factors, modern markets, communications and transportation infrastructure might present opportunities for such efforts.
CalSimHydro Tool - A Web-based interactive tool for the CalSim 3.0 Hydrology Preprocessor
NASA Astrophysics Data System (ADS)
Li, P.; Stough, T.; Vu, Q.; Granger, S. L.; Jones, D. J.; Ferreira, I.; Chen, Z.
2011-12-01
CalSimHydro, the CalSim 3.0 Hydrology Preprocessor, is an application designed to automate the various steps in the computation of hydrologic inputs for CalSim 3.0, a water resources planning model developed jointly by the California State Department of Water Resources and the United States Bureau of Reclamation, Mid-Pacific Region. CalSimHydro consists of a five-step FORTRAN-based program that runs the individual models in succession, passing information from one model to the next and aggregating data as required by each model. The final product of CalSimHydro is an updated CalSim 3.0 state variable (SV) DSS input file. CalSimHydro consists of (1) a Rainfall-Runoff Model to compute monthly infiltration, (2) a Soil Moisture and Demand Calculator (IDC) that estimates surface runoff, deep percolation, and water demands for natural vegetation cover and various crops other than rice, (3) a Rice Water Use Model to compute the water demands, deep percolation, irrigation return flow, and runoff from precipitation for the rice fields, (4) a Refuge Water Use Model that simulates the ponding operations for managed wetlands, and (5) a Data Aggregation and Transfer Module to aggregate the outputs from the above modules and transfer them to the CalSim SV input file. In this presentation, we describe a web-based user interface for CalSimHydro using the Google Earth Plug-In.
The CalSimHydro tool allows users to:
- interact with geo-referenced layers of the Water Budget Areas (WBA) and Demand Units (DU) displayed over the Sacramento Valley,
- view the input parameters of the hydrology preprocessor for a selected WBA or DU in a time series plot or a tabular form,
- edit the values of the input parameters in the table or by downloading a spreadsheet of the selected parameter in a selected time range,
- run the CalSimHydro modules in the backend server and notify the user when the job is done,
- visualize the model output and compare it with a base run result,
- download the output SV file to be used to run CalSim 3.0.
The CalSimHydro tool streamlines the complicated steps to configure and run the hydrology preprocessor by providing a user-friendly visual interface and back-end services to validate user inputs and manage the model execution. It is a powerful addition to the new CalSim 3.0 system.
NASA Astrophysics Data System (ADS)
Sun, Yuan; Bhattacherjee, Anol
2011-11-01
Information technology (IT) usage within organisations is a multi-level phenomenon that is influenced by individual-level and organisational-level variables. Yet current theories, such as the unified theory of acceptance and use of technology, describe IT usage as solely an individual-level phenomenon. This article integrates salient organisational-level variables such as user training, top management support and technical support with an individual-level model to postulate a multi-level model of organisational IT usage. The multi-level model was then empirically validated using multi-level data collected from 128 end users and 26 managers in 26 firms in China regarding their use of enterprise resource planning systems, analysed using the multi-level structural equation modelling (MSEM) technique. We demonstrate the utility of MSEM analysis of multi-level data relative to the more common structural equation modelling analysis of single-level data, and show how single-level data can be aggregated to approximate multi-level analysis when multi-level data collection is not possible. We hope that this article will motivate future scholars to employ multi-level data and multi-level analysis for understanding organisational phenomena that are truly multi-level in nature.
He, De Fu; Ma, Dong Liang; Tang, Yong Cheng; Engel, Jerome; Bragin, Anatol; Tang, Feng Ru
2010-01-01
The goal of this study was to examine morpho-physiological changes in the dorsal subiculum network in the mouse model of temporal lobe epilepsy using extracellular recording, juxtacellular and immunofluorescence double labeling, and anterograde tracing methods. A significant loss of total dorsal subicular neurons, particularly calbindin- and parvalbumin (PV)-immunopositive interneurons, was found at 2 months after pilocarpine-induced status epilepticus (SE). However, sprouting of axons from the lateral entorhinal cortex (LEnt) was observed to contact surviving subicular neurons. These neurons had two predominant discharge patterns: bursting and fast irregular discharges. The bursting neurons were mainly pyramidal cells, and their dendritic spine density and bursting discharge rates were increased significantly in SE mice compared to the control group. Fast irregular discharge neurons were PV-immunopositive interneurons and had fewer dendritic spines in SE mice than in control mice. When the LEnt was stimulated, bursting and fast irregular discharge neurons had much shorter latency and stronger excitatory responses in SE mice compared to the control group. Our results illustrate that morpho-physiological changes in the dorsal subiculum could be part of a multilevel pathological network that occurs simultaneously in many brain areas to contribute to the generation of epileptiform activity. PMID:19298597
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
NASA Technical Reports Server (NTRS)
Smith, W. W.
1973-01-01
A Langley Research Center version of NASTRAN Level 15.1.0 designed to provide the analyst with an added tool for debugging massive NASTRAN input data is described. The program checks all NASTRAN input data cards and displays on a CRT the graphic representation of the undeformed structure. In addition, the program permits the display and alteration of input data and allows reexecution without physically resubmitting the job. Core requirements on the CDC 6000 computer are approximately 77,000 octal words of central memory.
NASA Technical Reports Server (NTRS)
Hua, Chongyu; Volakis, John L.
1990-01-01
AUTOMESH-2D is a computer program specifically designed as a preprocessor for the scattering analysis of two-dimensional bodies by the finite element method. This program was developed due to a need to reduce the effort required to define and check the geometry data, element topology, and material properties. There are six modules in the program: (1) Parameter Specification; (2) Data Input; (3) Node Generation; (4) Element Generation; (5) Mesh Smoothing; and (6) Data File Generation.
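The Node Generation and Element Generation steps are the core of any finite-element preprocessor. A minimal illustrative sketch of the idea for a structured quadrilateral grid on the unit square (a generic example, not AUTOMESH-2D's actual algorithm):

```python
def grid_mesh(nx, ny):
    """Generate node coordinates and quad-element connectivity for an
    nx-by-ny structured grid on the unit square.
    Nodes are numbered row by row; each element lists its four corner
    node indices in counter-clockwise order."""
    nodes = [(i / nx, j / ny) for j in range(ny + 1) for i in range(nx + 1)]
    elements = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i  # lower-left corner node index
            elements.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
    return nodes, elements

nodes, elements = grid_mesh(2, 2)  # 3x3 = 9 nodes, 2x2 = 4 quad elements
```

Real preprocessors add the remaining modules on top of this kernel: material/parameter tagging per element, smoothing of interior node positions, and export to the solver's file format.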
The establishment and use of the point source catalog database of the 2MASS near infrared survey
NASA Astrophysics Data System (ADS)
Gao, Y. F.; Shan, H. G.; Cheng, D.
2003-02-01
The 2MASS near infrared survey project is introduced briefly. The 2MASS point source catalog (2MASS PSC) database and the network query system are established using the PHP Hypertext Preprocessor and the MySQL database server. Using the system, one can not only query information on sources listed in the catalog, but also draw related plots. Moreover, after the 2MASS data are examined, some research fields that can benefit from this database are suggested.
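The query system described pairs PHP with MySQL; the same box-search pattern over a point-source table can be sketched with Python's built-in sqlite3. The table layout and the sources below are hypothetical, for illustration only, and do not reflect the actual 2MASS PSC schema:

```python
import sqlite3

# In-memory stand-in for the catalog database (hypothetical schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE psc (ra REAL, dec REAL, j_mag REAL)")
conn.executemany("INSERT INTO psc VALUES (?, ?, ?)", [
    (10.68, 41.27, 9.5),    # fictitious sources for illustration
    (10.70, 41.30, 12.1),
    (150.00, 2.21, 14.3),
])

# Box search: all sources within half-width r (degrees) of (ra0, dec0)
ra0, dec0, r = 10.69, 41.28, 0.1
rows = conn.execute(
    "SELECT ra, dec, j_mag FROM psc "
    "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
    (ra0 - r, ra0 + r, dec0 - r, dec0 + r)).fetchall()
```

A production catalog server would add an index on (ra, dec) and handle RA wrap-around at 0/360 degrees, which the naive BETWEEN query ignores.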
1989-04-13
[Fragmentary search-result excerpts from the report: Sec. 5.3, "The Solution, BSM2, BSM3" (p. 21); Sec. 6, "Description of test example"; the preprocessor subroutine BSM1 and the solution subroutines BSM2 and BSM3, whose sections may be skipped; and the relation P3 = C5 R31 (Eq. 5.18), linking the solution error C5 on the second row j = IE(2) of the last block to interior row j = N-1.]
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
Advanced graphical user interface for multi-physics simulations using AMST
NASA Astrophysics Data System (ADS)
Hoffmann, Florian; Vogel, Frank
2017-07-01
Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.
NASA Astrophysics Data System (ADS)
Günther, Uwe; Kuzhel, Sergii
2010-10-01
Gauged PT quantum mechanics (PTQM) and corresponding Krein space setups are studied. For models with constant non-Abelian gauge potentials and extended parity inversions, compact and noncompact Lie group components are analyzed via Cartan decompositions. A Lie-triple structure is found, and an interpretation as a PT-symmetrically generalized Jaynes-Cummings model is possible, with close relation to recently studied cavity QED setups with transmon states in multilevel artificial atoms. For models with Abelian gauge potentials, a hidden Clifford algebra structure is found and used to obtain the fundamental symmetry of Krein space-related J-self-adjoint extensions for PTQM setups with ultra-localized potentials.
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are an efficient approach to source localisation. However, very large training sets are needed to precisely estimate the two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
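The LVQ1 update rule at the heart of such networks is simple: the best-matching prototype moves toward a training sample of its own class and away from a sample of another class. A generic sketch with toy 2-D data (not the UCA angle datasets used in the paper):

```python
import math

def nearest(P, x):
    """Index of the prototype closest to sample x (Euclidean distance)."""
    return min(range(len(P)), key=lambda i: math.dist(P[i], x))

def train_lvq1(X, y, protos, proto_labels, lr=0.1, epochs=20):
    """LVQ1: attract the winning prototype when its class matches the
    sample's label, repel it otherwise."""
    P = [list(p) for p in protos]
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = nearest(P, x)
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] = [pi + sign * lr * (xi - pi) for pi, xi in zip(P[i], x)]
    return P

def predict(P, proto_labels, X):
    return [proto_labels[nearest(P, x)] for x in X]

# Toy data: two well-separated classes, one prototype each
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)]
y = [0, 0, 1, 1]
P = train_lvq1(X, y, [(0.3, 0.3), (0.8, 0.8)], [0, 1])
```

In a DOA setting the samples would be array response features and the labels quantised elevation or azimuth angles, with one network per angle as the abstract describes.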
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated at edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution addresses the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.
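The core idea of line-scanning propagation can be illustrated in 1-D: each non-edge pixel takes an inverse-distance-weighted blend of the nearest edge blur estimates to its left and right, computed in two linear passes. This is a minimal sketch of the general pattern, not the authors' exact scheme:

```python
def propagate_line(blur, known):
    """Propagate blur values measured at edge pixels (known[i] == True)
    to the remaining pixels of one scan line via inverse-distance
    weighting of the nearest known neighbours. Runs in O(n)."""
    n = len(blur)
    left, right = [None] * n, [None] * n
    last = None
    for i in range(n):                # pass 1: nearest known index to the left
        if known[i]:
            last = i
        left[i] = last
    last = None
    for i in range(n - 1, -1, -1):    # pass 2: nearest known index to the right
        if known[i]:
            last = i
        right[i] = last
    out = []
    for i in range(n):
        if known[i]:
            out.append(float(blur[i]))
        elif left[i] is None:         # no edge to the left: copy right value
            out.append(float(blur[right[i]]))
        elif right[i] is None:        # no edge to the right: copy left value
            out.append(float(blur[left[i]]))
        else:
            wl = 1.0 / (i - left[i])
            wr = 1.0 / (right[i] - i)
            out.append((wl * blur[left[i]] + wr * blur[right[i]]) / (wl + wr))
    return out

line = propagate_line([2.0, 0.0, 0.0, 4.0], [True, False, False, True])
```

Unlike solving a global sparse linear system, each scan line is processed independently with constant memory, which is what makes the approach attractive for low-latency hardware pipelines.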
Synergistic High Charge-Storage Capacity for Multi-level Flexible Organic Flash Memory
NASA Astrophysics Data System (ADS)
Kang, Minji; Khim, Dongyoon; Park, Won-Tae; Kim, Jihong; Kim, Juhwan; Noh, Yong-Young; Baeg, Kang-Jun; Kim, Dong-Yu
2015-07-01
Electret and organic floating-gate memories are next-generation flash storage media for printed organic complementary circuits. While such flash memories can be easily fabricated using solution processes on flexible plastic substrates, their promise for on-chip memory organization is limited by unreliable bit operation and high write loads. We report here that a new architecture can improve the overall performance of organic memory and, in particular, meet the high storage capacity needed for multi-level operation. Our concept relies on the synergistic effect, in electrical characterization, of combining a polymer electret (poly(2-vinyl naphthalene) (PVN)) with metal (copper) nanoparticles. It is distinguished from most organic nano-floating-gate memories by using the electret dielectric instead of a general tunneling dielectric for additional charge storage. The uniform stacking of organic layers, including various dielectrics and poly(3-hexylthiophene) (P3HT) as an organic semiconductor, followed by thin-film coating using orthogonal solvents, greatly improves device precision despite easy and fast manufacture. Poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] as a high-k blocking dielectric also allows reduction of the programming voltage. The reported synergistic organic memory devices exhibit low power consumption, high cycle endurance, high thermal stability and suitable retention time, compared to electret and organic nano-floating-gate memory devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
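The multilevel variance reduction rests on the telescoping identity E[P_L] = Σ_l E[P_l − P_{l−1}]: cheap low-fidelity samples carry most of the Monte Carlo work, while only a few samples of the fine-minus-coarse corrections are needed. A generic sketch of the estimator with a deterministic toy level function (unrelated to the HDG/reduced-basis solvers of the abstract):

```python
def mlmc_estimate(sampler, n_samples):
    """Multilevel Monte Carlo estimator. sampler(l) returns the pair
    (P_l, P_{l-1}) evaluated on the SAME random input, with P_{-1} = 0;
    the per-level means of (P_l - P_{l-1}) telescope to E[P_L]."""
    total = 0.0
    for level, n in enumerate(n_samples):
        s = 0.0
        for _ in range(n):
            fine, coarse = sampler(level)
            s += fine - coarse
        total += s / n
    return total

# Toy "solver": P_l = 1 - 2**-l converges to 1 as the level l grows.
# Deterministic here, so every level mean is exact and the sum telescopes.
def toy_sampler(l):
    fine = 1.0 - 2.0 ** -l
    coarse = 0.0 if l == 0 else 1.0 - 2.0 ** -(l - 1)
    return fine, coarse

estimate = mlmc_estimate(toy_sampler, [10, 10, 10, 10])  # levels 0..3
```

In a real application the sampler draws a random parameter, solves the PDE at two adjacent fidelities with that same parameter, and the number of samples per level is chosen from the observed level variances, as the abstract's error-balancing algorithm does.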
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X-Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10^-4 and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)] that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2π/δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
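A Hanning-window expansion keeps the pulse smooth and pinned to zero at both ends of the gate. The sketch below assumes the generic form Ω(t) = Σ_n a_n [1 − cos(2πnt/T)]/2, which is a common choice for such ansätze but not necessarily the paper's exact parameterization; the gate time and coefficients are hypothetical:

```python
import math

def hanning_pulse(t, T, coeffs):
    """Waveform built from Hanning-window harmonics; each basis term
    (1 - cos(2*pi*n*t/T))/2 vanishes at t = 0 and t = T, so the pulse
    always switches on and off smoothly."""
    return sum(a * (1.0 - math.cos(2.0 * math.pi * n * t / T)) / 2.0
               for n, a in enumerate(coeffs, start=1))

T = 50.0                    # hypothetical gate time
coeffs = [0.8, -0.2, 0.05]  # hypothetical few-parameter expansion
edge0 = hanning_pulse(0.0, T, coeffs)
edgeT = hanning_pulse(T, T, coeffs)
mid = hanning_pulse(T / 2.0, T, coeffs)
```

With only a handful of coefficients a_n as free parameters, the optimization over pulse shapes reduces to a low-dimensional search, which is what makes a "few-parameter ansatz" tractable.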
Multilevel built environment features and individual odds of overweight and obesity in Utah
Xu, Yanqing; Wen, Ming; Wang, Fahui
2015-01-01
Based on the data from the Behavioral Risk Factor Surveillance System (BRFSS) in 2007, 2009 and 2011 in Utah, this research uses multilevel modeling (MLM) to examine the associations between neighborhood built environments and individual odds of overweight and obesity after controlling for individual risk factors. The BRFSS data include information on 21,961 individuals geocoded to zip code areas. Individual variables include BMI (body mass index) and socio-demographic attributes such as age, gender, race, marital status, education attainment, employment status, and whether an individual smokes. Neighborhood built environment factors measured at both zip code and county levels include street connectivity, walk score, distance to parks, and food environment. Two additional neighborhood variables, namely the poverty rate and urbanicity, are also included as control variables. MLM results show that at the zip code level, poverty rate and distance to parks are significant and negative covariates of the odds of overweight and obesity; and at the county level, food environment is the sole significant factor with stronger fast food presence linked to higher odds of overweight and obesity. These findings suggest that obesity risk factors lie in multiple neighborhood levels and built environment features need to be defined at a neighborhood size relevant to residents' activity space. PMID:26251559
Synergistic High Charge-Storage Capacity for Multi-level Flexible Organic Flash Memory.
Kang, Minji; Khim, Dongyoon; Park, Won-Tae; Kim, Jihong; Kim, Juhwan; Noh, Yong-Young; Baeg, Kang-Jun; Kim, Dong-Yu
2015-07-23
Electret and organic floating-gate memories are next-generation flash storage media for printed organic complementary circuits. While such flash memories can be easily fabricated using solution processes on flexible plastic substrates, their promise for on-chip memory organization is limited by unreliable bit operation and high write loads. We report here that a new architecture can improve the overall performance of organic memory and, in particular, meet the high storage capacity needed for multi-level operation. Our concept relies on the synergistic effect, in electrical characterization, of combining a polymer electret (poly(2-vinyl naphthalene) (PVN)) with metal (copper) nanoparticles. It is distinguished from most organic nano-floating-gate memories by using the electret dielectric instead of a general tunneling dielectric for additional charge storage. The uniform stacking of organic layers, including various dielectrics and poly(3-hexylthiophene) (P3HT) as an organic semiconductor, followed by thin-film coating using orthogonal solvents, greatly improves device precision despite easy and fast manufacture. Poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] as a high-k blocking dielectric also allows reduction of the programming voltage. The reported synergistic organic memory devices exhibit low power consumption, high cycle endurance, high thermal stability and suitable retention time, compared to electret and organic nano-floating-gate memory devices.
Multilevel Interventions: Measurement and Measures
Charns, Martin P.; Alligood, Elaine C.; Benzer, Justin K.; Burgess, James F.; Mcintosh, Nathalie M.; Burness, Allison; Partin, Melissa R.; Clauser, Steven B.
2012-01-01
Background: Multilevel intervention research holds the promise of more accurately representing real-life situations and, thus, with proper research design and measurement approaches, facilitating effective and efficient resolution of health-care system challenges. However, taking a multilevel approach to cancer care interventions creates both measurement challenges and opportunities. Methods: One thousand seventy-two cancer care articles from 2005 to 2010 were reviewed to examine the state of measurement in the multilevel intervention cancer care literature. Ultimately, 234 multilevel articles, 40 involving cancer care interventions, were identified. Additionally, literature from health services, social psychology, and organizational behavior was reviewed to identify measures that might be useful in multilevel intervention research. Results: The vast majority of measures used in multilevel cancer intervention studies were individual-level measures. Group-, organization-, and community-level measures were rarely used. Discussion of the independence, validity, and reliability of measures was scant. Discussion: Measurement issues may be especially complex when conducting multilevel intervention research. Measurement considerations that are associated with multilevel intervention research include those related to independence, reliability, validity, sample size, and power. Furthermore, multilevel intervention research requires identification of key constructs and measures by level and consideration of interactions within and across levels. Thus, multilevel intervention research benefits from thoughtful theory-driven planning and design, an interdisciplinary approach, and mixed methods measurement and analysis. PMID:22623598
Pastor, Dena A; Lazowski, Rory A
2018-01-01
The term "multilevel meta-analysis" is encountered not only in applied research studies, but also in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant, since any meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and the estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices of estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
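As an illustration of the random-effects pooling that underlies all four meta-analytic models above, here is a minimal stdlib-Python sketch of the DerSimonian-Laird estimator; the example data are invented, and this is not the metafor, SPSS, PROC MIXED, or HLM implementation named in the tutorial.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with between-study heterogeneity
    estimated by the DerSimonian-Laird moment method."""
    # Fixed-effect (inverse-variance) weights and pooled estimate.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Q statistic and the DL estimate of between-study variance tau^2.
    q = sum(wi * (e - mu_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return mu_re, se, tau2
```

When tau^2 estimates to zero this reduces to the fixed-effect model, which is why the tutorial can treat all four standard models within one multilevel framework.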
Neighbourhood food and physical activity environments in England, UK: does ethnic density matter?
Molaodi, Oarabile R; Leyland, Alastair H; Ellaway, Anne; Kearns, Ade; Harding, Seeromanie
2012-06-18
Background In England, obesity is more common in some ethnic minority groups than in Whites. This study examines the relationship between ethnic concentration and access to fast food outlets, supermarkets and physical activity facilities. Methods Data on ethnic concentration, fast food outlets, supermarkets and physical activity facilities were obtained at the lower super output area (LSOA) (population average of 1500). Poisson multilevel modelling was used to examine the association between own ethnic concentration and facilities, adjusted for area deprivation, urbanicity, population size and clustering of LSOAs within local authority areas. Results There was a higher proportion of ethnic minorities residing in areas classified as most deprived. Fast food outlets and supermarkets were more common and outdoor physical activity facilities were less common in most than least deprived areas. A gradient was not observed for the relationship between indoor physical activity facilities and area deprivation quintiles. In contrast to White British, increasing ethnic minority concentration was associated with increasing rates of fast food outlets. Rate ratios comparing rates of fast food outlets in high with those in low levels of ethnic concentration ranged between 1.28, 95% confidence interval 1.06-1.55 (Bangladeshi) and 2.62, 1.46-4.70 (Chinese). Similar to White British, however, increasing ethnic minority concentration was associated with increasing rates of supermarkets and indoor physical activity facilities. Outdoor physical activity facilities were less likely to be in high than low ethnic concentration areas for some minority groups. Conclusions Overall, ethnic minority concentration was associated with a mixture of both advantages and disadvantages in the provision of food outlets and physical activity facilities. These issues might contribute to ethnic differences in food choices and engagement in physical activity. PMID:22709527
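The rate ratios reported above (e.g. 1.28, 95% CI 1.06-1.55) come from adjusted multilevel Poisson models; as a hedged illustration of the underlying quantity only, here is a stdlib-Python sketch of an unadjusted rate ratio with a Wald confidence interval (the counts are invented, not from the study).

```python
import math

def rate_ratio(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Unadjusted rate ratio of group A vs group B with a Wald 95% CI
    computed on the log scale; the study's multilevel, covariate-adjusted
    models are far richer than this building block."""
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    # Standard error of log(RR) for independent Poisson counts.
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```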
Dunn, Erin C.; Masyn, Katherine E.; Yudron, Monica; Jones, Stephanie M.; Subramanian, S.V.
2014-01-01
The observation that features of the social environment, including family, school, and neighborhood characteristics, are associated with individual-level outcomes has spurred the development of dozens of multilevel or ecological theoretical frameworks in epidemiology, public health, psychology, and sociology, among other disciplines. Despite the widespread use of such theories in etiological, intervention, and policy studies, challenges remain in bridging multilevel theory and empirical research. This paper set out to synthesize these challenges and provide specific examples of methodological and analytical strategies researchers are using to gain a more nuanced understanding of the social determinants of psychiatric disorders, with a focus on children’s mental health. To accomplish this goal, we begin by describing multilevel theories, defining their core elements, and discussing what these theories suggest is needed in empirical work. In the second part, we outline the main challenges researchers face in translating multilevel theory into research. These challenges are presented for each stage of the research process. In the third section, we describe two methods being used as alternatives to traditional multilevel modeling techniques to better bridge multilevel theory and multilevel research. These are: (1) multilevel factor analysis and multilevel structural equation modeling; and (2) dynamic systems approaches. Through its review of multilevel theory, assessment of existing strategies, and examination of emerging methodologies, this paper offers a framework to evaluate and guide empirical studies on the social determinants of child psychiatric disorders as well as health across the lifecourse. PMID:24469555
Some practical universal noiseless coding techniques, part 3, module PSl14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples) All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
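The "best of many variable-length options per short block" adaptation described above can be sketched with Golomb-Rice codes, which fit the Standard Data Source model (small non-negative integers most likely). This is a simplified stdlib-Python illustration, not the flight module's algorithm set; the parameter range and block size are assumptions.

```python
def rice_encode_block(samples, k):
    """Rice code of parameter k: unary quotient (1s, 0-terminated),
    then k binary remainder bits, for non-negative integers."""
    bits = []
    for s in samples:
        q, r = s >> k, s & ((1 << k) - 1)
        bits.extend([1] * q + [0])
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def adaptive_encode(samples, block=16, kmax=8):
    """Per block of e.g. 16 samples, pick the k whose code is shortest
    and record the choice alongside the bits."""
    out = []
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        best_k = min(range(kmax), key=lambda k: len(rice_encode_block(blk, k)))
        out.append((best_k, rice_encode_block(blk, best_k)))
    return out

def rice_decode_block(bits, k, n):
    """Invert rice_encode_block for n samples."""
    vals, pos = [], 0
    for _ in range(n):
        q = 0
        while bits[pos] == 1:
            q, pos = q + 1, pos + 1
        pos += 1  # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        vals.append((q << k) | r)
    return vals
```

Because the chosen k is transmitted per block, the decoder needs no tables, mirroring the table-free property noted above.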
Majid, Abdul; Ali, Safdar; Iqbal, Mubashar; Kausar, Nabeela
2014-03-01
This study proposes a novel prediction approach for human breast and colon cancers using different feature spaces. The proposed scheme consists of two stages: the preprocessor and the predictor. In the preprocessor stage, the mega-trend diffusion (MTD) technique is employed to increase the samples of the minority class, thereby balancing the dataset. In the predictor stage, the machine-learning approaches of K-nearest neighbor (KNN) and support vector machines (SVM) are used to develop hybrid MTD-SVM and MTD-KNN prediction models. The MTD-SVM model provided the best values of accuracy, G-mean and Matthews correlation coefficient, 96.71%, 96.70% and 71.98%, for the cancer/non-cancer, breast/non-breast cancer and colon/non-colon cancer datasets, respectively. We found that hybrid MTD-SVM is the best with respect to both prediction performance and computational cost. The MTD-KNN model achieved moderately better prediction than hybrid MTD-NB (naïve Bayes), but at the expense of higher computing cost; it is faster than MTD-RF (random forest), although its predictions are not better. To the best of our knowledge, the reported results are the best so far for these datasets. The proposed scheme indicates that the developed models can be used as a tool for the prediction of cancer, and may be useful for the study of any sequential information such as protein or nucleic acid sequences. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
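The two-stage scheme above pairs minority-class oversampling with a classifier. As a hedged sketch only: the oversampler below jitters uniformly within the minority class's observed per-feature bounds (a crude stand-in for MTD, which widens those bounds with a diffusion membership function, and which we do not reproduce), and the classifier is a plain KNN rather than the paper's SVM; all data are invented.

```python
import math
import random

def diffuse_minority(samples, n_new, rng):
    """Generate synthetic minority samples drawn uniformly within the
    observed per-feature min/max bounds of the minority class."""
    lo = [min(col) for col in zip(*samples)]
    hi = [max(col) for col in zip(*samples)]
    return [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n_new)]

def knn_predict(train_x, train_y, x, k=3):
    """Majority vote among the k nearest neighbours (Euclidean)."""
    dists = sorted((math.dist(x, tx), ty) for tx, ty in zip(train_x, train_y))
    votes = [ty for _, ty in dists[:k]]
    return max(set(votes), key=votes.count)
```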
NASA Astrophysics Data System (ADS)
Noé, Pierre; Vallée, Christophe; Hippert, Françoise; Fillot, Frédéric; Raty, Jean-Yves
2018-01-01
Chalcogenide phase-change materials (PCMs), such as Ge-Sb-Te alloys, have shown outstanding properties, which has led to their successful use for a long time in optical memories (DVDs) and, recently, in non-volatile resistive memories. The latter, known as PCM memories or phase-change random access memories (PCRAMs), are the most promising candidates among emerging non-volatile memory (NVM) technologies to replace the current FLASH memories at CMOS technology nodes under 28 nm. Chalcogenide PCMs exhibit fast and reversible phase transformations between crystalline and amorphous states with very different transport and optical properties leading to a unique set of features for PCRAMs, such as fast programming, good cyclability, high scalability, multi-level storage capability, and good data retention. Nevertheless, PCM memory technology has to overcome several challenges to definitively invade the NVM market. In this review paper, we examine the main technological challenges that PCM memory technology must face and we illustrate how new memory architecture, innovative deposition methods, and PCM composition optimization can contribute to further improvements of this technology. In particular, we examine how to lower the programming currents and increase data retention. Scaling down PCM memories for large-scale integration means the incorporation of the PCM into more and more confined structures and raises materials science issues in order to understand interface and size effects on crystallization. Other materials science issues are related to the stability and ageing of the amorphous state of PCMs. The stability of the amorphous phase, which determines data retention in memory devices, can be increased by doping the PCM. Ageing of the amorphous phase leads to a large increase of the resistivity with time (resistance drift), which has up to now hindered the development of ultra-high multi-level storage devices. 
A review of the current understanding of all these issues is provided from a materials science point of view.
Feature-fused SSD: fast detection for small objects
NASA Astrophysics Data System (ADS)
Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian
2018-04-01
Small object detection is a challenging task in computer vision because of the limited resolution and information of small objects. To address this problem, most existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at high speed, using the Single Shot Multibox Detector (SSD), the best object detector with respect to the accuracy-versus-speed trade-off, as the base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD in order to improve accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. Experimental results show that these two fusion modules achieve mAP on PASCAL VOC2007 higher than the baseline SSD by 1.6 and 1.7 points, respectively, with 2-3 point improvements on some small-object categories. Their testing speeds are 43 and 40 FPS, respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
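The two fusion modules above can be sketched on toy 2-D feature maps. This is an assumption-laden illustration: real SSD fusion operates on multi-channel convolutional features with learned deconvolution, whereas here the coarse (context) map is upsampled by nearest neighbour and the maps are single-channel.

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]
        out.extend([wide, list(wide)])
    return out

def fuse_sum(fine, coarse):
    """Element-sum fusion: upsample the coarse map to the fine map's
    resolution, then add element-wise."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(fine, up)]

def fuse_concat(fine, coarse):
    """Concatenation fusion: stack the two maps along the channel axis,
    so each position carries both fine and contextual features."""
    up = upsample2x(coarse)
    return [[[a, b] for a, b in zip(r1, r2)] for r1, r2 in zip(fine, up)]
```

The element-sum keeps the channel count fixed (cheaper), while concatenation doubles it and lets a following layer learn how to weight context, which matches the accuracy/speed differences reported above.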
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1982-01-01
A fast computer program, GRID3C, was developed to generate multilevel three dimensional, C type, periodic, boundary conforming grids for the calculation of realistic turbomachinery and propeller flow fields. The technique is based on two analytic functions that conformally map a cascade of semi-infinite slits to a cascade of doubly infinite strips on different Riemann sheets. Up to four consecutively refined three dimensional grids are automatically generated and permanently stored on four different computer tapes. Grid nonorthogonality is introduced by a separate coordinate shearing and stretching performed in each of three coordinate directions. The grids are easily clustered closer to the blade surface, the trailing and leading edges and the hub or shroud regions by changing appropriate input parameters. Hub and duct (or outer free boundary) have different axisymmetric shapes. A vortex sheet of arbitrary thickness emanating smoothly from the blade trailing edge is generated automatically by GRID3C. Blade cross sectional shape, chord length, twist angle, sweep angle, and dihedral angle can vary in an arbitrary smooth fashion in the spanwise direction.
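GRID3C's actual mapping functions are not given in the abstract; as a generic illustration of the "clustering closer to the blade surface by changing input parameters" idea, here is a standard one-dimensional tanh stretching often used for such grids (this is an assumed device, not GRID3C's coordinate shearing).

```python
import math

def clustered_points(n, height, beta):
    """n+1 grid points on [0, height], clustered toward y = 0 more
    strongly as the stretching parameter beta grows."""
    return [height * (1.0 - math.tanh(beta * (1.0 - j / n)) / math.tanh(beta))
            for j in range(n + 1)]
```

Increasing beta packs points against the wall (y = 0) while leaving the far field coarse, which is the qualitative behaviour the abstract attributes to its input parameters.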
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1994-01-01
A fast algorithm has been developed for accurately generating boundary-conforming, three-dimensional consecutively refined computational grids applicable to arbitrary wing-body and axial turbomachinery geometries. This algorithm has been incorporated into the GRID3O computer program. The method employed in GRID3O is based on using an analytic function to generate two-dimensional grids on a number of coaxial axisymmetric surfaces positioned between the centerbody and the outer radial boundary. These grids are of the O-type and are characterized by quasi-orthogonality, geometric periodicity, and an adequate resolution throughout the flow field. Because the built-in nonorthogonal coordinate stretching and shearing cause the grid lines leaving the blade or wing trailing-edge to end at downstream infinity, use of the generated grid simplifies the numerical treatment of three-dimensional trailing vortex sheets. The GRID3O program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 450K of 8 bit bytes. The GRID3O program was developed in 1981.
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor system on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm, project the code as a possible effective tool for a second generation high throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac .
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images at a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the camera therefore generates 7.0 Gbyte/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal preprocessor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
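The reconstruction step above turns a flat ADC sample stream into image frames. A minimal sketch, assuming a fixed number of samples per line and lines per frame (both values invented; the real FPGA pipeline also handles triggering and overlap):

```python
def stream_to_frames(samples, line_len, lines_per_frame):
    """Cut a flat ADC sample stream into frames of shape
    (lines_per_frame, line_len); any trailing partial frame is dropped."""
    frame_len = line_len * lines_per_frame
    frames = []
    for f in range(len(samples) // frame_len):
        base = f * frame_len
        frames.append([
            samples[base + l * line_len: base + (l + 1) * line_len]
            for l in range(lines_per_frame)
        ])
    return frames
```

At 7.0 Gbyte/s this slicing must happen in hardware, which is why the prototype assigns it to the FPGA rather than a CPU.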
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. 
These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. 
The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diske
NASA Astrophysics Data System (ADS)
Kosovic, B.; Bryan, G. H.; Haupt, S. E.
2012-12-01
Schwartz et al. (2010) recently reported that the total gross energy-generating offshore wind resource in the United States in waters less than 30 m deep is approximately 1000 GW. The estimated offshore generating capacity is thus equivalent to the current generating capacity of the United States, so offshore wind power can play an important role in electricity production. However, most of this resource is located along the East Coast of the United States and in the Gulf of Mexico, areas frequently affected by tropical cyclones, including hurricanes. Hurricane-strength winds and the associated shear and turbulence can affect the performance and structural integrity of wind turbines. In a recent study, Rose et al. (2012) attempted to estimate the risk to offshore wind turbines from hurricane-strength winds over the lifetime of a wind farm (i.e., 20 years). According to Rose et al., turbine tower buckling has been observed in typhoons. They concluded that there is "substantial risk that Category 3 and higher hurricanes can destroy half or more of the turbines at some locations." More robust designs, including appropriate controls, can mitigate the risk of wind turbine damage, and good estimates of turbine loads under hurricane-strength winds are essential to develop such designs. We use output from a large-eddy simulation of a hurricane to estimate shear and turbulence intensity over the first couple of hundred meters above the sea surface. We compute power spectra of the three velocity components at several distances from the eye of the hurricane. Based on these spectra, analytical spectral forms are developed and included in TurbSim, a stochastic inflow turbulence code developed by the National Renewable Energy Laboratory (NREL, http://wind.nrel.gov/designcodes/preprocessors/turbsim/). TurbSim provides a numerical simulation including bursts of coherent turbulence associated with organized turbulent structures. 
It can generate realistic flow conditions that an operating turbine would encounter under hurricane strength winds. These flow fields can be used to estimate wind turbine loads and responses with AeroDyn (http://wind.nrel.gov/designcodes/simulators/aerodyn/) and FAST (http://wind.nrel.gov/designcodes/simulators/fast/) codes also developed by NREL.
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed; hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to the form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. 
A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
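The postprocessor's statement that "the eigenvalues determine the flutter frequency and damping" can be made concrete. A hedged sketch, assuming the common convention of an aeroelastic eigenvalue lambda = sigma + i*omega (the abstract does not specify LINFLUX's conventions):

```python
import math

def flutter_params(eig):
    """From an aeroelastic eigenvalue lambda = sigma + i*omega, return
    the damped frequency in Hz and the damping ratio zeta.  Positive
    zeta (sigma < 0) means the mode decays; flutter onset is zeta = 0."""
    sigma, omega = eig.real, eig.imag
    freq_hz = abs(omega) / (2.0 * math.pi)
    zeta = -sigma / math.hypot(sigma, omega)
    return freq_hz, zeta
```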
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
A multilevel preconditioner for domain decomposition boundary systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.
1991-12-11
In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It is shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain, and which grow at most proportionally to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.
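The two estimates can be restated schematically; the notation below is assumed (the abstract gives no symbols): S is the reduced boundary (Schur complement) system, B the multilevel preconditioner, and L the number of levels.

```latex
\[
\kappa(BS) \le C
\quad \text{(multilevel spaces on the whole domain)},
\qquad
\kappa(BS) \le C\,L
\quad \text{(multilevel boundary spaces only)},
\]
% where C is a constant independent of the mesh size and of L.
```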
Burns, Cate; Bentley, Rebecca; Thornton, Lukar; Kavanagh, Anne
2015-01-01
To examine the associations between financial, physical and transport conditions that may restrict food access (which we define as food security indicators) and the purchase of fast foods and nutritious staples such as bread and milk. Multilevel logistic and multinomial regression analysis of cross-sectional survey data to assess associations between the three indicators of food insecurity and household food shopping adjusted for sociodemographic and socio-economic variables. Random selection of households (n 3995) from fifty Census Collector Districts in Melbourne, Australia, in 2003. The main food shoppers in each household (n 2564). After adjustment for confounders, analysis showed that a greater likelihood of purchasing chain-brand fast food on a weekly basis compared with never was associated with running out of money to buy food (OR = 1·59; 95 % CI 1·08, 2·34) and reporting difficulties lifting groceries (OR = 1·77; 95 % CI 1·23, 2·54). Respondents without regular access to a car to do food shopping were less likely to purchase bread types considered more nutritious than white bread (OR = 0·75; 95 % CI 0·59, 0·95) and milk types considered more nutritious than full-cream milk (OR = 0·62; 95 % CI 0·47, 0·81). The food insecurity indicators were not associated with the purchasing of fruits, vegetables or non-chain fast food. Householders experiencing financial and physical barriers were more likely to frequently purchase chain fast foods while limited access to a car resulted in a lower likelihood that the nutritious options were purchased for two core food items (bread and milk). Policies and interventions that improve financial access to food and lessen the effect of physical limitations to carrying groceries may reduce the purchasing of fast foods. Further research is required on food sourcing and dietary quality among those with food access restrictions.
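The odds ratios above (e.g. OR = 1.59; 95% CI 1.08, 2.34) come from adjusted multilevel logistic models; purely as an illustration of the quantity being estimated, here is a stdlib-Python sketch of an unadjusted odds ratio with a Wald interval from a 2x2 table (counts invented, not the study's data).

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Unadjusted odds ratio from a 2x2 table (exposed: a cases, b
    non-cases; unexposed: c cases, d non-cases) with a Wald 95% CI
    computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```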
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.
Kobayashi, Masaki
2017-01-01
Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, and CHNNs are available for the storage of multilevel data, such as gray-scale images. CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and exiting the local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. We show through computer simulations that our proposed recall algorithm not only accelerates the recall but also improves the noise tolerance.
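For concreteness, a minimal sketch of a complex-valued Hopfield network with the projection rule is given below, using conventional synchronous recall rather than the authors' accelerated algorithm; the network size, resolution, and patterns are arbitrary. Stored patterns are exact fixed points of the iteration by construction, which is what the projection rule guarantees.

```python
import numpy as np

# Toy sketch of a complex-valued Hopfield network with the projection rule.
# Neuron states are K-th roots of unity (K discrete phase levels); stored
# patterns are columns of X. Conventional recall, not the authors' algorithm.

rng = np.random.default_rng(5)
K, n, p = 4, 30, 3                                  # phase levels, neurons, patterns
roots = np.exp(2j * np.pi * np.arange(K) / K)
X = roots[rng.integers(0, K, (n, p))]               # random stored patterns

# Projection rule: W = X (X^H X)^{-1} X^H, so W X = X exactly.
W = X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T

def quantize(z):
    # snap each component to the nearest K-th root of unity
    k = np.round(np.angle(z) * K / (2 * np.pi)).astype(int) % K
    return roots[k]

def recall(x, iters=20):
    # synchronous update: x <- quantize(W x)
    for _ in range(iters):
        x = quantize(W @ x)
    return x

stored = X[:, 0].copy()
out = recall(stored)                                # a stored pattern is a fixed point
```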
Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important to electromagnetically sensitive spacecraft. This study employs the multilevel fast multipole method (MLFMM) from a commercial electromagnetic tool, FEKO, to model the fairing electromagnetic environment in the presence of an internal transmitter with improved accuracy over industry-applied techniques. The fairing model includes material properties representative of the acoustic blanketing commonly used in vehicles. Equivalent surface material models within FEKO were successfully applied to simulate the test case. Finally, a simplified model is presented using Nicholson-Ross-Weir-derived blanket material properties. These properties are implemented with the coated-metal option to reduce the model to one layer, within the accuracy of the original three-layer simulation.
Mössbauer and X-ray study of biodegradation of 57Fe3O4 magnetic nanoparticles in rat brain
NASA Astrophysics Data System (ADS)
Gabbasov, R. R.; Cherepanov, V. M.; Chuev, M. A.; Lomov, A. A.; Mischenko, I. N.; Nikitin, M. P.; Polikarpov, M. A.; Panchenko, V. Y.
2016-12-01
Biodegradation of a 57Fe3O4-based dextran-stabilized ferrofluid in the ventricular cavities of the rat brain was studied by X-ray diffraction and Mössbauer spectroscopy. A two-step process of biodegradation, consisting of fast disintegration of the initial composite magnetic beads into separate superparamagnetic nanoparticles and subsequent slow dissolution of the nanoparticles, was found. Joint fitting of pairs of Mössbauer spectra measured at different temperatures, in the formalism of a multi-level relaxation model with one set of fitting parameters, allowed us to measure the concentration of exogenous iron in the rat brain as a function of time after the injection of the nanoparticles.
NASA Technical Reports Server (NTRS)
Ross, C.; Williams, G. P. W., Jr.
1975-01-01
The functional design of a preprocessor and its subsystems is described. A structure chart and a data flow diagram are included for each subsystem. A group of intermodule interface definitions (one definition per module) immediately follows the structure chart and data flow diagram for each subsystem. Each intermodule interface definition consists of the identification of the module, the function the module is to perform, the identification and definition of parameter interfaces to the module, and any design notes associated with the module. Compilers and computer libraries are also described.
Analysis of dangerous area of single berth oil tanker operations based on CFD
NASA Astrophysics Data System (ADS)
Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin
2018-04-01
Taking a single-berth oil tanker in the loading state as the research object, we analyzed the theory of VOCs diffusion during single-berth tanker operations, built a mesh model of VOCs diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOCs concentration at five detection points changes with time under specific influencing factors. From the simulated diffusion of VOCs, we delineated the dangerous area of single-berth oil tanker operations, so as to ensure the safe operation of the tanker.
Molecular Electronic Devices Based On Electrooptical Behavior Of Heme-Like Molecules
NASA Astrophysics Data System (ADS)
Simic-Glavaski, B.
1986-02-01
This paper discusses application of the electrically modulated and unusually strong Raman emitted light produced by an adsorbed monolayer of phthalocyanine molecules on silver electrode or silver bromide substrates and on neural membranes. The analysis of electronic energy levels in semiconducting silver bromide and the adsorbed phthalocyanine molecules suggests a lasing mechanism as a possible origin of the high enhancement factor in surface enhanced Raman scattering. Electrically modulated Raman scattering may be used as a carrier of information which is drawn from the fast intramolecular electron transfer and the multiplicity of quantum wells in phthalocyanine molecules. Fast switching times on the order of 10^-13 seconds have been measured at room temperature. Multilevel and multioutput optical signals have also been obtained from such an electrically modulated adsorbed monolayer of phthalocyanine molecules, which can be precisely addressed and interrogated. This may be of practical use to develop molecular electronic devices with high-density memory and fast parallel processing systems with a typical 10^20 gate·Hz/cm2 capacity at room temperature for use in optical computers. The paper also discusses the electrooptical modulation of Raman signals obtained from adsorbed bio-compatible phthalocyanine molecules on nerve membranes. This optical probe of neural systems can be used in studies of complex information processing in neural nets and provides a possible method for interfacing natural and man-made information processing devices.
Multilevel structural equation models for assessing moderation within and across levels of analysis.
Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J
2016-06-01
Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the Appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Multilevel SEM Strategies for Evaluating Mediation in Three-Level Data
ERIC Educational Resources Information Center
Preacher, Kristopher J.
2011-01-01
Strategies for modeling mediation effects in multilevel data have proliferated over the past decade, keeping pace with the demands of applied research. Approaches for testing mediation hypotheses with 2-level clustered data were first proposed using multilevel modeling (MLM) and subsequently using multilevel structural equation modeling (MSEM) to…
Formulation and Application of the Generalized Multilevel Facets Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chih-Yu
2007-01-01
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
Kim, Eun Sook; Cao, Chunhua
2015-01-01
Considering that group comparisons are common in social science, we examined two latent group mean testing methods for cases where the groups of interest were at either the between or the within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., the correlation between random effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.
Food Choice and Nutrition: A Social Psychological Perspective.
Hardcastle, Sarah J; Thøgersen-Ntoumani, Cecilie; Chatzisarantis, Nikos L D
2015-10-01
In this Special Issue, entitled "Food Choice and Nutrition: A Social Psychological Perspective", three broad themes have been identified: (1) social and environmental influences on food choice; (2) psychological influences on eating behaviour; and (3) eating behaviour profiling. The studies that addressed the social and environmental influences indicated that further research would do well to promote positive food choices rather than reduce negative food choices; promote the reading and interpretation of food labels; and find ways to effectively market healthy food choices through accessibility, availability and presentation. The studies on psychological influences found that intentions, perceived behavioural control, and confidence were predictors of healthy eating. Given the importance of psychological factors such as perceived behavioural control and self-efficacy, healthy eating interventions should reduce barriers to healthy eating and foster perceptions of confidence to consume a healthy diet. The final theme focused on the clustering of individuals according to eating behaviour. Some "types" of individuals reported more frequent consumption of fast foods, ready meals or convenience meals, or greater levels of disinhibition and less control over food cravings. Intervention designs that use multi-level strategies, combining psychological, social and environmental components as advocated by the Ecological Model of Behaviour Change, are likely to be more effective in reaching and engaging individuals susceptible to unhealthy eating habits than interventions operating on a single level.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to generate spatio-temporal outputs of these models efficiently, for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
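The core idea, compressing high-dimensional output fields and predicting only the reduced coordinates, can be sketched with a deliberately simplified linear stand-in for the paper's emulators; the data dimensions, the linear model, and all numbers below are invented.

```python
import numpy as np

# Toy sketch of emulation with dimensionality reduction: output fields are
# compressed by SVD/PCA, and a simple linear model maps the low-dimensional
# inputs to the leading principal-component scores. Synthetic data only.

rng = np.random.default_rng(4)
n_runs, n_inputs, n_grid = 40, 3, 500
X = rng.normal(size=(n_runs, n_inputs))                 # forcing inputs per run
basis = rng.normal(size=(n_inputs, n_grid))             # hidden linear response
Y = X @ basis + 0.01 * rng.normal(size=(n_runs, n_grid))  # output fields

Ym = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ym, full_matrices=False)   # PCA of the outputs
k = 3                                                   # retained components
scores = U[:, :k] * s[:k]                               # PC scores per run

# Fit a linear map (with intercept) from inputs to PC scores.
A = np.column_stack([np.ones(n_runs), X])
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def emulate(x_new):
    # predict PC scores, then reconstruct the full field from the PCA basis
    z = np.concatenate([[1.0], x_new]) @ coef
    return Ym + z @ Vt[:k]

x_test = rng.normal(size=n_inputs)
pred = emulate(x_test)
truth = x_test @ basis                                  # noise-free target
```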
Memristive effects in oxygenated amorphous carbon nanodevices
NASA Astrophysics Data System (ADS)
Bachmann, T. A.; Koelmans, W. W.; Jonnalagadda, V. P.; Le Gallo, M.; Santini, C. A.; Sebastian, A.; Eleftheriou, E.; Craciun, M. F.; Wright, C. D.
2018-01-01
Computing with resistive-switching (memristive) memory devices has shown much recent progress and offers an attractive route to circumvent the von-Neumann bottleneck, i.e. the separation of processing and memory, which limits the performance of conventional computer architectures. Due to their good scalability and nanosecond switching speeds, carbon-based resistive-switching memory devices could play an important role in this respect. However, devices based on elemental carbon, such as tetrahedral amorphous carbon or ta-C, typically suffer from a low cycling endurance. A material that has proven to be capable of combining the advantages of elemental carbon-based memories with simple fabrication methods and good endurance performance for binary memory applications is oxygenated amorphous carbon, or a-COx. Here, we examine the memristive capabilities of nanoscale a-COx devices, in particular their ability to provide the multilevel and accumulation properties that underpin computing type applications. We show the successful operation of nanoscale a-COx memory cells for both the storage of multilevel states (here 3-level) and for the provision of an arithmetic accumulator. We implement a base-16, or hexadecimal, accumulator and show how such a device can carry out hexadecimal arithmetic and simultaneously store the computed result in the self-same a-COx cell, all using fast (sub-10 ns) and low-energy (sub-pJ) input pulses.
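The base-16 accumulation idea can be illustrated with a purely behavioral sketch that models only the counting logic, not the a-COx device physics; the class and the carry-chaining scheme are invented for illustration. Each pulse advances a 16-level cell by one state, and a wrap-around emits a carry to the next cell.

```python
# Behavioral sketch (not a device model): an accumulator cell with 16
# programmable levels. Each input pulse advances the state by one; wrapping
# past 15 emits a carry, so a chain of cells counts in base 16.

class HexCell:
    def __init__(self):
        self.state = 0                      # one of 16 resistance levels

    def pulse(self) -> bool:
        # advance one level; return True (carry out) on wrap-around
        self.state = (self.state + 1) % 16
        return self.state == 0

def accumulate(cells, n_pulses):
    # feed pulses into the least-significant cell, propagating carries
    for _ in range(n_pulses):
        carry = cells[0].pulse()
        i = 1
        while carry and i < len(cells):
            carry = cells[i].pulse()
            i += 1                          # carry beyond the last cell is dropped
    return sum(c.state * 16 ** j for j, c in enumerate(cells))

cells = [HexCell(), HexCell()]              # two hex digits: counts up to 255
total = accumulate(cells, 37)               # 37 = 0x25
```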
NASA Astrophysics Data System (ADS)
Liu, Jian; Yang, Huafeng; Ma, Zhongyuan; Chen, Kunji; Zhang, Xinxin; Huang, Xinfan; Oda, Shunri
2018-01-01
We report an Al2O3/HfO2/Al2O3 sandwich-structure resistive switching device with significantly improved multilevel cell (MLC) operation capability: four stable and distinct resistance states (one low resistance state and three high resistance states) can be achieved by controlling the Reset stop voltage (V Reset-stop) during the Reset operation. The improved MLC operation capability can be attributed to the enhancement of the R HRS/R LRS ratio, which results from the increased series resistance and decreased leakage current introduced by inserting the two Al2O3 layers. For high-speed switching applications, we studied the initial switching dynamics through measurements of the pulse-width and pulse-amplitude dependence of the Set and Reset switching characteristics. The results showed that, under the same pulse amplitude, the initial Set process is faster than the initial Reset process, which can be explained by a thermally assisted, electric-field-induced rupture model of the oxygen-vacancy conductive filament. A proper combination of pulse amplitude and width can thus be used to optimize the device operation parameters. Moreover, the device demonstrated ultrafast program/erase speed (10 ns) and good pulse switching endurance (10^5 cycles), making it suitable for high-density, fast nonvolatile memory applications.
French, Deborah; Terrazas, Enrique
2013-01-01
Interfacing complex laboratory equipment to laboratory information systems (LIS) has become a more commonly encountered problem in clinical laboratories, especially for instruments that do not have an interface provided by the vendor. Liquid chromatography-tandem mass spectrometry is a great example of such complex equipment, and has become a frequent addition to clinical laboratories. As the testing volume on such instruments can be significant, manual data entry will also be considerable, and the potential for concomitant transcription errors arises. Due to this potential issue, our aim was to interface an AB SCIEX™ mass spectrometer to our Sunquest(®) LIS. We licensed software for the data management interface from the University of Pittsburgh, but extended this work as follows: the interface was designed so that it would accept a text file exported from the AB SCIEX™ 5500 QTrap(®) mass spectrometer, pre-process the file (using newly written code) into the correct format, and upload it into Sunquest(®) via file transfer protocol. The licensed software handled the majority of the interface tasks, with the exception of converting the output from the Analyst(®) software to the required Sunquest(®) import format. This required the writing of a "pre-processor" by one of the authors, which was easily integrated with the supplied software. We successfully implemented the data management interface licensed from the University of Pittsburgh. Given the coding that was required to write the pre-processor, and the alterations to the source code that were made while debugging the software, we suggest that before a laboratory decides to implement such an interface, a competent computer programmer should be available.
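The role of such a pre-processor, rewriting an instrument export into a flat-file upload format, can be sketched as follows. Both file formats below are invented for illustration, since the actual Analyst(®) export and Sunquest(®) import formats are not detailed here.

```python
# Hypothetical sketch: the real export and import formats are not public, so
# both formats here are invented. The pre-processor reads a tab-separated
# instrument export and rewrites each row as a pipe-delimited upload record.

def preprocess(analyst_export: str) -> str:
    lines = analyst_export.strip().splitlines()
    header = lines[0].split("\t")                 # column names from first row
    out = []
    for row in lines[1:]:
        rec = dict(zip(header, row.split("\t")))  # map column name -> value
        out.append("|".join([rec["SampleID"], rec["Analyte"], rec["Conc"]]))
    return "\n".join(out)

export = "SampleID\tAnalyte\tConc\nS001\tCortisol\t12.3\nS002\tCortisol\t8.7\n"
print(preprocess(export))
```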
Edge enhancement algorithm for low-dose X-ray fluoroscopic imaging.
Lee, Min Seok; Park, Chul Hee; Kang, Moon Gi
2017-12-01
Low-dose X-ray fluoroscopy has continually evolved to reduce radiation risk to patients during clinical diagnosis and surgery. However, the reduction in dose causes quality degradation of the acquired images. In general, an X-ray device has a time-average pre-processor to remove the generated quantum noise, but this pre-processor causes blurring and artifacts within moving edge regions, and noise remains in the image. During high-pass filtering (HPF) to enhance edge detail, this residual noise is amplified. In this study, a 2D edge enhancement algorithm comprising region-adaptive HPF with a transient improvement (TI) method, together with artifacts and noise reduction (ANR), was developed for degraded X-ray fluoroscopic images. The proposed method was applied to a static scene pre-processed by a low-dose X-ray fluoroscopy device. First, the sharpness of the X-ray image was improved using region-adaptive HPF with the TI method, which sharpens edge details without overshoot problems. Then, an ANR filter that uses an edge-directional kernel was developed to remove the artifacts and noise that can occur during sharpening, while preserving edge details. The developed method was applied to low-dose X-ray fluoroscopic images, and the final images were compared visually and numerically with images improved using conventional edge enhancement techniques. The quantitative and qualitative results indicate that the proposed method outperforms existing edge enhancement methods in terms of both objective criteria and subjective visual perception of the actual X-ray fluoroscopic image. The developed edge enhancement algorithm performed well when applied to actual low-dose X-ray fluoroscopic images, not only improving sharpness but also removing artifacts and noise, including overshoot. Copyright © 2017 Elsevier B.V. All rights reserved.
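The interplay of high-pass sharpening and overshoot control can be sketched in one dimension with a generic unsharp mask followed by a local min/max clamp; this is not the authors' region-adaptive HPF/TI/ANR pipeline, and all parameters are arbitrary.

```python
import numpy as np

# 1-D sketch: unsharp-mask sharpening with a simple overshoot guard, loosely
# in the spirit of HPF plus transient improvement. A noisy step edge is
# sharpened, then clamped to the local input range so it cannot ring.

rng = np.random.default_rng(1)
signal = np.concatenate([np.full(32, 10.0), np.full(32, 100.0)])  # step edge
noisy = signal + rng.normal(0, 1.0, signal.size)

kernel = np.ones(5) / 5                          # box low-pass filter
blurred = np.convolve(noisy, kernel, mode="same")
highpass = noisy - blurred                       # detail (high-frequency) layer
sharp = noisy + 1.5 * highpass                   # amplify edges

# Overshoot guard: clamp each sample to the local min/max of the input over a
# 5-sample window, so sharpening cannot ring beyond the original amplitudes.
pad = np.pad(noisy, 2, mode="edge")
windows = np.lib.stride_tricks.sliding_window_view(pad, 5)
guarded = np.clip(sharp, windows.min(axis=1), windows.max(axis=1))
```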
ERIC Educational Resources Information Center
Sun, Shuyan; Pan, Wei
2014-01-01
As applications of multilevel modelling in educational research increase, researchers realize that multilevel data collected in many educational settings are often not purely nested. The most common multilevel non-nested data structure is one that involves student mobility in longitudinal studies. This article provides a methodological review of…
ERIC Educational Resources Information Center
Lee, Woo-yeol; Cho, Sun-Joo
2017-01-01
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Alternative Methods for Assessing Mediation in Multilevel Data: The Advantages of Multilevel SEM
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zhang, Zhen; Zyphur, Michael J.
2011-01-01
Multilevel modeling (MLM) is a popular way of assessing mediation effects with clustered data. Two important limitations of this approach have been identified in prior research and a theoretical rationale has been provided for why multilevel structural equation modeling (MSEM) should be preferred. However, to date, no empirical evidence of MSEM's…
Construction of Covariance Functions with Variable Length Fields
NASA Technical Reports Server (NTRS)
Gaspari, Gregory; Cohn, Stephen E.; Guo, Jing; Pawson, Steven
2005-01-01
This article focuses on the construction, directly in physical space, of three-dimensional covariance functions parametrized by a tunable length field, and on an application of this theory to reproduce the Quasi-Biennial Oscillation (QBO) in the Goddard Earth Observing System, Version 4 (GEOS-4) data assimilation system. These covariance models are referred to as multi-level or nonseparable, to associate them with the application where a multi-level covariance with a large troposphere-to-stratosphere length field gradient is used to reproduce the QBO from sparse radiosonde observations in the tropical lower stratosphere. The multi-level covariance functions extend well-known single-level covariance functions depending only on a length scale. Generalizations of the first- and third-order autoregressive covariances in three dimensions are given, providing multi-level covariances with zero and three derivatives at zero separation, respectively. Multi-level piecewise rational covariances with two continuous derivatives at zero separation are also provided. Multi-level power-law covariances are constructed with continuous derivatives of all orders. Additional multi-level covariance functions are constructed using the Schur product of single- and multi-level covariance functions. A multi-level power-law covariance used to reproduce the QBO in GEOS-4 is described along with details of the assimilation experiments. The new covariance model is shown to represent the vertical wind shear associated with the QBO much more effectively than in the baseline GEOS-4 system.
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second-order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete equation system can be solved in an efficient way. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
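The structure of a V-cycle, smooth, restrict the residual, correct from the coarse grid, smooth again, can be sketched on a deliberately simple case: a two-grid cycle for the 1-D Poisson equation on a uniform grid. The paper's algorithm targets general second-order elliptic PDEs on adaptive sparse grids; this sketch only illustrates the cycle structure.

```python
import numpy as np

# Two-grid V-cycle for -u'' = f on [0, 1] with zero boundary values, using
# weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation
# prolongation, and a direct solve on the coarse grid.

def smooth(u, f, h, iters=3, omega=2 / 3):
    # weighted-Jacobi sweeps (damp high-frequency error components)
    for _ in range(iters):
        jac = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u[1:-1] = (1 - omega) * u[1:-1] + omega * jac
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(f2, h2):
    # direct tridiagonal solve of the coarse error equation
    m = f2.size - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h2 * h2)
    e = np.zeros_like(f2)
    e[1:-1] = np.linalg.solve(A, f2[1:-1])
    return e

def v_cycle(u, f, h):
    u = smooth(u, f, h)                                   # pre-smooth
    r = residual(u, f, h)
    r2 = np.zeros((r.size + 1) // 2)                      # full-weighting restriction
    r2[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3::2]
    e2 = coarse_solve(r2, 2 * h)                          # coarse-grid correction
    e = np.zeros_like(u)                                  # prolong (interpolate)
    e[::2] = e2
    e[1::2] = 0.5 * (e2[:-1] + e2[1:])
    return smooth(u + e, f, h)                            # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                        # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = float(np.abs(u - np.sin(np.pi * x)).max())          # dominated by discretization error
```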
Resche-Rigon, Matthieu; White, Ian R
2018-06-01
In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
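The two-stage idea, drawing a cluster-level mean first and individual values around it second, can be sketched for a single incomplete normal variable; this is a toy sketch, not the authors' MICE algorithm, and a full multiple-imputation draw would also propagate parameter uncertainty rather than reuse point estimates.

```python
import numpy as np

# Toy two-stage imputation for one normal variable in clustered data.
# Stage 1: a cluster mean (estimated where the cluster has any observed data,
# drawn from the between-cluster distribution where the variable is
# systematically missing). Stage 2: individual values around that mean.

rng = np.random.default_rng(2)
n_clusters, n_per = 10, 30
true_means = rng.normal(50, 5, n_clusters)
cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(true_means[cluster], 2.0)

miss = rng.random(x.size) < 0.2              # sporadically missing everywhere
miss |= np.isin(cluster, [0, 1])             # clusters 0, 1: systematically missing
x_obs = np.where(miss, np.nan, x)

# Stage-1 ingredients: observed cluster means and the between-cluster spread.
cmeans = {c: float(np.nanmean(x_obs[cluster == c]))
          for c in range(n_clusters)
          if not np.isnan(x_obs[cluster == c]).all()}
grand = float(np.mean(list(cmeans.values())))
between_sd = float(np.std(list(cmeans.values())))
centered = x_obs - np.array([cmeans.get(c, grand) for c in cluster])
within_sd = float(np.nanstd(centered))       # within-cluster spread

imputed = x_obs.copy()
for c in range(n_clusters):
    mu = cmeans.get(c)
    if mu is None:                           # stage 1: draw an unseen cluster mean
        mu = rng.normal(grand, between_sd)
    idx = np.isnan(x_obs) & (cluster == c)
    imputed[idx] = rng.normal(mu, within_sd, idx.sum())  # stage 2
```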
DC-DC Type High-Frequency Link DC for Improved Power Quality of Cascaded Multilevel Inverter
NASA Astrophysics Data System (ADS)
Sadikin, Muhammad; Senjyu, Tomonobu; Yona, Atsushi
2013-06-01
Multilevel inverters are emerging as a new breed of power converter options for power system applications. Recent advances in power switching devices have made multilevel inverters suitable for high-voltage and high-power applications, because they connect several devices in series without the need for component matching. Usually, a transformerless battery energy storage system based on a cascaded multilevel inverter is used as a countermeasure for voltage and frequency deviations, and the energy storage system can be reduced in size, weight, and cost. High-frequency link circuit topology is advantageous in realizing compact and lightweight power converters for uninterruptible power supply systems and for new energy systems using photovoltaic cells, fuel cells, and so on. This paper presents a DC-DC type high-frequency link DC (HFLDC) cascaded multilevel inverter. Each converter cell implements a control strategy for two H-bridge inverters that are controlled with the same multicarrier pulse width modulation (PWM) technique. The proposed cascaded multilevel inverter generates lower voltage total harmonic distortion (THD) than a conventional cascaded multilevel inverter. Digital simulations are carried out using PSCAD/EMTDC to validate the performance of the proposed cascaded multilevel inverter.
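The multicarrier PWM idea can be sketched for two cascaded H-bridge cells whose summed outputs form a five-level waveform; this is a generic phase-shifted-carrier sketch, not the paper's HFLDC topology, and the frequencies and modulation index are arbitrary.

```python
import numpy as np

# Phase-shifted multicarrier PWM for two cascaded H-bridge cells. Each cell
# compares the same sine reference against its own triangular carrier and
# outputs +1, 0, or -1; the cell outputs add to a 5-level waveform.

t = np.linspace(0.0, 0.02, 40000, endpoint=False)   # one 50 Hz fundamental cycle
ref = 0.9 * np.sin(2 * np.pi * 50 * t)              # modulation index 0.9

def triangle(t, f, phase):
    # triangular carrier in [-1, 1] at frequency f with fractional phase offset
    return 2.0 * np.abs(2.0 * ((t * f + phase) % 1.0) - 1.0) - 1.0

out = np.zeros_like(t)
for k in range(2):                                   # two cascaded H-bridge cells
    carrier = triangle(t, 2000.0, k / 4.0)           # carriers shifted a quarter period
    # unipolar PWM per cell: +1, 0, or -1
    out += (ref > carrier).astype(float) - (-ref > carrier).astype(float)

levels = np.unique(out)                              # expect -2, -1, 0, +1, +2
```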
Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres
NASA Astrophysics Data System (ADS)
Judge, Philip G.
2017-12-01
We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted, but, to meet the need for speed and stability, a "second-order escape probability" scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, at the expense of some numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the present method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation
NASA Astrophysics Data System (ADS)
Cho, Min Hyung; Cai, Wei
2010-12-01
A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of the 2D complex Helmholtz equation in a fast manner for a wide range of complex wave numbers k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes a description of the theoretical background, the FMM algorithm, the software structure, and some test runs.
Program summary
Program title: 2D-WFMM
Catalogue identifier: AEHI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 4636
No. of bytes in distributed program, including test data, etc.: 82 582
Distribution format: tar.gz
Programming language: C
Computer: Any
Operating system: Any operating system with gcc version 4.2 or newer
Has the code been vectorized or parallelized?: Multi-core processors with shared memory
RAM: Depends on the number of particles N and the wave number k
Classification: 4.8, 4.12
External routines: OpenMP (http://openmp.org/wp/)
Nature of problem: Evaluate the interaction between N particles governed by the fundamental solution of the 2D Helmholtz equation with complex k.
Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with a cutoff level which combines the low-frequency and high-frequency methods.
Running time: Depends on the number of particles N, the wave number k, and the number of CPU cores; CPU time increases as N log N.
NASA Technical Reports Server (NTRS)
Radovcich, N. A.; Gentile, D. P.
1989-01-01
A NASTRAN bulk dataset preprocessor was developed to facilitate the integration of filamentary composite laminate properties into composite structural resizing for stiffness requirements. The NASCOMP system generates delta stiffness and delta mass matrices for input to the flutter derivative program. The flutter baseline analysis, derivative calculations, and stiffness and mass matrix updates are controlled by engineer defined processes under an operating system called CBUS. A multi-layered design variable grid system permits high fidelity resizing without excessive computer cost. The NASCOMP system uses ply layup drawings for basic input. The aeroelastic resizing for stiffness capability was used during an actual design exercise.
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Gates, R. M.; Straayer, J. W.
1975-01-01
The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.
Ultra-high-speed optical transmission using digital-preprocessed analog-multiplexed DAC
NASA Astrophysics Data System (ADS)
Yamazaki, Hiroshi; Nagatani, Munehiko; Hamaoka, Fukutaro; Horikoshi, Kengo; Nakamura, Masanori; Matsushita, Asuka; Kanazawa, Shigeru; Hashimoto, Toshikazu; Nosaka, Hideyuki; Miyamoto, Yutaka
2018-02-01
In advanced fiber transmission systems with digital signal processors (DSPs), analog bandwidths of digital-to-analog converters (DACs), which interface the DSPs and optics, are the major factors limiting the data rates. We have developed a technology to extend the DACs' bandwidth using a digital preprocessor, two sub-DACs, and an analog multiplexer. This technology enables us to generate baseband signals with bandwidths of up to around 60 GHz, which is almost twice that of signals generated by typical CMOS DACs. In this paper, we describe the principle of the bandwidth extension and review high-speed transmission experiments enabled by this technology.
Research of TREETOPS Structural Dynamics Controls Simulation Upgrade
NASA Technical Reports Server (NTRS)
Yates, Rose M.
1996-01-01
Under the provisions of contract number NAS8-40194, which was entitled 'TREETOPS Structural Dynamics and Controls Simulation System Upgrade', Oakwood College contracted to produce an upgrade to the existing TREETOPS suite of analysis tools. This suite includes the main simulation program, TREETOPS, two interactive preprocessors, TREESET and TREEFLX, an interactive post processor, TREEPLOT, and an adjunct program, TREESEL. A 'Software Design Document', which provides descriptions of the argument lists and internal variables for each subroutine in the TREETOPS suite, was established. Additionally, installation guides for both DOS and UNIX platforms were developed. Finally, updated User's Manuals, as well as a Theory Manual, were generated.
AutoCAD-To-GIFTS Translator Program
NASA Technical Reports Server (NTRS)
Jones, Andrew
1989-01-01
AutoCAD-to-GIFTS translator program, ACTOG, developed to facilitate quick generation of small finite-element models using CASA/GIFTS finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Geometric entities recognized by ACTOG include points, lines, arcs, solids, three-dimensional lines, and three-dimensional faces. From this information, ACTOG creates GIFTS SRC file, which is then read into GIFTS preprocessor BULKM, or modified and read into EDITM, to create finite-element model. SRC file can be used as is or edited for any number of uses. Written in Microsoft QuickBASIC (Version 2.0).
Multilevel Interventions: Study Design and Analysis Issues
Gross, Cary P.; Zaslavsky, Alan M.; Taplin, Stephen H.
2012-01-01
Multilevel interventions, implemented at the individual, physician, clinic, health-care organization, and/or community level, increasingly are proposed and used in the belief that they will lead to more substantial and sustained changes in behaviors related to cancer prevention, detection, and treatment than would single-level interventions. It is important to understand how intervention components are related to patient outcomes and identify barriers to implementation. Designs that permit such assessments are uncommon, however. Thus, an important way of expanding our knowledge about multilevel interventions would be to assess the impact of interventions at different levels on patients as well as the independent and synergistic effects of influences from different levels. It also would be useful to assess the impact of interventions on outcomes at different levels. Multilevel interventions are much more expensive and complicated to implement and evaluate than are single-level interventions. Given how little evidence there is about the value of multilevel interventions, however, it is incumbent upon those arguing for this approach to do multilevel research that explicates the contributions that interventions at different levels make to the desired outcomes. Only then will we know whether multilevel interventions are better than more focused interventions and gain greater insights into the kinds of interventions that can be implemented effectively and efficiently to improve health and health care for individuals with cancer. This chapter reviews designs for assessing multilevel interventions and analytic ways of controlling for potentially confounding variables that can account for the complex structure of multilevel data. PMID:22623596
Using PHP/MySQL to Manage Potential Mass Impacts
NASA Technical Reports Server (NTRS)
Hager, Benjamin I.
2010-01-01
This paper presents a new application using commercially available software to manage mass properties for spaceflight vehicles. PHP/MySQL (PHP: Hypertext Preprocessor and My Structured Query Language) are a web scripting language and a database language commonly used in concert with each other. They open up new opportunities to develop cutting-edge mass properties tools, in particular tools for the management of potential mass impacts (threats and opportunities). The paper begins by providing an overview of the functions and capabilities of PHP/MySQL. The focus of this paper is on how PHP/MySQL are being used to develop an advanced "web accessible" database system for identifying and managing mass impacts on NASA's Ares I Upper Stage program, managed by the Marshall Space Flight Center. To fully describe this application, examples of the data, search functions, and views are provided to demonstrate not only the function, but also the security, ease of use, simplicity, and eye appeal of this new application. This paper concludes with an overview of other potential mass properties applications and tools that could be developed using PHP/MySQL. The premise behind this paper is that PHP/MySQL are software tools that are easy to use and readily available for the development of cutting-edge mass properties applications. These tools are capable of providing "real-time" searching and status of an active database, automated report generation, and other capabilities to streamline and enhance mass properties management applications. By using PHP/MySQL, proven existing methods for managing mass properties can be adapted to present-day information technology to accelerate mass properties data gathering, analysis, and reporting, allowing mass properties management to keep pace with today's fast-paced design and development processes.
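As a hedged illustration of the database pattern the paper describes (the paper itself uses PHP/MySQL; the schema, table, and column names below are invented, and Python's built-in sqlite3 stands in for MySQL), a mass-impact ledger with a "real-time" net-open roll-up query might look like:

```python
# Invented schema for illustration only -- not the Ares I database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mass_impact (
    id INTEGER PRIMARY KEY,
    subsystem TEXT,
    kind TEXT CHECK (kind IN ('threat', 'opportunity')),
    delta_mass_kg REAL,
    status TEXT)""")
rows = [
    ("avionics", "threat", 12.5, "open"),
    ("structures", "opportunity", -8.0, "open"),
    ("propulsion", "threat", 30.0, "accepted"),
]
conn.executemany(
    "INSERT INTO mass_impact (subsystem, kind, delta_mass_kg, status) "
    "VALUES (?, ?, ?, ?)", rows)

# Status roll-up: net open mass impact per subsystem.
open_net = dict(conn.execute(
    "SELECT subsystem, SUM(delta_mass_kg) FROM mass_impact "
    "WHERE status = 'open' GROUP BY subsystem"))
```

In the web application described, a query of this shape would back a search view or an automated report rather than a Python dictionary.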
Variable-speed wind power system with improved energy capture via multilevel conversion
Erickson, Robert W.; Al-Naseem, Osama A.; Fingersh, Lee Jay
2005-05-31
A system and method for efficiently capturing electrical energy from a variable-speed generator are disclosed. The system includes a matrix converter using full-bridge, multilevel switch cells, in which semiconductor devices are clamped to a known constant DC voltage of a capacitor. The multilevel matrix converter is capable of generating multilevel voltage waveforms of arbitrary magnitude and frequency. The matrix converter can be controlled using space vector modulation.
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multilevel inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multilevel inverter, and a finite state machine (FSM) module coupled to the multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse-width-modulated gate-drive signal for each switching device of the parallel multilevel inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
Tanaka, Gouhei; Aihara, Kazuyuki
2009-09-01
A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more outstanding when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256 gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
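The multilevel step function discussed above quantizes the phase of a neuron's complex net input to one of K states on the unit circle. A minimal sketch, assuming the common phase-quantization form (this is not code from the paper; the sigmoid variant it proposes would replace the hard quantization with a smooth map):

```python
# Multilevel step activation for complex-valued multistate neurons:
# the phase of z is quantized into one of K sectors of the unit circle.
import cmath, math

def multilevel_step(z, K):
    """Map complex z to one of K unit-magnitude states by phase quantization."""
    theta = cmath.phase(z) % (2 * math.pi)        # phase in [0, 2*pi)
    sector = int(theta / (2 * math.pi / K)) % K   # which of K sectors
    return cmath.exp(1j * 2 * math.pi * sector / K)
```

With K = 256, for example, each neuron state can represent one gray level of the noisy images mentioned in the abstract.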
Long bone reconstruction using multilevel lengthening of bone defect fragments.
Borzunov, Dmitry Y
2012-08-01
This paper presents experimental findings to substantiate the use of multilevel bone fragment lengthening for managing extensive long bone defects caused by diverse aetiologies and shows its clinical introduction which could provide a solution for the problem of reducing the total treatment time. Both experimental and clinical multilevel lengthening to bridge bone defect gaps was performed with the use of the Ilizarov method only. The experimental findings and clinical outcomes showed that multilevel defect fragment lengthening could provide sufficient bone formation and reduction of the total osteosynthesis time in one stage as compared to traditional Ilizarov bone transport. The method of multilevel regeneration enabled management of critical-size defects that measured on average 13.5 ± 0.7 cm in 78 patients. The experimental and clinical results proved the efficiency of the Ilizarov non-free multilevel bone plasty that can be recommended for practical use.
2014-01-01
Background: This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and applies it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression have been used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression method for addressing spatial dependency in datasets. Methods: In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby generates more robust outputs. Results: The findings show that sex, employment status, monthly household income, and perceived level of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. The spatially filtered multilevel model provides unbiased estimates and improves the explanatory power of the model compared to conventional multilevel models, although there are no changes in the signs of parameters or the significance levels between the two models in this case study. Conclusions: The integrated approach proposed in this paper is a useful tool for understanding the geographical distribution of self-rated health status within a multilevel framework. In future research, it would be useful to apply the spatially filtered multilevel model to other datasets in order to clarify the differences between the two models. It is anticipated that this integrated method will also outperform conventional models when it is used in other contexts. PMID:24571639
Simulator for multilevel optimization research
NASA Technical Reports Server (NTRS)
Padula, S. L.; Young, K. C.
1986-01-01
A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until the desired sampling space is reached. Such a multilevel sampling scheme can be used in conjunction with a delayed estimation method, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
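The hierarchical draw described above can be illustrated with a toy example (a sketch of the sampling idea only, not the full mixture Kalman filter; the symbols, groups, and weights are invented): drawing a QAM-style indicator by first sampling a group and then a symbol within that group targets the same distribution as a direct draw over all symbols.

```python
# Two-stage (multilevel) categorical draw: group first, then member.
import random

def multilevel_draw(weights, groups, rng):
    """weights: dict symbol -> unnormalized weight; groups: list of symbol lists."""
    group_w = [sum(weights[s] for s in g) for g in groups]
    g = rng.choices(range(len(groups)), weights=group_w)[0]     # high level
    sym = rng.choices(groups[g],
                      weights=[weights[s] for s in groups[g]])[0]  # low level
    return sym
```

For 16-QAM the groups could be the four quadrants of the constellation; the payoff in the filter comes from being able to prune or delay decisions at the group level before descending.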
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes--the quasi-2D and the hybrid partitioning scheme--is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
Multi-level obstruction in obstructive sleep apnoea: prevalence, severity and predictive factors.
Phua, C Q; Yeo, W X; Su, C; Mok, P K H
2017-11-01
To characterise multi-level obstruction in terms of prevalence, obstructive sleep apnoea severity and predictive factors, and to collect epidemiological data on upper airway morphology in obstructive sleep apnoea patients. Retrospective review of 250 obstructive sleep apnoea patients. On clinical examination, 171 patients (68.4 per cent) had multi-level obstruction, 49 (19.6 per cent) had single-level obstruction and 30 (12 per cent) showed no obstruction. Within each category of obstructive sleep apnoea severity, multi-level obstruction was more prevalent. Multi-level obstruction was associated with severe obstructive sleep apnoea (more than 30 events per hour) (p = 0.001). Obstructive sleep apnoea severity increased with the number of obstruction sites (correlation coefficient = 0.303, p < 0.001). Multi-level obstruction was more likely in younger (p = 0.042), male (p = 0.045) patients, with high body mass index (more than 30 kg/m2) (p < 0.001). Palatal (p = 0.004), tongue (p = 0.026) and lateral pharyngeal wall obstructions (p = 0.006) were associated with severe obstructive sleep apnoea. Multi-level obstruction is more prevalent in obstructive sleep apnoea and is associated with increased severity. Obstruction at certain anatomical levels contributes more towards obstructive sleep apnoea severity.
ERIC Educational Resources Information Center
Connections: A Journal of Adult Literacy, 1997
1997-01-01
This issue contains 12 articles written by teachers who have investigated various aspects of the multilevel question in their own classrooms. "The Multilevel Question" (Lenore Balliro) provides an introduction. "Deconstructing the Great Wall of Print" (Richard Goldberg) investigates reading strategies that allow students with a wide range of…
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
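The multilevel Monte Carlo idea underlying the multilevel EnKF can be sketched on a scalar SDE (a toy illustration with invented parameters, not the estimator from the paper): a telescoping sum over Euler discretizations on a hierarchy of time grids, with the fine and coarse paths at each level coupled through shared Brownian increments.

```python
# Toy MLMC estimate of E[X_T] for dX = mu*X dt + sig*X dW (geometric
# Brownian motion), using coupled Euler paths. Illustrative only.
import random, math

def euler_pair(level, mu, sig, T, x0, rng):
    """One coupled (fine, coarse) Euler sample; coarse uses half the steps."""
    n_f = 2 ** level
    h_f = T / n_f
    xf = xc = x0
    dw_c = 0.0
    for i in range(n_f):
        dw = rng.gauss(0.0, math.sqrt(h_f))
        xf += mu * xf * h_f + sig * xf * dw
        dw_c += dw
        if i % 2 == 1:                    # every two fine steps = one coarse step
            xc += mu * xc * (2 * h_f) + sig * xc * dw_c
            dw_c = 0.0
    return xf, (xc if level > 0 else 0.0)

def mlmc_mean(L, N, mu, sig, T, x0, seed=0):
    """Telescoping sum: E[X_0] + sum of coupled level corrections."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(L + 1):
        s = 0.0
        for _ in range(N):
            xf, xc = euler_pair(level, mu, sig, T, x0, rng)
            s += xf - xc
        est += s / N
    return est
```

The coupling makes each correction term have small variance, so most samples can be spent on the cheap coarse levels; the multilevel EnKF exploits the same structure inside the ensemble update.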
Multi-level trellis coded modulation and multi-stage decoding
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu
1990-01-01
Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
Huo, Zhiguang; Tseng, George
2017-01-01
Cancer subtype discovery is the first step to deliver personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulations and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features, and computing efficiency. PMID:28959370
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects in the lower terahertz (THz) region. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region. Hence the THz field scattered from a metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficient methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. Using the Monte Carlo method, the radar cross section of the rough metal surface is computed with both the multilevel fast multipole algorithm and the proposed hybrid algorithm. The numerical results show that the proposed algorithm has good accuracy and rapidly simulates the scattering properties in the lower THz region.
A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques
NASA Astrophysics Data System (ADS)
Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.
2017-05-01
With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed, and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is also illustrated in this paper.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full-wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
A Multilevel Assessment of Differential Item Functioning.
ERIC Educational Resources Information Center
Shen, Linjun
A multilevel approach was proposed for the assessment of differential item functioning and compared with the traditional logistic regression approach. Data from the Comprehensive Osteopathic Medical Licensing Examination for 2,300 freshman osteopathic medical students were analyzed. The multilevel approach used three-level hierarchical generalized…
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single-level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single-level solutions and in multilevel studies where the equality constraints have been handled indirectly.
The Cortex Transform as an image preprocessor for sparse distributed memory: An initial study
NASA Technical Reports Server (NTRS)
Olshausen, Bruno; Watson, Andrew
1990-01-01
An experiment is described which was designed to evaluate the use of the Cortex Transform as an image preprocessor for Sparse Distributed Memory (SDM). In the experiment, a set of images was injected with Gaussian noise, preprocessed with the Cortex Transform, and then encoded into bit patterns. The various spatial frequency bands of the Cortex Transform were encoded separately so that they could be evaluated based on their ability to properly cluster patterns belonging to the same class. The results of this study indicate that by simply encoding the low-pass band of the Cortex Transform, a very suitable input representation for the SDM can be achieved.
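One simple way to turn a filtered band into a bit pattern of the kind described (a hedged sketch; the paper's actual encoding scheme is not reproduced here) is to threshold each band coefficient at the band's median, which yields a balanced binary vector suitable as an SDM address or datum:

```python
# Binarize a band of filter coefficients at the median so that roughly
# half the bits are 1 -- a balanced code. Illustrative encoding only.
def encode_band(coeffs):
    """Return a 0/1 list: 1 where a coefficient exceeds the band median."""
    s = sorted(coeffs)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return [1 if c > median else 0 for c in coeffs]
```

A balanced encoding keeps the Hamming distances between stored patterns well spread, which helps the clustering behavior the experiment measured.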
Real-time separation of multineuron recordings with a DSP32C signal processor.
Gädicke, R; Albus, K
1995-04-01
We have developed a hardware and software package for real-time discrimination of multiple-unit activities recorded simultaneously from multiple microelectrodes using a VME-bus system. Compared with other systems described in the literature or commercially available, our system has the following advantages: (1) each electrode is served by its own preprocessor (DSP32C); (2) on-line spike discrimination is performed independently for each electrode; and (3) the VME-bus allows processing of data received from 16 electrodes. The digitized (62.5 kHz) spike waveform is itself used as the model spike; the algorithm allows complete waveforms to be compared and sorted in real time into 8 different models per electrode.
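The waveform-matching rule described above can be sketched as a nearest-template classifier (a minimal illustration; the DSP32C implementation details are not reproduced, and the rejection threshold is an invented parameter):

```python
# Assign a digitized spike to the stored model waveform with the smallest
# sum of squared differences; reject if no model matches closely enough.
def classify_spike(waveform, models, reject=float("inf")):
    """Return index of best-matching model spike, or -1 if none beats `reject`."""
    best_i, best_err = -1, reject
    for i, m in enumerate(models):
        err = sum((a - b) ** 2 for a, b in zip(waveform, m))
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```

On the real system this comparison runs per electrode on the dedicated DSP, so the 8 models per electrode can be matched within the inter-sample budget at 62.5 kHz.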
PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Suhs, Norman; Dietz, William; Rogers, Stuart; Nash, Steve; Chan, William; Tramel, Robert; Onufer, Jeff
2006-01-01
This viewgraph presentation reviews the use and requirements of PEGASUS 5, a code which performs a pre-processing step for the overset CFD method. The code prepares the overset volume grids for the flow solver by computing the domain connectivity database and blanking out grid points which are contained inside a solid body. PEGASUS 5 successfully automates most of the overset process. It leads to a dramatic reduction in user input over previous generations of overset software, and can lead to an order-of-magnitude reduction in both turn-around time and user expertise requirements. It is not, however, a "black-box" procedure; care must be taken to examine the resulting grid system.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
A Survey of Plasmas and Their Applications
NASA Technical Reports Server (NTRS)
Eastman, Timothy E.; Grabbe, C. (Editor)
2006-01-01
Plasmas are everywhere and relevant to everyone. We bathe in a sea of photons, quanta of electromagnetic radiation, whose sources (natural and artificial) are dominantly plasma-based (stars, fluorescent lights, arc lamps, ...). Plasma surface modification and materials processing contribute increasingly to a wide array of modern artifacts; e.g., tiny plasma discharge elements constitute the pixel arrays of plasma televisions, and plasma processing provides roughly one-third of the steps to produce semiconductors, essential elements of our networking and computing infrastructure. Finally, plasmas are central to many cutting-edge technologies with high potential (compact high-energy particle accelerators; plasma-enhanced waste processors; high-tolerance surface preparation and multifuel preprocessors for transportation systems; fusion for energy production).
RFI Detection and Mitigation using Independent Component Analysis as a Pre-Processor
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Gholian, Armen; Bradley, Damon C.; Wong, Mark; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.
2016-01-01
Radio-frequency interference (RFI) has negatively impacted scientific measurements of passive remote sensing satellites. This has been observed in the L-band radiometers Soil Moisture and Ocean Salinity (SMOS), Aquarius and more recently, Soil Moisture Active Passive (SMAP). RFI has also been observed at higher frequencies such as K band. Improvements in technology have allowed wider bandwidth digital back ends for passive microwave radiometry. A complex signal kurtosis radio frequency interference detector was developed to help identify corrupted measurements. This work explores the use of Independent Component Analysis (ICA) as a blind source separation (BSS) technique to pre-process radiometric signals for use with the previously developed real and complex signal kurtosis detectors.
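As background for the kurtosis detectors mentioned above, a minimal real-valued sketch of kurtosis-based RFI flagging follows; the tolerance of 0.5 around the Gaussian value of 3 is an illustrative assumption, not a flight threshold:

```python
import math
import random

def kurtosis(samples):
    """Sample kurtosis m4/m2^2; approximately 3 for Gaussian noise,
    so a large departure from 3 suggests RFI contamination."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((s - mean) ** 2 for s in samples) / n
    m4 = sum((s - mean) ** 4 for s in samples) / n
    return m4 / (m2 ** 2)

def rfi_flag(samples, tol=0.5):
    return abs(kurtosis(samples) - 3.0) > tol

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(10000)]                 # radiometric noise
tone = [math.sin(2 * math.pi * 0.01 * i) for i in range(10000)]    # CW interferer (kurtosis ~1.5)
print(rfi_flag(noise), rfi_flag(tone))  # -> False True
```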
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype supporting health-care personnel in retrieving patient-referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet network and transformed via the eXtensible Stylesheet Language (XSL) to be visualized in a uniform way in standard Web browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.
Post test review of a single car test of multi-level passenger equipment
DOT National Transportation Integrated Search
2008-04-22
The single car test of multi-level equipment described in this paper was designed to help evaluate the crashworthiness of a multi-level car in a controlled collision. The data collected from this test will be used to refine engineering models. ...
Multilevel Modeling of Social Segregation
ERIC Educational Resources Information Center
Leckie, George; Pillinger, Rebecca; Jones, Kelvyn; Goldstein, Harvey
2012-01-01
The traditional approach to measuring segregation is based upon descriptive, non-model-based indices. A recently proposed alternative is multilevel modeling. The authors further develop the argument for a multilevel modeling approach by first describing and expanding upon its notable advantages, which include an ability to model segregation at a…
Multilevel and Diverse Classrooms
ERIC Educational Resources Information Center
Baurain, Bradley, Ed.; Ha, Phan Le, Ed.
2010-01-01
The benefits and advantages of classroom practices incorporating unity-in-diversity and diversity-in-unity are what "Multilevel and Diverse Classrooms" is all about. Multilevel classrooms--also known as mixed-ability or heterogeneous classrooms--are a fact of life in ESOL programs around the world. These classrooms are often not only…
Patel, Opal; Shahulhameed, Safraj; Shivashankar, Roopa; Tayyab, Mohammad; Rahman, Atiqur; Prabhakaran, Dorairaj; Tandon, Nikhil; Jaacks, Lindsay M
2017-07-19
The food environment has been implicated as an underlying contributor to the global obesity epidemic. However, few studies have evaluated the relationship between the food environment, dietary intake, and overweight/obesity in low- and middle-income countries (LMICs). The aim of this study was to assess the association of full service and fast food restaurant density with dietary intake and overweight/obesity in Delhi, India. Data are from a cross-sectional, population-based study conducted in Delhi. Using multilevel cluster random sampling, 5364 participants were selected from 134 census enumeration blocks (CEBs). Geographic information system data were available for 131 CEBs (n = 5264) from a field survey conducted using hand-held global positioning system devices. The number of full service and fast food restaurants within a 1-km buffer of CEBs was recorded by trained staff using ArcGIS software, and participants were assigned to tertiles of full service and fast food restaurant density based on their resident CEB. Height and weight were measured using standardized procedures and overweight/obesity was defined as a BMI ≥25 kg/m². The most common full service and fast food restaurants were Indian savory restaurants (57.2%) and Indian sweet shops (25.8%). Only 14.1% of full service and fast food restaurants were Western style. After adjustment for age, household income, education, and tobacco and alcohol use, participants in the highest tertile of full service and fast food restaurant density were less likely to consume fruit and more likely to consume refined grains compared to participants in the lowest tertile (both p < 0.05). In unadjusted logistic regression models, participants in the highest versus lowest tertile of full service and fast food restaurant density were significantly more likely to be overweight/obese: odds ratio (95% confidence interval), 1.44 (1.24, 1.67).
After adjustment for age, household income, and education, the effect was attenuated: 1.08 (0.92, 1.26). Results were consistent with further adjustment for tobacco and alcohol use, moderate physical activity, and owning a bicycle or motorized vehicle. Most full service and fast food restaurants were Indian, suggesting that the nutrition transition in this megacity may be better characterized by the large number of unhealthy Indian food outlets rather than the Western food outlets. Full service and fast food restaurant density in the residence area of adults in Delhi, India, was associated with poor dietary intake. It was also positively associated with overweight/obesity, but this was largely explained by socioeconomic status. Further research is needed exploring these associations prospectively and in other LMICs.
Multilevel Higher-Order Item Response Theory Models
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
ERIC Educational Resources Information Center
Schölmerich, Vera L. N.; Kawachi, Ichiro
2016-01-01
Scholars and practitioners frequently make recommendations to develop family planning interventions that are "multilevel." Such interventions take explicit account of the role of environments by incorporating multilevel or social-ecological frameworks into their design and implementation. However, research on how interventions have…
Multilevel Modeling: A Review of Methodological Issues and Applications
ERIC Educational Resources Information Center
Dedrick, Robert F.; Ferron, John M.; Hess, Melinda R.; Hogarty, Kristine Y.; Kromrey, Jeffrey D.; Lang, Thomas R.; Niles, John D.; Lee, Reginald S.
2009-01-01
This study analyzed the reporting of multilevel modeling applications of a sample of 99 articles from 13 peer-reviewed journals in education and the social sciences. A checklist, derived from the methodological literature on multilevel modeling and focusing on the issues of model development and specification, data considerations, estimation, and…
Building Path Diagrams for Multilevel Models
ERIC Educational Resources Information Center
Curran, Patrick J.; Bauer, Daniel J.
2007-01-01
Multilevel models have come to play an increasingly important role in many areas of social science research. However, in contrast to other modeling strategies, there is currently no widely used approach for graphically diagramming multilevel models. Ideally, such diagrams would serve two functions: to provide a formal structure for deriving the…
The goal of this study was to evaluate the possible use of the Environmental Relative Moldiness Index (ERMI) to quantify mold contamination in multi-level, office buildings. Settled-dust samples were collected in multi-level, office buildings and the ERMI value for each sample de...
Teaching Multilevel Adult ESL Classes. ERIC Digest.
ERIC Educational Resources Information Center
Shank, Cathy C.; Terrill, Lynda R.
Teachers in multilevel adult English-as-a-Second-Language classes are challenged to use a variety of materials, activities, and techniques to engage the interest of the learners and assist them in their educational goals. This digest recommends ways to choose and organize content for multilevel classes, explains grouping strategies, discusses a…
Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM
ERIC Educational Resources Information Center
Li, Xin; Beretvas, S. Natasha
2013-01-01
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Conducting Multilevel Analyses in Medical Education
ERIC Educational Resources Information Center
Zyphur, Michael J.; Kaplan, Seth A.; Islam, Gazi; Barsky, Adam P.; Franklin, Michael S.
2008-01-01
A significant body of education literature has begun using multilevel statistical models to examine data that reside at multiple levels of analysis. In order to provide a primer for medical education researchers, the current work gives a brief overview of some issues associated with multilevel statistical modeling. To provide an example of this…
Multilevel modelling: Beyond the basic applications.
Wright, Daniel B; London, Kamala
2009-05-01
Over the last 30 years statistical algorithms have been developed to analyse datasets that have a hierarchical/multilevel structure. Particularly within developmental and educational psychology these techniques have become common where the sample has an obvious hierarchical structure, like pupils nested within a classroom. We describe two areas beyond the basic applications of multilevel modelling that are important to psychology: modelling the covariance structure in longitudinal designs and using generalized linear multilevel modelling as an alternative to methods from signal detection theory (SDT). Detailed code for all analyses is described using packages for the freeware R.
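For context on the signal-detection methods that the generalized linear multilevel model replaces, the classical per-subject SDT sensitivity estimate can be computed directly; the counts and the log-linear correction below are illustrative choices, not from the article:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Classical SDT sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 / +1) guards against rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition data: 45/50 hits, 10/50 false alarms
print(round(d_prime(45, 5, 10, 40), 3))
```

A multilevel logistic or probit regression recovers the same quantity from trial-level data while pooling across participants, which is the alternative the authors advocate.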
Design and Implementation of 13 Levels Multilevel Inverter for Photovoltaic System
NASA Astrophysics Data System (ADS)
Subramani, C.; Dhineshkumar, K.; Palanivel, P.
2018-04-01
This paper presents the design and modernization of an S-type, PV-based 13-level multilevel inverter with a reduced number of switches. The existing S-type multilevel inverter contains a larger number of switches and voltage sources. The multilevel inverter is one of the most advantageous power converters for high-power and modern applications with reduced switch counts. The fundamental goal of the 13-level multilevel inverter is to obtain a stepped voltage from several levels of DC voltages. The controller provides exact switching time periods to the switches through a driver circuit using a PWM methodology. The performance of the proposed multilevel inverter is verified using MATLAB/Simulink and compares favorably with the existing topologies.
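The stepped output voltage built from several DC levels can be sketched by quantizing a sinusoidal reference onto the 13 available levels; the DC step size, fundamental frequency, and sampling rate below are illustrative assumptions:

```python
import math

def stepped_output(t, v_dc=1.0, levels=6, f=50.0, v_peak=6.0):
    """Quantize a sinusoidal reference onto 2*levels + 1 discrete
    levels (-levels .. +levels), as a 13-level inverter would."""
    ref = v_peak * math.sin(2 * math.pi * f * t)
    step = max(-levels, min(levels, round(ref / v_dc)))
    return step * v_dc

# Sample one 50 Hz period at 1 kHz and list the levels visited
samples = [stepped_output(n / 1000.0) for n in range(20)]
print(sorted(set(samples)))
```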
NASA Astrophysics Data System (ADS)
Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin
2017-08-01
Multilevel data storage for phase-change memory (PCM) has attracted increasing attention in the memory market as a way to implement high-capacity memory systems and reduce cost-per-bit. In this work, we present a universal programming method using SET staircase current pulses in PCM cells, which can exploit the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET staircase waveform can be optimized by real-time TCAD simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1-kbit PCM test chip have validated the proposed multilevel programming scheme, which improves information storage density, the robustness of the resistance levels, and energy efficiency, while avoiding additional process complexity.
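The 2-bit/4-state targets with equal logarithmic spacing can be computed directly from the two end-point resistances; the 10 kΩ and 10 MΩ end points below are illustrative assumptions, not device data:

```python
import math

def log_spaced_levels(r_min, r_max, n_states=4):
    """Return n_states target resistances equally spaced in log10(R),
    i.e. with a constant ratio between neighbouring levels."""
    ratio = (r_max / r_min) ** (1.0 / (n_states - 1))
    return [r_min * ratio ** k for k in range(n_states)]

levels = log_spaced_levels(1e4, 1e7)  # assumed SET/RESET window
print([f"{r:.3g}" for r in levels])   # -> ['1e+04', '1e+05', '1e+06', '1e+07']
```

Equal log spacing maximizes the read margin between states when resistance drift and noise scale multiplicatively, which is why the paper targets equal logarithmic intervals rather than equal linear ones.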
Integrated structure/control law design by multilevel optimization
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Schmidt, David K.
1989-01-01
A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution, and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C′ which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum-likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
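The squared Euclidean distance measure on a signal set, central to the formulation above, can be illustrated for the standard unit-energy 8-PSK constellation (chosen here as an example; the paper's signal sets are more general):

```python
import cmath
import math

def psk_point(i, m=8):
    """i-th point of a unit-energy M-PSK signal set."""
    return cmath.exp(2j * math.pi * i / m)

def sq_euclid(a, b):
    """Squared Euclidean distance between two signal points."""
    return abs(a - b) ** 2

# Minimum squared distance of 8-PSK: 4*sin^2(pi/8) ~= 0.586
d2 = min(sq_euclid(psk_point(0), psk_point(i)) for i in range(1, 8))
print(round(d2, 3))  # -> 0.586
```

In a multilevel construction each component code contributes at a different intra-set distance, and the overall minimum squared Euclidean distance is governed by the weakest level, which is what the interdependent constructions in the third part improve.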
Chae, Heejoon; Lee, Sangseon; Seo, Seokjun; Jung, Daekyoung; Chang, Hyeonsook; Nephew, Kenneth P; Kim, Sun
2016-12-01
Measuring gene expression, DNA sequence variation, and DNA methylation status is routinely done using high throughput sequencing technologies. To analyze such multi-omics data and explore relationships, reliable bioinformatics systems are much needed. Existing systems are either for exploring curated data or for processing omics data in the form of a library such as R. Thus scientists have much difficulty in investigating relationships among gene expression, DNA sequence variation, and DNA methylation using multi-omics data. In this study, we report a system called BioVLAB-mCpG-SNP-EXPRESS for the integrated analysis of DNA methylation, sequence variation (SNPs), and gene expression for distinguishing cellular phenotypes at the pairwise and multiple phenotype levels. The system can be deployed on either the Amazon cloud or a publicly available high-performance computing node, and the data analysis and exploration of the analysis result can be conveniently done using a web-based interface. In order to alleviate analysis complexity, all processing steps are fully automated, and a graphical workflow system is integrated to represent analysis progression in real time. The BioVLAB-mCpG-SNP-EXPRESS system works in three stages. First, it processes and analyzes multi-omics data as input in the form of the raw data, i.e., FastQ files. Second, various integrated analyses such as methylation vs. gene expression and mutation vs. methylation are performed. Finally, the analysis result can be explored in a number of ways through a web interface for multi-level, multi-perspective exploration. Multi-level interpretation can be done at the gene, gene-set, pathway, or network level, and multi-perspective exploration can proceed from the gene expression, DNA methylation, sequence variation, or relationship perspective. The utility of the system is demonstrated by performing an analysis of a data set of 30 phenotypically distinct breast cancer cell lines.
BioVLAB-mCpG-SNP-EXPRESS is available at http://biohealth.snu.ac.kr/software/biovlab_mcpg_snp_express/. Copyright © 2016 Elsevier Inc. All rights reserved.
A new spin on primordial hydrogen recombination and a refined model for spinning dust radiation
NASA Astrophysics Data System (ADS)
Ali-Haimoud, Yacine
2011-08-01
This thesis describes theoretical calculations in two subjects: the primordial recombination of the electron-proton plasma about 400,000 years after the Big Bang and electric dipole radiation from spinning dust grains in the present-day interstellar medium. Primordial hydrogen recombination has recently been the subject of renewed attention because of the impact of its theoretical uncertainties on predicted cosmic microwave background (CMB) anisotropy power spectra. The physics of the primordial recombination problem can be divided into two qualitatively different aspects. On the one hand, a detailed treatment of the non-thermal radiation field in the optically thick Lyman lines is required for an accurate recombination history near the peak of the visibility function. On the other hand, stimulated recombinations and out-of-equilibrium effects are important at late times, and a multilevel calculation is required to correctly compute the low-redshift end of the ionization history. Another facet of the problem is the requirement of computational efficiency, as a large number of recombination histories must be evaluated in Markov chains when analyzing CMB data. In this thesis, an effective multilevel atom method is presented that speeds up multilevel atom computations by more than 5 orders of magnitude. The impact of previously ignored radiative transfer effects is quantified, and explicitly shown to be negligible. Finally, the numerical implementation of a fast and highly accurate primordial recombination code partly written by the author is described. The second part of this thesis is devoted to one of the potential galactic foregrounds for CMB experiments: the rotational emission from small dust grains. The rotational state of dust grains is described, first classically, and assuming that grains are rotating about their axis of greatest inertia.
This assumption is then lifted, and a quantum-mechanical calculation is presented for disk-like grains with a randomized nutation state. In both cases, the probability distribution for the total grain angular momentum is computed with a Fokker-Planck equation, and the resulting emissivity is evaluated, as a function of environmental parameters. These computations are implemented in a public code written by the author.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-01-01
Objectives To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting Routinely and annually collected national data on China from 2008 to 2012. Participants 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. 
Conclusions For examining dose–response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. PMID:28399510
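For readers unfamiliar with the baseline being compared, the basic single-level difference-in-differences estimate is the pre-to-post change in the treated group minus the change in the controls; the group means below are illustrative, not from the NEMP data:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: (post - pre) change in the treated
    group minus the (post - pre) change in the control group."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical log-outpatient-visit values for two facilities per group
effect = did_estimate([4.0, 4.2], [4.5, 4.7], [4.1, 4.3], [4.2, 4.4])
print(round(effect, 3))  # -> 0.4
```

The multilevel RM approach in the article instead models the outcome at every time point jointly, with random slopes for the dose-response term at the province, county, and facility levels, which is why it can pinpoint units with unusually high or low responses while the DID contrast cannot.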
Baldacchino, Alex; O'Rourke, Louise; Humphris, Gerry
2018-07-01
Alcohol Brief Interventions (ABI) have been implemented throughout Scotland since 2008 and aim to reduce hazardous drinking through a Scottish Government funded initiative delivered in a range of settings, including Accident and Emergency (A and E) departments. To study the extent to which Alcohol Brief Interventions (ABI) are associated with later health service use, an opportunistic informatics approach was applied. A unique patient identifier was used to link patient data with core datasets spanning two years previous and two years post ABI. Variables included inpatient attendance, outpatient attendance, psychiatric admissions, and A and E attendance and prescribing. Patients (N = 1704) who presented at A and E departments and who reported an average alcohol consumption of more than 8 units daily received the ABI. The Fast Alcohol Screening Test (FAST) was used to assess patients for hazardous alcohol consumption. Multilevel linear modelling was employed to predict post-intervention utilisation using pre-ABI variables and controlling for person characteristics and venue. A significant decrease in A and E usage was found at one and two years following the ABI intervention. Previous health service use was predictive of later service use. A single question (Item 4) on the FAST was predictive of A and E attendance at one and two years. This investigation and the methodology used provide support for the delivery of the ABI. However, it cannot be ascertained whether this effect is due to the ABI itself or simply a result of making contact with a specialist in the addiction field. Copyright © 2018 Elsevier B.V. All rights reserved.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that, when a lower-level routine fails, the next-higher routine will be called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
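The information reduction via region-of-interest windows and feature motion prediction can be sketched with a constant-velocity predictor that re-centres the window each frame; the window half-width and feature positions below are illustrative assumptions:

```python
def predict_roi(prev, curr, half=16):
    """Predict the next region-of-interest window from the last two
    image-plane feature positions, assuming constant velocity.
    Returns (x_min, y_min, x_max, y_max)."""
    nx = curr[0] + (curr[0] - prev[0])
    ny = curr[1] + (curr[1] - prev[1])
    return (nx - half, ny - half, nx + half, ny + half)

# Feature moved from (100, 80) to (104, 82): predicted centre (108, 84)
print(predict_roi((100, 80), (104, 82)))  # -> (92, 68, 124, 100)
```

Only the pixels inside the predicted window are processed in the fast inner loop; the higher, slower levels of the hierarchy verify that the feature really is where the predictor assumed.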
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo
2018-05-10
Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.
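The hierarchical ordinal logistic model builds on the standard proportional-odds link; the category probabilities for a single patient, given cutpoints and a linear predictor, can be computed as follows (the cutpoint and coefficient values are illustrative assumptions, not fitted estimates from the study):

```python
import math

def ordinal_probs(x_beta, cutpoints):
    """Proportional-odds model: P(Y <= k) = logistic(c_k - x*beta);
    category probabilities are differences of adjacent cumulatives."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - x_beta) for c in cutpoints] + [1.0]
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return probs

# Five-level drug response with four assumed cutpoints and x*beta = 0.3
p = ordinal_probs(x_beta=0.3, cutpoints=[-2.0, -0.5, 0.5, 2.0])
print([round(q, 3) for q in p], round(sum(p), 6))
```

The hierarchical version described above places heavy-tailed Cauchy priors on the coefficients in x*beta, allowing many correlated gene-expression predictors to be fitted jointly.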
Effects of Teacher-Student Relationships on Peer Harassment: A Multilevel Study
ERIC Educational Resources Information Center
Lucas-Molina, Beatriz; Williamson, Ariel A.; Pulido, Rosa; Pérez-Albéniz, Alicia
2015-01-01
Peer harassment is a major social problem affecting children and adolescents internationally. Much research has focused on student-to-student harassment from either an individual or a multilevel perspective. There is a paucity of multilevel research on students' relationships with the classroom teacher. The purpose of this study was to use a…
Coping with Multi-Level Classes Effectively and Creatively.
ERIC Educational Resources Information Center
Strasheim, Lorraine A.
This paper includes a discussion of the problem of multilevel Latin classes, a description of various techniques and perspectives the teacher might use in dealing with these classes, and copies of materials and exercises that have proved useful in multilevel classes. Because the reasons for the existence of such classes are varied, it is suggested…
Multilevel Evaluation Alignment: An Explication of a Four-Step Model
ERIC Educational Resources Information Center
Yang, Huilan; Shen, Jianping; Cao, Honggao; Warfield, Charles
2004-01-01
Using the evaluation work on the W.K. Kellogg Foundation's Unleashing Resources Initiative as an example, in this article we explicate a general four-step model appropriate for multilevel evaluation alignment. We review the relevant literature, argue for the need for evaluation alignment in a multilevel context, explain the four-step model,…
Alternatives to Multilevel Modeling for the Analysis of Clustered Data
ERIC Educational Resources Information Center
Huang, Francis L.
2016-01-01
Multilevel modeling has grown in use over the years as a way to deal with the nonindependent nature of observations found in clustered data. However, other alternatives to multilevel modeling are available that can account for observations nested within clusters, including the use of Taylor series linearization for variance estimation, the design…
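For regression coefficients, the Taylor series linearization mentioned in this abstract amounts to the familiar cluster-robust "sandwich" variance estimator. A minimal NumPy sketch on simulated clustered data (all names and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clustered data: 30 clusters of 20, with a shared cluster random effect.
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = rng.normal(scale=1.0, size=n_clusters)[cluster]  # shared within cluster
y = 2.0 + 0.5 * x + u + rng.normal(size=n_clusters * m)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Sandwich (linearization) variance: bread = (X'X)^-1, meat sums cluster scores.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in range(n_clusters):
    Xg, rg = X[cluster == g], resid[cluster == g]
    sg = Xg.T @ rg  # per-cluster score vector
    meat += np.outer(sg, sg)
se_cluster = np.sqrt(np.diag(bread @ meat @ bread))

# Naive OLS standard errors that ignore the clustering.
sigma2 = resid @ resid / (len(y) - 2)
se_naive = np.sqrt(np.diag(sigma2 * bread))
```

With a non-trivial cluster effect, the linearized standard error for the intercept is markedly larger than the naive OLS one, which is exactly the correction such design-based alternatives to multilevel modeling provide.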
Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI
ERIC Educational Resources Information Center
Forer, Barry; Zumbo, Bruno D.
2011-01-01
The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
ERIC Educational Resources Information Center
Upton, Matthew G.; Egan, Toby Marshall
2007-01-01
The established limitations of career development (CD) theory and human resource development (HRD) theory building are addressed by expanding the framing of these issues to multilevel contexts. Multilevel theory building is an approach most effectively aligned with HRD literature and CD and HRD practice realities. An innovative approach multilevel…
A General Multilevel SEM Framework for Assessing Multilevel Mediation
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Zyphur, Michael J.; Zhang, Zhen
2010-01-01
Several methods for testing mediation hypotheses with 2-level nested data have been proposed by researchers using a multilevel modeling (MLM) paradigm. However, these MLM approaches do not accommodate mediation pathways with Level-2 outcomes and may produce conflated estimates of between- and within-level components of indirect effects. Moreover,…
Multilevel Motivation and Engagement: Assessing Construct Validity across Students and Schools
ERIC Educational Resources Information Center
Martin, Andrew J.; Malmberg, Lars-Erik; Liem, Gregory Arief D.
2010-01-01
Statistical biases associated with single-level analyses underscore the importance of partitioning variance/covariance matrices into individual and group levels. From a multilevel perspective based on data from 21,579 students in 58 high schools, the present study assesses the multilevel factor structure of motivation and engagement with a…
The Consequences of Ignoring Individuals' Mobility in Multilevel Growth Models: A Monte Carlo Study
ERIC Educational Resources Information Center
Luo, Wen; Kwok, Oi-man
2012-01-01
In longitudinal multilevel studies, especially in educational settings, it is fairly common that participants change their group memberships over time (e.g., students switch to different schools). Participant's mobility changes the multilevel data structure from a purely hierarchical structure with repeated measures nested within individuals and…
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
The Effects of Autonomy and Empowerment on Employee Turnover: Test of a Multilevel Model in Teams
ERIC Educational Resources Information Center
Liu, Dong; Zhang, Shu; Wang, Lei; Lee, Thomas W.
2011-01-01
Extending research on voluntary turnover in the team setting, this study adopts a multilevel self-determination theoretical approach to examine the unique roles of individual and social-contextual motivational precursors, autonomy orientation and autonomy support, in reducing team member voluntary turnover. Analysis of multilevel time-lagged data…
ERIC Educational Resources Information Center
Gochhayat, Jyotiranjan; Giri, Vijai N.; Suar, Damodar
2017-01-01
This study provides a new conceptualization of educational leadership with a multilevel and integrative approach. It examines the impact of multilevel leadership (MLL) on the effectiveness of technical educational institutes through the mediating effects of organizational communication, bases of power and organizational culture. Data were…
Multilevel corporate environmental responsibility.
Karassin, Orr; Bar-Haim, Aviad
2016-12-01
The multilevel empirical study of the antecedents of corporate social responsibility (CSR) has been identified as "the first knowledge gap" in CSR research. Based on an extensive literature review, the present study outlines a conceptual multilevel model of CSR, then designs and empirically validates an operational multilevel model of the principal driving factors affecting corporate environmental responsibility (CER), as a measure of CSR. Both conceptual and operational models incorporate three levels of analysis: institutional, organizational, and individual. The multilevel nature of the design allows for the assessment of the relative importance of the levels and of their components in the achievement of CER. Unweighted least squares (ULS) regression analysis reveals that the institutional-level variables have medium relationships with CER, some variables having a negative effect. The organizational level is revealed as having strong and positive significant relationships with CER, with organizational culture and managers' attitudes and behaviors as significant driving forces. The study demonstrates the importance of multilevel analysis in improving the understanding of CSR drivers, relative to single level models, even if the significance of specific drivers and levels may vary by context. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multilevel processes and cultural adaptation: Examples from past and present small-scale societies.
Reyes-García, V; Balbo, A L; Gomez-Baggethun, E; Gueze, M; Mesoudi, A; Richerson, P; Rubio-Campillo, X; Ruiz-Mallén, I; Shennan, S
2016-12-01
Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and emergent properties or collective behaviours. After a brief review of the concept of "cultural adaptation" from the perspective of cultural evolutionary theory and resilience theory, the core of the paper is constructed around the exploration of multilevel processes occurring at the temporal, spatial, social and political scales. We do so by examining small-scale societies' case studies. In each section, we discuss the importance of the selected scale for understanding cultural adaptation and then present an example that illustrates how multilevel processes in the selected scale help explain observed patterns in the cultural adaptive process. We end the paper discussing the potential of modelling and computer simulation for studying multilevel processes in cultural adaptation.
Multilevel processes and cultural adaptation: Examples from past and present small-scale societies
Reyes-García, V.; Balbo, A. L.; Gomez-Baggethun, E.; Gueze, M.; Mesoudi, A.; Richerson, P.; Rubio-Campillo, X.; Ruiz-Mallén, I.; Shennan, S.
2016-01-01
Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and emergent properties or collective behaviours. After a brief review of the concept of “cultural adaptation” from the perspective of cultural evolutionary theory and resilience theory, the core of the paper is constructed around the exploration of multilevel processes occurring at the temporal, spatial, social and political scales. We do so by examining small-scale societies’ case studies. In each section, we discuss the importance of the selected scale for understanding cultural adaptation and then present an example that illustrates how multilevel processes in the selected scale help explain observed patterns in the cultural adaptive process. We end the paper discussing the potential of modelling and computer simulation for studying multilevel processes in cultural adaptation. PMID:27774109
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation code, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valencia, Jayson F.; Dirks, James A.
2008-08-29
EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical when trying to create input files for the thousands of buildings needed to simulate national building energy performance. To streamline the process of creating the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on the Linux cluster by using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using a Linux utility called "make", the idf files can then be automatically run through the Linux cluster and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files results in the ability to batch simulate building energy performance and scale the results to national energy consumption estimates.
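The batching idea, one CSV row of high-level parameters expanded into a full input file per building, can be sketched generically. The template, column names, and comments below are hypothetical stand-ins, not the NREL Preprocessor's or GPARM's actual formats:

```python
import csv
import io
from string import Template

# Hypothetical high-level parameter file: one row per building.
csv_text = """building_id,floor_area_m2,num_floors,climate_zone
store_001,1200,1,4A
office_002,5600,3,5B
"""

# Tiny stand-in template; a real EnergyPlus idf contains hundreds of objects.
idf_template = Template(
    "! Auto-generated input for $building_id\n"
    "Building,\n"
    "  $building_id,   !- Name\n"
    "  $floor_area_m2, !- Hypothetical floor-area macro\n"
    "  $num_floors;    !- Hypothetical floor-count macro\n"
)

# Expand every CSV row into a named idf body, ready to write out and batch-run.
idf_files = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    idf_files[row["building_id"] + ".idf"] = idf_template.substitute(row)
```

In a pipeline like the one described, the generated files would then be dispatched to the cluster (e.g. by `make`) and each simulation's outputs aggregated into one table.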
NASA Astrophysics Data System (ADS)
Nelson, R. R.; Taylor, T.; O'Dell, C.; Cronk, H. Q.; Partain, P.; Frankenberg, C.; Eldering, A.; Crisp, D.; Gunson, M. R.; Chang, A.; Fisher, B.; Osterman, G. B.; Pollock, H. R.; Savtchenko, A.; Rosenthal, E. J.
2015-12-01
Effective cloud and aerosol screening is critically important to the Orbiting Carbon Observatory-2 (OCO-2), which can accurately determine column averaged dry air mole fraction of carbon dioxide (XCO2) only when scenes are sufficiently clear of scattering material. It is crucial to avoid sampling biases, in order to maintain a globally unbiased XCO2 record for inversion modeling to determine sources and sinks of carbon dioxide. This work presents analysis from the current operational B7 data set, which is identifying as clear approximately 20% of the order one million daily soundings. Of those soundings that are passed to the L2 retrieval algorithm, we find that almost 80% are yielding XCO2 estimates that converge. Two primary preprocessor algorithms are used to cloud screen the OCO-2 soundings. The A-Band Preprocessor (ABP) uses measurements in the Oxygen-A band near 0.76 microns (mm) to determine scenes with large photon path length modifications due to scattering by aerosol and clouds. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) algorithm (IDP) computes ratios of retrieved CO2 (and H2O) in the 1.6mm (weak CO2) and 2.0mm (strong CO2) spectral bands to determine scenes with spectral differences, indicating contamination by scattering materials. We demonstrate that applying these two algorithms in tandem provides robust cloud screening of the OCO-2 data set. We compare the OCO-2 cloud screening results to collocated Moderate Resolution Imaging Spectroradiometer (MODIS) cloud mask data and show that agreement between the two sensors is approximately 85-90%. A detailed statistical analysis is performed on a winter and spring 16-day repeat cycle for the nadir-land, glint-land and glint-water viewing geometries. No strong seasonal, spatial or footprint dependencies are found, although the agreement tends to be worse at high solar zenith angles and for snow and ice covered surfaces.
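The tandem screening and the agreement statistic can be illustrated with synthetic per-sounding flags; the error rates below are invented for illustration and are not the instruments' actual performance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-sounding flags: True = "clear". The two preprocessors are
# applied in tandem, so a sounding passes only if both call it clear.
n = 10_000
truly_clear = rng.random(n) < 0.2
abp_clear = truly_clear ^ (rng.random(n) < 0.05)  # each screener ~5% error
idp_clear = truly_clear ^ (rng.random(n) < 0.05)
oco2_clear = abp_clear & idp_clear

# Collocated reference mask (stand-in for the MODIS cloud mask, ~8% error).
modis_clear = truly_clear ^ (rng.random(n) < 0.08)

agreement = np.mean(oco2_clear == modis_clear)  # fraction of matching flags
pass_rate = np.mean(oco2_clear)                 # fraction screened as clear
```

Even with modest per-screener error rates, requiring both screeners to agree drives the false-clear rate down sharply, at the cost of a lower pass rate, which mirrors the roughly 20% clear fraction and 85-90% MODIS agreement reported in the abstract.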
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-06-24
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1987-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, David B.
1989-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1985-08-23
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Planarization of metal films for multilevel interconnects
Tuckerman, D.B.
1989-03-21
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration. 6 figs.
Schölmerich, Vera L N; Kawachi, Ichiro
2016-06-01
Scholars and practitioners frequently make recommendations to develop family planning interventions that are "multilevel." Such interventions take explicit account of the role of environments by incorporating multilevel or social-ecological frameworks into their design and implementation. However, research on how interventions have translated these concepts into practice in the field of family planning-and generally in public health-remains scarce. This article seeks to review the current definitions of multilevel interventions and their operationalization in the field of family planning. First, we highlight the divergent definitions of multilevel interventions and show the persistent ambiguity around this term. We argue that interventions involving activities at several levels but lacking targets (i.e., objectives) to create change on more than one level have not incorporated a social-ecological framework and should therefore not be considered as "multilevel." In a second step, we assess the extent to which family planning interventions have successfully incorporated a social-ecological framework. To this end, the 63 studies featured in Mwaikambo et al.'s systematic review on family planning interventions were reexamined. This assessment indicates that the multilevel or social-ecological perspective has seldom been translated into interventions. Specifically, the majority of interventions involved some form of activity at the community and/or organizational level, yet targeted and measured intrapersonal change as opposed to explicitly targeting/measuring environmental modification. © 2016 Society for Public Health Education.
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level was used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.
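The intraclass correlation that motivates the choice between CFA and MCFA can be illustrated with the classic ANOVA estimator on one indicator; the data below are simulated, and a real MCFA would estimate these variance components within the factor model rather than per indicator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-level data: students nested in schools, one observed indicator.
n_schools, n_students = 50, 30
school_effect = rng.normal(scale=1.0, size=n_schools)  # between-level variance 1
scores = school_effect[:, None] + rng.normal(scale=2.0, size=(n_schools, n_students))

# ANOVA-style estimate of the intraclass correlation ICC(1) for balanced
# groups of size k: ICC = (MSB - MSW) / (MSB + (k - 1) * MSW).
group_means = scores.mean(axis=1)
grand_mean = scores.mean()
msb = n_students * np.sum((group_means - grand_mean) ** 2) / (n_schools - 1)
msw = np.sum((scores - group_means[:, None]) ** 2) / (n_schools * (n_students - 1))
icc = (msb - msw) / (msb + (n_students - 1) * msw)
```

Here the generating ICC is 1 / (1 + 4) = 0.2; a non-negligible estimate like this is the usual signal that ignoring the between level (plain CFA on pooled data) would be risky and a multilevel structure should be examined.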
Using Visual Analysis to Evaluate and Refine Multilevel Models of Single-Case Studies
ERIC Educational Resources Information Center
Baek, Eun Kyeng; Petit-Bois, Merlande; Van den Noortgate, Wim; Beretvas, S. Natasha; Ferron, John M.
2016-01-01
In special education, multilevel models of single-case research have been used as a method of estimating treatment effects over time and across individuals. Although multilevel models can accurately summarize the effect, it is known that if the model is misspecified, inferences about the effects can be biased. Concern with the potential for model…
ERIC Educational Resources Information Center
Schölmerich, Vera L. N.; Kawachi, Ichiro
2016-01-01
Multilevel interventions are inspired by socio-ecological models, and seek to create change on various levels--for example by increasing the health literacy of individuals as well as modifying the social norms within a community. Despite becoming a buzzword in public health, actual multilevel interventions remain scarce. In this commentary, we…
Multilevel Modeling and School Psychology: A Review and Practical Example
ERIC Educational Resources Information Center
Graves, Scott L., Jr.; Frohwerk, April
2009-01-01
The purpose of this article is to provide an overview of the state of multilevel modeling in the field of school psychology. The authors provide a systematic assessment of published research of multilevel modeling studies in 5 journals devoted to the research and practice of school psychology. In addition, a practical example from the nationally…
Teaching ESL in a Multilevel Classroom. Adult Education Series #13. Refugee Education Guide.
ERIC Educational Resources Information Center
Center for Applied Linguistics, Washington, DC. Language and Orientation Resource Center.
Adult refugee English as a second language (ESL) programs are often mandated to serve all who sign up for instruction, a requirement that results in multilevel classes. This guide describes and discusses this and other factors which contribute to the existence of multilevel and/or heterogeneous classes, and provides some practical approaches and…
Intermediate and advanced topics in multilevel logistic regression analysis
Merlo, Juan
2017-01-01
Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28543517
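Two of the measures described here, the variance partition coefficient and the median odds ratio, have closed forms in the latent-variable formulation of the multilevel logistic model; the cluster-level variance below is an illustrative value:

```python
import math
from scipy.stats import norm

# Cluster-level (random-intercept) variance on the logit scale, illustrative.
sigma2_u = 0.8

# Variance partition coefficient: share of latent variance lying between
# clusters; the level-1 logistic variance is fixed at pi^2 / 3.
vpc = sigma2_u / (sigma2_u + math.pi ** 2 / 3)

# Median odds ratio: median odds ratio between two identical subjects drawn
# from two randomly chosen different clusters (always >= 1).
mor = math.exp(math.sqrt(2 * sigma2_u) * norm.ppf(0.75))
```

For sigma2_u = 0.8 this gives a VPC of about 0.20 and an MOR of about 2.35, i.e. roughly a fifth of the latent variation sits between clusters, and moving between two random clusters changes the odds by a median factor above two, which is the "general contextual effect" the abstract refers to.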
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
Intermediate and advanced topics in multilevel logistic regression analysis.
Austin, Peter C; Merlo, Juan
2017-09-10
Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
A Mixed Finite Volume Element Method for Flow Calculations in Porous Media
NASA Technical Reports Server (NTRS)
Jones, Jim E.
1996-01-01
A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large scale irregularities of the geology, such as faults, fractures, and layers suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology as developed by McCormick in conjunction with mixed methods which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large size simulations of flow in porous media.
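The coarse/fine pairing described here can be illustrated with the simplest instance of such a multilevel algorithm: a two-grid correction scheme for a 1D Poisson problem, with weighted Jacobi smoothing on the fine level and an exact solve on the coarse level. This sketch uses standard finite differences at both levels, not the mixed FVE discretization of the paper:

```python
import numpy as np

# 1D Poisson -u'' = f on (0, 1), Dirichlet BCs, standard 3-point stencil.
def poisson_matrix(n):
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=0.8):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(A_f, b, x, A_c, restrict, prolong):
    # Pre-smooth on the fine level, correct the residual on the coarse level,
    # then post-smooth on the fine level.
    x = jacobi(A_f, b, x)
    r_c = restrict @ (b - A_f @ x)
    x = x + prolong @ np.linalg.solve(A_c, r_c)
    return jacobi(A_f, b, x)

nf = 31                     # fine interior points; coarse grid has (nf - 1) // 2
nc = (nf - 1) // 2
A_f, A_c = poisson_matrix(nf), poisson_matrix(nc)

# Linear interpolation prolongation; full-weighting restriction is 0.5 * P^T.
P = np.zeros((nf, nc))
for j in range(nc):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T

b = np.ones(nf)
x = np.zeros(nf)
for _ in range(10):
    x = two_grid(A_f, b, x, A_c, R, P)
err = np.linalg.norm(x - np.linalg.solve(A_f, b))
```

The smoother damps high-frequency error the coarse grid cannot represent, while the coarse solve removes the smooth error cheaply; cycling the two gives a grid-size-independent convergence rate, which is what makes such schemes attractive as fast solvers for large porous-media simulations.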
De Mitri, N; Prampolini, G; Monti, S; Barone, V
2014-08-21
The properties of a low molecular weight organic dye, namely 4-naphthyloxy-1-methoxy-2,2,6,6-tetramethylpiperidine, covalently bound to an apolar polyolefin were investigated by means of a multi-level approach, combining classical molecular dynamics simulations, based on purposely parameterized force fields, with quantum mechanical calculations based on density functional theory (DFT) and its time-dependent extension (TD-DFT). The structure and dynamics of the dye in its embedding medium were analyzed and discussed, taking the entangling effect of the surrounding polymer into account and comparing the results to those obtained in a different environment, i.e. toluene solution. Finally, the influence of long-lived cages found in the polymeric embedding on the photophysical properties, in terms of the dye's slow and fast internal dynamics, was investigated by comparing computed IR and UV spectra with their experimental counterparts.
NASA Astrophysics Data System (ADS)
Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe
2017-08-01
A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments then becomes efficient because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
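The ray-casting classification mentioned above can be sketched in 2D for clarity (the paper applies it in 3D to volumes bounded by parametric surfaces, with a hierarchical scheme to cut the number of evaluations); the polygon and test points below are illustrative only:

```python
# Even-odd ray casting: a point is inside a closed boundary if a ray
# cast from it crosses the boundary an odd number of times.

def point_in_polygon(pt, poly):
    """Cast a horizontal ray to +x and count edge crossings (even-odd rule)."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the ray's height, crossing to the right of x?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Hypothetical target: a 2x2 square; its "parallelepiped centres" would be
# classified by running this test per centre.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The hierarchical algorithm in the paper exists precisely because this per-point test is cheap for polygons but costly when each crossing requires evaluating a parametric surface.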
Sadler, Richard C; Clark, Andrew F; Wilk, Piotr; O'Connor, Colleen; Gilliland, Jason A
2016-06-09
This study examines the influence of adolescents' exposure to unhealthy food outlets on junk food purchasing during trips between home and school, with particular attention to how exposure and purchasing differ according to child's biological sex, mode of transportation, and direction to or from school. Between 2010 and 2013, students (n = 654) aged 9-13 years from 25 schools in London and Middlesex County, ON, completed a socio-demographic survey and an activity diary (to identify food purchases), and were observed via a global positioning system for 2 weeks (to track routes for trips to/from school). Spatial data on routes and purchase data were integrated with a validated food outlet database in a geographic information system, and exposure was measured as the minutes a child spent within 50 m of an unhealthy food outlet (i.e., fast food restaurants, variety stores). For trips involving junk food exposure (n = 4588), multilevel logistic regression was used to assess the relationship between exposure and purchasing. Multilevel analyses indicated that adolescents' duration of exposure to unhealthy food outlets between home and school had a significant effect on the likelihood of junk food purchasing. This relationship remained significant when the data were stratified by sex (female/male), trip direction (to/from school) and travel mode (active/car), with the exception of adolescents who travelled by bus. Policies and programs that mitigate the concentration of unhealthy food outlets close to schools are critical for encouraging healthy eating behaviours among children and reducing diet-related health issues such as obesity.
Buman, Matthew P.; Mullane, Sarah L.; Toledo, Meynard J.; Rydell, Sarah A.; Gaesser, Glenn A.; Crespo, Noe C.; Hannan, Peter; Feltes, Linda; Vuong, Brenna; Pereira, Mark A
2016-01-01
Background American workers spend 70–80% of their time at work being sedentary. Traditional approaches to increase moderate-vigorous physical activity (MVPA) may be perceived to be harmful to productivity. Approaches that target reductions in sedentary behavior and/or increases in standing or light-intensity physical activity [LPA] may not interfere with productivity and may be more feasible to achieve through small changes accumulated throughout the workday. Methods/Design This group randomized trial (i.e., cluster randomized trial) will test the relative efficacy of two sedentary behavior focused interventions in 24 worksites across two states (N=720 workers). The MOVE+ intervention is a multilevel individual, social, environmental, and organizational intervention targeting increases in light-intensity physical activity in the workplace. The STAND+ intervention is the MOVE+ intervention with the addition of the installation and use of sit-stand workstations to reduce sedentary behavior and enhance light-intensity physical activity opportunities. Our primary outcome will be objectively-measured changes in sedentary behavior and light-intensity physical activity over 12 months, with additional process measures at 3 months and longer-term sustainability outcomes at 24 months. Our secondary outcomes will be a clustered cardiometabolic risk score (comprised of fasting glucose, insulin, triglycerides, HDL-cholesterol, and blood pressure), workplace productivity, and job satisfaction. Discussion This study will determine the efficacy of a multilevel workplace intervention (including the use of a sit-stand workstation) to reduce sedentary behavior and increase LPA and concomitant impact on cardiometabolic health, workplace productivity, and satisfaction. PMID:27940181
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general-misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
ERIC Educational Resources Information Center
Miller, Jeffrey R.; Piper, Tinka Markham; Ahern, Jennifer; Tracy, Melissa; Tardiff, Kenneth J.; Vlahov, David; Galea, Sandro
2005-01-01
Evidence on the relationship between income inequality and suicide is inconsistent. Data from the New York City Office of the Chief Medical Examiner for all fatal injuries was collected to conduct a multilevel case-control study. In multilevel models, suicide decedents (n = 374) were more likely than accident controls (n = 453) to reside in…
ERIC Educational Resources Information Center
Bulotsky-Shearer, Rebecca J.; Wen, Xiaoli; Faria, Ann-Marie; Hahs-Vaughn, Debbie L.; Korfmacher, Jon
2012-01-01
Guided by a developmental and ecological model, the study employed latent profile analysis to identify a multilevel typology of family involvement and Head Start classroom quality. Using the nationally representative Head Start Family and Child Experiences Survey (FACES 1997; N = 1870), six multilevel latent profiles were estimated, characterized…
ERIC Educational Resources Information Center
Park, Jungkyu; Yu, Hsiu-Ting
2016-01-01
The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) that is used to analyze nested data structures. The nonparametric version of an MLCM assumes a discrete latent variable at a higher-level nesting structure to account for the dependency among observations nested within a higher-level unit. In…
Multilevel filtering elliptic preconditioners
NASA Technical Reports Server (NTRS)
Kuo, C. C. Jay; Chan, Tony F.; Tong, Charles
1989-01-01
A class of preconditioners is presented for elliptic problems built on ideas borrowed from the digital filtering theory and implemented on a multilevel grid structure. They are designed to be both rapidly convergent and highly parallelizable. The digital filtering viewpoint allows the use of filter design techniques for constructing elliptic preconditioners and also provides an alternative framework for understanding several other recently proposed multilevel preconditioners. Numerical results are presented to assess the convergence behavior of the new methods and to compare them with other preconditioners of multilevel type, including the usual multigrid method as preconditioner, the hierarchical basis method and a recent method proposed by Bramble-Pasciak-Xu.
Multi-Level Sequential Pattern Mining Based on Prime Encoding
NASA Astrophysics Data System (ADS)
Lianglei, Sun; Yun, Li; Jiang, Yin
Encoding serves not only to express the hierarchical relationship but also to facilitate identifying relationships between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that a single division operation can decide the parent-child relationship between different levels under prime encoding, and we present the PMSM and CROSS-PMSM algorithms, both based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from the sequence database.
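The divisibility property the abstract relies on can be illustrated with a small sketch; the toy hierarchy and function names below are illustrative, not taken from the PMSM implementation:

```python
# Prime encoding of a hierarchy: each node's code is the product of the
# fresh primes along its root path, so "is a an ancestor of b?" reduces
# to one modulo (division) test: codes[b] % codes[a] == 0.

def primes(n):
    """First n primes by trial division (enough for a small hierarchy)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Hypothetical concept hierarchy, parents listed before children.
hierarchy = {
    "food": None,
    "fruit": "food",
    "bread": "food",
    "apple": "fruit",
}
prime_iter = iter(primes(len(hierarchy)))
codes = {}
for node, parent in hierarchy.items():
    p = next(prime_iter)
    codes[node] = p if parent is None else codes[parent] * p
# food=2, fruit=2*3=6, bread=2*5=10, apple=2*3*7=42

def is_ancestor(a, b):
    """One division decides the parent-child (ancestor) relationship."""
    return codes[b] % codes[a] == 0
```

Because ancestor codes divide descendant codes and unrelated codes do not, level relationships can be checked during pattern mining without walking the hierarchy.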
Planarization of metal films for multilevel interconnects by pulsed laser heating
Tuckerman, David B.
1987-01-01
In the fabrication of multilevel integrated circuits, each metal layer is planarized by heating to momentarily melt the layer. The layer is melted by sweeping laser pulses of suitable width, typically about 1 microsecond duration, over the layer in small increments. The planarization of each metal layer eliminates irregular and discontinuous conditions between successive layers. The planarization method is particularly applicable to circuits having ground or power planes and allows for multilevel interconnects. Dielectric layers can also be planarized to produce a fully planar multilevel interconnect structure. The method is useful for the fabrication of VLSI circuits, particularly for wafer-scale integration.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-02-22
To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose-response effect) for data from a stepped-wedge design with a hierarchical structure. The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Routinely and annually collected national data on China from 2008 to 2012. 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Agreement and differences in estimates of dose-response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). The estimated dose-response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2-4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose-response among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose-response effects were pinpointed only by the multilevel RM model. 
For examining dose-response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
RATFOR user's guide version 2.0
NASA Technical Reports Server (NTRS)
Helmle, L. C.
1985-01-01
This document is a user's guide for RATFOR at Ames Research Center. The main part of the document is a general description of RATFOR, and the appendix is devoted to a machine-specific implementation for the Cray X-MP. The general stylistic features of RATFOR are discussed, including the block structure, keywords, source code format, and the notion of tokens. There is a section on the basic control structures (IF-ELSE, ELSE IF, WHILE, FOR, DO, REPEAT-UNTIL, BREAK, NEXT), and there is a section on the statements that extend FORTRAN's capabilities (DEFINE, MACRO, INCLUDE, STRING). The appendix discusses everything needed to compile and run a basic job, the preprocessor options, the supported character sets, the generated listings, fatal errors, program limitations, and the differences from standard FORTRAN.
Crew activity and motion effects on the space station
NASA Technical Reports Server (NTRS)
Rochon, Brian V.; Scheer, Steven A.
1987-01-01
Among the significant sources of internal disturbance that must be considered in the design of space station vibration control systems are the loads induced on the structure by various crew activities. Flight experiment T013, flown on the second manned mission of Skylab, measured force and moment time histories for a range of preplanned crew motions and activities. This experiment has proved invaluable as a source of on-orbit crew-induced loads and has allowed a space station forcing-function database to be built. This database enables forced responses attributable to crew activity, such as accelerations and deflections, to be calculated. The flight experiment, the resultant database and structural-model pre-processor, analysis examples, and areas of combined research are described.
Intensity dependent spread theory
NASA Technical Reports Server (NTRS)
Holben, Richard
1990-01-01
The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.
ERIC Educational Resources Information Center
Svehlik, Martin; Steinwender, Gerhard; Kraus, Tanja; Saraph, Vinay; Lehmann, Thomas; Linhart, Wolfgang E.; Zwick, Ernst B.
2011-01-01
Aim: Information on the timing and long-term outcome of single-event multilevel surgery in children with bilateral spastic cerebral palsy (CP) walking with flexed knee gait is limited. Based on our clinical experience, we hypothesized that older children with bilateral spastic CP would benefit more from single-event multilevel surgery than younger…
Highly-Efficient and Modular Medium-Voltage Converters
2015-09-28
"HVDC modular multilevel converter in decoupled double synchronous reference frame for voltage oscillation reduction," IEEE Trans. Ind. Electron., vol. 29, pp. 77-88, Jan 2014. [10] M. Guan and Z. Xu, "Modeling and control of a modular multilevel converter-based HVDC system under..." "Modular multilevel converter design for VSC HVDC applications," IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. 3, pp.
ERIC Educational Resources Information Center
Karakolidis, Anastasios; Pitsia, Vasiliki; Emvalotis, Anastassios
2016-01-01
The main aim of the present study was to carry out an in-depth examination of mathematics underperformance in Greece. By applying a binary multilevel model to the PISA 2012 data, this study investigated the factors which were linked to low achievement in mathematics. The multilevel analysis revealed that students' gender, immigration status,…
Parker, Scott L; Adogwa, Owoicho; Davis, Brandon J; Fulchiero, Erin; Aaronson, Oran; Cheng, Joseph; Devin, Clinton J; McGirt, Matthew J
2013-02-01
Two-year cost-utility study comparing minimally invasive (MIS) versus open multilevel hemilaminectomy in patients with degenerative lumbar spinal stenosis. The objective of the study was to determine whether MIS versus open multilevel hemilaminectomy for degenerative lumbar spinal stenosis is a cost-effective advancement in lumbar decompression surgery. MIS multilevel hemilaminectomy for degenerative lumbar spinal stenosis allows for effective treatment of back and leg pain while theoretically minimizing blood loss, tissue injury, and postoperative recovery. No studies have evaluated comprehensive healthcare costs associated with multilevel hemilaminectomy procedures, nor assessed cost-effectiveness of MIS versus open multilevel hemilaminectomy. Fifty-four consecutive patients with lumbar stenosis undergoing multilevel hemilaminectomy through an MIS paramedian tubular approach (n=27) versus a midline open approach (n=27) were included. Total back-related medical resource utilization, missed work, and health state values [quality-adjusted life years (QALYs), calculated from EuroQol-5D with US valuation] were assessed after 2-year follow-up. Two-year resource use was multiplied by unit costs based on Medicare national allowable payment amounts (direct cost), and work-day losses were multiplied by the self-reported gross-of-tax wage rate (indirect cost). The difference in mean total cost per QALY gained for MIS versus open hemilaminectomy was assessed as the incremental cost-effectiveness ratio [ICER: (COST(MIS)-COST(OPEN))/(QALY(MIS)-QALY(OPEN))]. MIS versus open cohorts were similar at baseline. MIS and open hemilaminectomy were associated with an equivalent cumulative gain of 0.72 QALYs 2 years after surgery. Mean direct medical costs, indirect societal costs, and total 2-year cost ($23,109 vs. $25,420; P=0.21) were similar between MIS and open hemilaminectomy.
MIS versus open approach was associated with similar total costs and utility, making it a cost equivalent technology compared with the traditional open approach. MIS versus open multilevel hemilaminectomy was associated with similar cost over 2 years while providing equivalent improvement in QALYs. In our experience, MIS versus open multilevel hemilaminectomy is a cost equivalent technology for patients with lumbar stenosis-associated radicular pain.
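The ICER defined in the abstract is simple arithmetic; a minimal sketch (with the study's headline figures plugged in) shows why equal QALY gains reduce the comparison to cost difference alone:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained.
    When QALY gains are equal, the ratio is undefined and the comparison
    reduces to the cost difference (a cost-equivalence question)."""
    dq = qaly_new - qaly_old
    dc = cost_new - cost_old
    if dq == 0:
        return None, dc
    return dc / dq, dc

# Figures reported above: equal 0.72 QALY gain, $23,109 (MIS) vs $25,420 (open).
ratio, dcost = icer(23109, 25420, 0.72, 0.72)
# ratio is undefined; dcost = -2311 (MIS cheaper on average, not significantly so)
```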
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general-purpose solver for the solution of steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
Bezrukova, Katerina; Spell, Chester S; Caldwell, David; Burger, Jerry M
2016-01-01
Integrating the literature on faultlines, conflict, and pay, we drew on the basic principles of multilevel theory and differentiated between group- and organizational-level faultlines to introduce a novel multilevel perspective on faultlines. Using multisource, multilevel data on 30 Major League Baseball (MLB) teams, we found that group-level faultlines were negatively associated with group performance, and that internally focused conflict exacerbated but externally focused conflict mitigated this effect. Organizational-level faultlines were negatively related to organizational performance, and were most harmful in organizations with high levels of compensation. Implications for groups and teams in the sports/entertainment and other industries are discussed. (c) 2016 APA, all rights reserved).
Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel
Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G.; Ruggeri, Kai
2016-01-01
Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces the guidelines for application of multilevel techniques in public health research by presenting an application of multilevel modeling in analyzing the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed. PMID:27252672
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Multilevel Interventions To Address Health Disparities Show Promise In Improving Population Health.
Paskett, Electra; Thompson, Beti; Ammerman, Alice S; Ortega, Alexander N; Marsteller, Jill; Richardson, DeJuran
2016-08-01
Multilevel interventions are those that affect at least two levels of influence: for example, the patient and the health care provider. They can be experimental designs or natural experiments caused by changes in policy, such as the implementation of the Affordable Care Act or local policies. Measuring the effects of multilevel interventions is challenging, because they allow for interaction among levels, and the impact of each intervention must be assessed and translated into practice. We discuss how two projects from the National Institutes of Health's Centers for Population Health and Health Disparities used multilevel interventions to reduce health disparities. The interventions, which focused on the uptake of the human papillomavirus vaccine and community-level dietary change, had mixed results. The design and implementation of multilevel interventions are facilitated by input from the community, and more advanced methods and measures are needed to evaluate the impact of the various levels and components of such interventions. Project HOPE—The People-to-People Health Foundation, Inc.
Extending the Multi-level Method for the Simulation of Stochastic Biological Systems.
Lester, Christopher; Baker, Ruth E; Giles, Michael B; Yates, Christian A
2016-08-01
The multi-level method for discrete-state systems, first introduced by Anderson and Higham (SIAM Multiscale Model Simul 10(1):146-179, 2012), is a highly efficient simulation technique that can be used to elucidate statistical characteristics of biochemical reaction networks. A single point estimator is produced in a cost-effective manner by combining a number of estimators of differing accuracy in a telescoping sum, and, as such, the method has the potential to revolutionise the field of stochastic simulation. In this paper, we present several refinements of the multi-level method which render it easier to understand and implement, and also more efficient. Given the substantial and complex nature of the multi-level method, the first part of this work reviews existing literature, with the aim of providing a practical guide to the use of the multi-level method. The second part provides the means for a deft implementation of the technique and concludes with a discussion of a number of open problems.
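The telescoping sum at the heart of the multi-level method can be sketched on a toy continuous-state problem (the paper itself targets discrete-state biochemical networks); everything below, from the SDE to the sample counts, is illustrative:

```python
import random, math

# Multi-level Monte Carlo telescoping estimator on a toy SDE
# dX = -X dt + dW, solved by Euler-Maruyama. Level l uses 2**l steps;
# the coarse and fine paths on each level share Brownian increments,
# so the correction terms E[X_l - X_{l-1}] have small variance and
# need few samples.

def euler_pair(level, T=1.0, x0=1.0):
    """One coupled (fine, coarse) sample of X(T); coarse is None at level 0."""
    n_fine = 2 ** level
    dt_f = T / n_fine
    xf, xc = x0, x0
    inc_pair = []
    for _ in range(n_fine):
        dw = random.gauss(0.0, math.sqrt(dt_f))
        xf += -xf * dt_f + dw
        inc_pair.append(dw)
        if level > 0 and len(inc_pair) == 2:
            dt_c = 2 * dt_f          # coarse step absorbs two fine increments
            xc += -xc * dt_c + sum(inc_pair)
            inc_pair = []
    return xf, (xc if level > 0 else None)

def mlmc(max_level, samples_per_level):
    """E[X_L] ~ E[X_0] + sum over l of E[X_l - X_{l-1}] (telescoping sum)."""
    total = 0.0
    for level, n in enumerate(samples_per_level[: max_level + 1]):
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(level)
            acc += xf if xc is None else (xf - xc)
        total += acc / n
    return total

random.seed(0)
est = mlmc(4, [4000, 2000, 1000, 500, 250])
# True mean of this Ornstein-Uhlenbeck process at T=1 is exp(-1) ~ 0.368
```

Note the decreasing sample counts: most of the work is done with cheap coarse samples, which is where the method's cost savings come from.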
Xu, Hongwei; Logan, John R.; Short, Susan E.
2014-01-01
Research on neighborhoods and health increasingly acknowledges the need to conceptualize, measure, and model spatial features of social and physical environments. In ignoring underlying spatial dynamics, we run the risk of biased statistical inference and misleading results. In this paper, we propose an integrated multilevel-spatial approach for Poisson models of discrete responses. In an empirical example of child mortality in 1880 Newark, New Jersey, we compare this multilevel-spatial approach with the more typical aspatial multilevel approach. Results indicate that spatially-defined egocentric neighborhoods, or distance-based measures, outperform administrative areal units, such as census units. In addition, although results did not vary by specific definitions of egocentric neighborhoods, they were sensitive to geographic scale and modeling strategy. Overall, our findings confirm that adopting a spatial-multilevel approach enhances our ability to disentangle the effect of space from that of place, and point to the need for more careful spatial thinking in population research on neighborhoods and health. PMID:24763980
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En
2015-06-01
Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.
2012-01-01
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
Johnson, Sara B; Little, Todd D; Masyn, Katherine; Mehta, Paras D; Ghazarian, Sharon R
2017-06-01
Characterizing the determinants of child health and development over time, and identifying the mechanisms by which these determinants operate, is a research priority. The growth of precision medicine has increased awareness and refinement of conceptual frameworks, data management systems, and analytic methods for multilevel data. This article reviews key methodological challenges in cohort studies designed to investigate multilevel influences on child health and strategies to address them. We review and summarize methodological challenges that could undermine prospective studies of the multilevel determinants of child health and ways to address them, borrowing approaches from the social and behavioral sciences. Nested data, variation in intervals of data collection and assessment, missing data, construct measurement across development and reporters, and unobserved population heterogeneity pose challenges in prospective multilevel cohort studies with children. We discuss innovations in missing data, innovations in person-oriented analyses, and innovations in multilevel modeling to address these challenges. Study design and analytic approaches that facilitate the integration across multiple levels, and that account for changes in people and the multiple, dynamic, nested systems in which they participate over time, are crucial to fully realize the promise of precision medicine for children and adolescents. Copyright © 2017 Elsevier Inc. All rights reserved.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
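The Gauss-Seidel relaxation named in the abstract has a very compact form. The sketch below shows it for a 1-D Poisson model problem rather than the paper's cylindrical FVE stencils (which are not reproduced here); the grid size and sweep count are illustrative.

```python
def gauss_seidel_sweep(u, f, h):
    """One Gauss-Seidel sweep for -u'' = f on a uniform 1-D grid with
    spacing h and fixed (Dirichlet) endpoints.  Updated values are used
    immediately, the feature distinguishing Gauss-Seidel from Jacobi."""
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def solve(n=33, sweeps=2000):
    """Relax to the solution of -u'' = 1, u(0) = u(1) = 0, whose exact
    solution is u(x) = x(1 - x)/2, with maximum 0.125 at x = 0.5."""
    h = 1.0 / (n - 1)
    f = [1.0] * n          # constant source term
    u = [0.0] * n          # zero initial guess; boundary values stay 0
    for _ in range(sweeps):
        gauss_seidel_sweep(u, f, h)
    return u
```

A multigrid V-cycle, as in the abstract, would apply only a few such sweeps as a smoother on each grid level instead of iterating to convergence on the finest grid; that difference is the computational saving the paper quantifies.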
Yano, Elizabeth M; Green, Lawrence W; Glanz, Karen; Ayanian, John Z; Mittman, Brian S; Chollette, Veronica; Rubenstein, Lisa V
2012-05-01
The promise of widespread implementation of efficacious interventions across the cancer continuum into routine practice and policy has yet to be realized. Multilevel influences, such as communities and families surrounding patients or health-care policies and organizations surrounding provider teams, may determine whether effective interventions are successfully implemented. Greater recognition of the importance of these influences in advancing (or hindering) the impact of single-level interventions has motivated the design and testing of multilevel interventions designed to address them. However, implementing research evidence from single- or multilevel interventions into sustainable routine practice and policy presents substantive challenges. Furthermore, relatively few multilevel interventions have been conducted along the cancer care continuum, and fewer still have been implemented, disseminated, or sustained in practice. The purpose of this chapter is, therefore, to illustrate and examine the concepts underlying the implementation and spread of multilevel interventions into routine practice and policy. We accomplish this goal by using a series of cancer and noncancer examples that have been successfully implemented and, in some cases, spread widely. Key concepts across these examples include the importance of phased implementation, recognizing the need for pilot testing; explicit engagement of key stakeholders within and between each intervention level; visible and consistent leadership and organizational support, including financial and human resources; better understanding of the policy context, fiscal climate, and incentives underlying implementation; explication of handoffs from researchers to accountable individuals within and across levels; ample integration of multilevel theories guiding implementation and evaluation; and strategies for long-term monitoring and sustainability.
Multilevel cervical laminectomy and fusion with posterior cervical cages
Bou Monsef, Jad N; Siemionow, Krzysztof B
2017-01-01
Context: Cervical spondylotic myelopathy (CSM) is a progressive disease that can result in significant disability. Single-level stenosis can be effectively decompressed through either anterior or posterior techniques. However, multilevel pathology can be challenging, especially in the presence of significant spinal stenosis. Three-level anterior decompression and fusion are associated with higher nonunion rates and prolonged dysphagia. Posterior multilevel laminectomies with foraminotomies jeopardize the bone stock required for stable fixation with lateral mass screws (LMSs). Aims: This is the first case series of multilevel laminectomy and fusion for CSM instrumented with posterior cervical cages. Settings and Design: Three patients presented with a history of worsening neck pain, numbness in bilateral upper extremities and gait disturbance, and examination findings consistent with myeloradiculopathy. Cervical magnetic resonance imaging demonstrated multilevel spondylosis resulting in moderate to severe bilateral foraminal stenosis at three cervical levels. Materials and Methods: The patients underwent a multilevel posterior cervical laminectomy and instrumented fusion with intervertebral cages placed between bilateral facet joints over three levels. Oswestry disability index and visual analog scores were collected preoperatively and at each follow-up. Pre- and post-operative images were analyzed for changes in cervical alignment and presence of arthrodesis. Results: Postoperatively, all patients showed marked improvement in neurological symptoms and neck pain. They had full resolution of radicular symptoms by 6 weeks postoperatively. At 12-month follow-up, they demonstrated solid arthrodesis on X-rays and computed tomography scan. Conclusions: Posterior cervical cages may be an alternative option to LMSs in multilevel cervical laminectomy and fusion for cervical spondylotic myeloradiculopathy. PMID:29403242
Faour, Mhamad; Anderson, Joshua T; Haas, Arnold R; Percy, Rick; Woods, Stephen T; Ahn, Uri M; Ahn, Nicholas U
2017-05-01
Retrospective cohort comparative study. To evaluate presurgical and surgical factors that affect return-to-work (RTW) status after multilevel cervical fusion, and to compare outcomes after multilevel cervical fusion for degenerative disc disease (DDD) versus radiculopathy. Cervical fusion provides more than 90% symptomatic relief for radiculopathy and myelopathy. However, cervical fusion for DDD without radiculopathy is considered controversial. In addition, multilevel fusion is associated with poorer surgical outcomes as the number of levels fused increases. Data on cervical comorbidities were collected from the Ohio Bureau of Workers' Compensation for subjects with work-related injuries. The study population included subjects who underwent multilevel cervical fusion. Patients with radiculopathy or DDD were identified. Multivariate logistic regression was performed to identify factors that affect RTW status. Surgical and functional outcomes were compared between groups. Stable RTW status within 3 years after multilevel cervical fusion was negatively affected by: fusion for DDD, age > 55 years, preoperative opioid use, initial psychological evaluation before surgery, injury-to-surgery interval > 2 years, and instrumentation. The DDD group had a lower rate of achieving stable RTW status (P = 0.0001) and of RTW within 1 year of surgery (P = 0.0003) compared with the radiculopathy group. DDD patients were less likely to achieve a stable RTW status [odds ratio, OR = 0.63 (0.50-0.79)] or to RTW within 1 year after surgery [OR = 0.65 (0.52-0.82)]. The DDD group had a higher rate of opioid use (P = 0.001) and a higher rate of disability after surgery (P = 0.002). Multiple detriments affect stable RTW status after multilevel cervical fusion, including DDD. DDD without radiculopathy was associated with lower RTW rates, lower likelihood of returning to work, higher disability, and higher opioid use after surgery. Multilevel cervical fusion for DDD may be counterproductive.
Future studies should investigate further treatment options for DDD and optimize patient selection criteria for surgical intervention. Level of Evidence: 3.
A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic
Qi, Jin-Peng; Qi, Jie; Zhang, Qing
2016-01-01
Change-point (CP) detection has attracted considerable attention in data mining and statistics; detecting abrupt changes quickly and efficiently in large-scale bioelectric signals is therefore of practical importance. Most existing methods, such as the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for information inspection on all kinds of bioelectric time series signals. PMID:27413364
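For reference, here is the classical brute-force change-point scan that maximizes a two-sample KS statistic over all split points. This is the slow baseline that the BSTKS framework is designed to beat, not the BSTKS algorithm itself; the segment-length guard is an illustrative choice.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    def ecdf(s, x):
        return bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

def scan_change_point(series, min_seg=5):
    """Try every admissible split and keep the one with the largest KS
    statistic.  This brute-force scan costs roughly O(n^2 log n), which
    is why faster schemes like BSTKS are attractive for long signals."""
    best_k, best_d = None, -1.0
    for k in range(min_seg, len(series) - min_seg):
        d = ks_statistic(series[:k], series[k:])
        if d > best_d:
            best_k, best_d = k, d
    return best_k, best_d
```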
Multilevel Modeling in Psychosomatic Medicine Research
Myers, Nicholas D.; Brincks, Ahnalee M.; Ames, Allison J.; Prado, Guillermo J.; Penedo, Frank J.; Benedict, Catherine
2012-01-01
The primary purpose of this manuscript is to provide an overview of multilevel modeling for Psychosomatic Medicine readers and contributors. The manuscript begins with a general introduction to multilevel modeling. Multilevel regression modeling at two-levels is emphasized because of its prevalence in psychosomatic medicine research. Simulated datasets based on some core ideas from the Familias Unidas effectiveness study are used to illustrate key concepts including: communication of model specification, parameter interpretation, sample size and power, and missing data. Input and key output files from Mplus and SAS are provided. A cluster randomized trial with repeated measures (i.e., three-level regression model) is then briefly presented with simulated data based on some core ideas from a cognitive behavioral stress management intervention in prostate cancer. PMID:23107843
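As a concrete illustration of the two-level random-intercept models emphasized above, the sketch below simulates y_ij = u_j + e_ij and recovers the intraclass correlation with a simple method-of-moments estimate. This is illustrative only, not the manuscript's Mplus or SAS code, and all parameter values are assumptions.

```python
import random
import statistics

def simulate_two_level(n_clusters=200, n_per=30, tau2=0.2, sigma2=0.8, seed=3):
    """Simulate y_ij = u_j + e_ij for a two-level random-intercept model,
    with cluster effects u_j ~ N(0, tau2) and residuals e_ij ~ N(0, sigma2)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_clusters):
        u = rng.gauss(0.0, tau2 ** 0.5)
        data.append([u + rng.gauss(0.0, sigma2 ** 0.5) for _ in range(n_per)])
    return data

def icc_estimate(data):
    """Method-of-moments ICC: between-cluster variance (variance of the
    cluster means, corrected for the within-cluster contribution) divided
    by the total variance."""
    n_per = len(data[0])
    within = statistics.fmean(statistics.variance(c) for c in data)
    tau2_hat = max(statistics.variance(statistics.fmean(c) for c in data)
                   - within / n_per, 0.0)
    return tau2_hat / (tau2_hat + within)
```

With the defaults the true ICC is 0.2 / (0.2 + 0.8) = 0.2, and the estimate should land near that value.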
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
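The combination step described above, adding and subtracting component-level energies with unit or fitted coefficients, reduces to a weighted sum E = sum_i c_i * E_i. A trivial sketch, with labels and numbers that are illustrative only:

```python
def multilevel_energy(components, coefficients):
    """Weighted combination E = sum_i c_i * E_i of component energies.

    Gaussian-2/3 style methods add and subtract levels with unit (+/-1)
    coefficients; multicoefficient methods fit noninteger c_i to data.
    The keys and values passed in are illustrative, not real levels."""
    assert components.keys() == coefficients.keys()
    return sum(coefficients[k] * components[k] for k in components)
```

For example, `multilevel_energy({"high": -2.0, "low": -1.5}, {"high": 1.0, "low": -1.0})` extrapolates by differencing two hypothetical levels.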
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. 
These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. 
The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
NASA Astrophysics Data System (ADS)
Ganev, Kostadin; Todorova, Angelina; Jordanov, Georgi; Gadzhev, Georgi; Syrakov, Dimiter; Miloshev, Nikolai; Prodanova, Maria
2010-05-01
The NATO SfP N 981393 project aims at developing a unified Balkan-region-oriented modelling system for operational response to accidental releases of harmful gases in the atmosphere, which would be able to: 1. Perform highly accurate and reliable risk analysis and assessment for selected "hot spots"; 2. Support fast emergency decisions with a short-term regional-scale forecast of the propagation of harmful gases in case of accidental release; 3. Perform, in an off-line mode, a more detailed and comprehensive analysis of the possible longer-term impacts on the environment and human health and make the results available to the authorities and the public. The present paper describes the setup and testing of the system, focusing mainly on the risk analysis mode. The modeling tool used in the system is the US EPA Models-3 system: WRF, CMAQ, and SMOKE (partly). The CB05 toxic chemical mechanism, including chlorine reactions, is employed. The emission input exploits the high-resolution TNO emission inventory. The meteorological pre-processor WRF is driven by NCAR Final Reanalysis data and performs calculations in 3 nested domains, covering respectively the regions of South-Eastern Europe, Bulgaria, and the area surrounding the particular site. The risk assessment for the region of the "Vereja Him" factory, Jambol, Bulgaria, is performed on the basis of one-year-long model calculations. The calculations with the CMAQ chemical transport model are performed for the two inner domains. An amount of 25 tons of chlorine is released twice daily in the innermost domain, and separate calculations are performed for every release. The results are averaged over one year in order to evaluate the probability of exceeding a regulatory threshold value in each grid point. The completion of this task in a relatively short period of time was made possible by using the newly developed Grid computational environment, which allows for shared use of facilities in the research community.
Austin, Peter C; Wagner, Philippe; Merlo, Juan
2017-03-15
Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
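The MHR discussed above has a closed form in terms of the cluster-level (frailty) variance sigma^2 on the log-hazard scale: MHR = exp(sqrt(2 * sigma^2) * z_0.75), paralleling the Median Odds Ratio formula. A minimal Python sketch of that formula (the abstract's own R code is not reproduced here):

```python
import math
from statistics import NormalDist

def median_hazard_ratio(frailty_variance):
    """Median Hazard Ratio from the between-cluster variance sigma^2 of
    a normally distributed random effect on the log-hazard scale:

        MHR = exp( sqrt(2 * sigma^2) * z_0.75 )

    where z_0.75 is the 75th percentile of the standard normal.  This
    mirrors the Median Odds Ratio for multilevel logistic models."""
    z75 = NormalDist().inv_cdf(0.75)
    return math.exp(math.sqrt(2.0 * frailty_variance) * z75)
```

With no clustering (variance 0) the MHR is 1; a frailty variance of 0.25, for instance, gives an MHR of about 1.61.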
Anderson, Emma L; Tilling, Kate; Fraser, Abigail; Macdonald-Wallis, Corrie; Emmett, Pauline; Cribb, Victoria; Northstone, Kate; Lawlor, Debbie A; Howe, Laura D
2013-07-01
Methods for the assessment of changes in dietary intake across the life course are underdeveloped. We demonstrate the use of linear-spline multilevel models to summarize energy-intake trajectories through childhood and adolescence and their application as exposures, outcomes, or mediators. The Avon Longitudinal Study of Parents and Children assessed children's dietary intake several times between ages 3 and 13 years, using both food frequency questionnaires (FFQs) and 3-day food diaries. We estimated energy-intake trajectories for 12,032 children using linear-spline multilevel models. We then assessed the associations of these trajectories with maternal body mass index (BMI), and later offspring BMI, and also their role in mediating the relation between maternal and offspring BMIs. Models estimated average and individual energy intake at 3 years, and linear changes in energy intake from age 3 to 7 years and from age 7 to 13 years. By including the exposure (in this example, maternal BMI) in the multilevel model, we were able to estimate the average energy-intake trajectories across levels of the exposure. When energy-intake trajectories are the exposure for a later outcome (in this case offspring BMI) or a mediator (between maternal and offspring BMI), results were similar, whether using a two-step process (exporting individual-level intercepts and slopes from multilevel models and using these in linear regression/path analysis), or a single-step process (multivariate multilevel models). Trajectories were similar when FFQs and food diaries were assessed either separately, or when combined into one model. Linear-spline multilevel models provide useful summaries of trajectories of dietary intake that can be used as an exposure, outcome, or mediator.
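The linear-spline parameterization described above (an intercept at age 3 and separate slopes for ages 3-7 and 7-13) corresponds to a simple design row per observation. The sketch below shows the fixed-effects part only, ignoring the random effects; the coefficient values in the usage note are illustrative, not estimates from the study.

```python
def linear_spline_basis(age, start=3.0, knot=7.0):
    """Design row [1, s1, s2] for a two-piece linear spline in age,
    matching the abstract's parameterization: an intercept at age
    `start`, a slope from `start` to `knot`, and a slope after `knot`."""
    s1 = max(min(age, knot) - start, 0.0)   # years accrued before the knot
    s2 = max(age - knot, 0.0)               # years accrued after the knot
    return [1.0, s1, s2]

def predict(coefs, age):
    """Fixed-effects prediction b0 + b1*s1 + b2*s2."""
    return sum(c * x for c, x in zip(coefs, linear_spline_basis(age)))
```

With illustrative coefficients [1000.0, 100.0, 50.0] (energy at age 3, change per year before the knot, change per year after it), the prediction at age 13 is 1000 + 4*100 + 6*50.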
Single-fraction stereotactic body radiotherapy for spinal metastases from renal cell carcinoma.
Balagamwala, Ehsan H; Angelov, Lilyana; Koyfman, Shlomo A; Suh, John H; Reddy, Chandana A; Djemil, Toufik; Hunter, Grant K; Xia, Ping; Chao, Samuel T
2012-12-01
Stereotactic body radiotherapy (SBRT) has emerged as an important treatment option for spinal metastases from renal cell carcinoma (RCC) as a means to overcome RCC's inherent radioresistance. The authors reviewed the outcomes of SBRT for the treatment of RCC metastases to the spine at their institution, and they identified factors associated with treatment failure. Fifty-seven patients (88 treatment sites) with RCC metastases to the spine received single-fraction SBRT. Pain relief was based on the Brief Pain Inventory and was adjusted for narcotic use according to the Radiation Therapy Oncology Group protocol 0631. Toxicity was scored according to Common Toxicity Criteria for Adverse Events version 4.0. Radiographic failure was defined as infield or adjacent (within 1 vertebral body [VB]) failure on follow-up MRI. Multivariate analyses were performed to correlate outcomes with the following variables: epidural, paraspinal, single-level, or multilevel disease (2-5 sites); neural foramen involvement; and VB fracture prior to SBRT. Kaplan-Meier analysis and Cox proportional hazards modeling were used for statistical analysis. The median follow-up and survival periods were 5.4 months (range 0.3-38 months) and 8.3 months (range 1.5-38 months), respectively. The median time to radiographic failure and unadjusted pain progression were 26.5 and 26.0 months, respectively. The median time to pain relief (from date of simulation) and duration of pain relief (from date of treatment) were 0.9 months (range 0.1-4.4 months) and 5.4 months (range 0.1-37.4 months), respectively. Multivariate analyses demonstrated that multilevel disease (hazard ratio [HR] 3.5, p = 0.02) and neural foramen involvement (HR 3.4, p = 0.02) were correlated with radiographic failure; multilevel disease (HR 2.3, p = 0.056) and VB fracture (HR 2.4, p = 0.046) were correlated with unadjusted pain progression. 
One patient experienced Grade 3 nausea and vomiting; no other Grade 3 or 4 toxicities were observed. Twelve treatment sites (14%) were complicated by subsequent vertebral fractures. Stereotactic body radiotherapy for RCC metastases to the spine offers fast and durable pain relief with minimal toxicity. Stereotactic body radiotherapy seems optimal for patients who have solitary or few spinal metastases. Patients with neural foramen involvement are at an increased risk for failure.
An Integrated Magnetic Circuit Model and Finite Element Model Approach to Magnetic Bearing Design
NASA Technical Reports Server (NTRS)
Provenza, Andrew J.; Kenny, Andrew; Palazzolo, Alan B.
2003-01-01
A code for designing magnetic bearings is described. The code generates curves from magnetic circuit equations relating important bearing performance parameters. Bearing parameters selected from the curves by a designer to meet the requirements of a particular application are input directly by the code into a three-dimensional finite element analysis preprocessor. This means that a three-dimensional computer model of the bearing being developed is immediately available for viewing. The finite element model solution can be used to show areas of magnetic saturation and make more accurate predictions of the bearing load capacity, current stiffness, position stiffness, and inductance than the magnetic circuit equations did at the start of the design process. In summary, the code combines one-dimensional and three-dimensional modeling methods for designing magnetic bearings.
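The magnetic-circuit stage of such a design flow rests on the standard air-gap relations. The following is a minimal pure-Python sketch, not the code described above; the horseshoe-actuator geometry, turn count, and gap are hypothetical, and iron reluctance, leakage, and fringing are ignored:

```python
MU0 = 1.25663706e-6  # vacuum permeability, H/m

def gap_flux_density(n_turns, current, gap):
    """Flux density (T) in each of two series air gaps of a simple
    horseshoe actuator, ignoring iron reluctance, leakage, and fringing."""
    return MU0 * n_turns * current / (2.0 * gap)

def pole_force(b, pole_area):
    """Attractive force (N) summed over the two pole faces, F = B^2 * A / mu0."""
    return b * b * pole_area / MU0

# Hypothetical design point: 100 turns, 1 A bias, 0.5 mm gap, 1 cm^2 pole faces.
b = gap_flux_density(100, 1.0, 0.5e-3)   # ~0.126 T
f = pole_force(b, 1.0e-4)                # ~1.26 N
```

A finite-element solution would then refine these estimates where the iron saturates, which is exactly the division of labor the abstract describes.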
Inertial measurement unit pre-processors and post-flight STS-1 comparisons
NASA Technical Reports Server (NTRS)
Findlay, J. T.; Mcconnell, J. G.
1981-01-01
The flight results show that the relative tri-redundant Inertial Measurement Unit (IMU) performance throughout the entire entry flight was within the expected accuracy. Comparisons are presented which show differences in the accumulated sensed velocity changes as measured by the tri-redundant IMUs (in Mean Equator and Equinox of 1950.0), differences in the equivalent inertial Euler angles as measured with respect to the M50 system, and finally, preliminary instrument calibrations determined relative to the ensemble average measurement set. Also, differences in the derived body axes rates and accelerations are presented. Because of the excellent performance of the IMUs during the STS-1 entry, the selection as to which particular IMU would best serve as the dynamic data source for entry reconstruction is arbitrary.
Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0
NASA Technical Reports Server (NTRS)
Nolan, Sandra K.
1988-01-01
The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.
[Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].
Guo, Lin; Kang, Lili; Wang, Dandan
2013-02-01
The phase information of magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping of the images often results in inaccurate phase information, so phase unwrapping is essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and the Prim minimum-spanning-tree algorithm were introduced to optimize the residues essential to the Goldstein branch-cut algorithm. Experimental results showed that residues and branch cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was obviously improved with the proposed method.
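For context, the residues that branch-cut methods operate on are the ±2π discrepancies accumulated around each elementary 2×2 loop of the wrapped phase map. A minimal pure-Python sketch of residue detection (a generic illustration, not the authors' implementation):

```python
import math

def wrap(d):
    """Wrap a phase difference into [-pi, pi)."""
    return (d + math.pi) % (2.0 * math.pi) - math.pi

def residues(phase):
    """Residue charge (+1, -1, or 0) at each 2x2 loop of a wrapped phase map,
    given as a list of rows of phase values in radians."""
    rows, cols = len(phase), len(phase[0])
    out = [[0] * (cols - 1) for _ in range(rows - 1)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Sum wrapped differences counterclockwise around the loop.
            s = (wrap(phase[i][j + 1] - phase[i][j])
                 + wrap(phase[i + 1][j + 1] - phase[i][j + 1])
                 + wrap(phase[i + 1][j] - phase[i + 1][j + 1])
                 + wrap(phase[i][j] - phase[i + 1][j]))
            out[i][j] = round(s / (2.0 * math.pi))
    return out
```

A phase vortex inside a loop yields ±1; smooth regions yield 0. Branch cuts are then placed to connect residues of opposite charge before integrating the phase.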
On the symbolic manipulation and code generation for elasto-plastic material matrices
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Saleeb, A. F.; Wang, P. S.; Tan, H. Q.
1991-01-01
A computerized procedure for symbolic manipulations and FORTRAN code generation of an elasto-plastic material matrix for finite element applications is presented. Special emphasis is placed on expression simplifications during intermediate derivations, optimal code generation, and interface with the main program. A systematic procedure is outlined to avoid redundant algebraic manipulations. Symbolic expressions of the derived material stiffness matrix are automatically converted to RATFOR code, which is then translated into FORTRAN statements through a preprocessor. To minimize the interface problem with the main program, a template file is prepared so that the translated FORTRAN statements can be merged into the file to form a subroutine (or a submodule). Three constitutive models, namely von Mises plasticity, the Drucker-Prager model, and a concrete plasticity model, are used as illustrative examples.
A generic multibody simulation
NASA Technical Reports Server (NTRS)
Hopping, K. A.; Kohn, W.
1986-01-01
Described is a dynamic simulation package which can be configured for orbital test scenarios involving multiple bodies. The rotational and translational state integration methods are selectable for each individual body and may be changed during a run if necessary. Characteristics of the bodies are determined by assigning components consisting of mass properties, forces, and moments, which are the outputs of user-defined environmental models. Generic model implementation is facilitated by a transformation processor which performs coordinate frame inversions. Transformations are defined in the initialization file as part of the simulation configuration. The simulation package includes an initialization processor, which consists of a command line preprocessor, a general purpose grammar, and a syntax scanner. These permit specifications of the bodies, their interrelationships, and their initial states in a format that is not dependent on a particular test scenario.
SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport
Provost, Alden M.
2002-01-01
SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distributions of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (.inp) file, an initial conditions (.ics) file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard Fortran 90 compiler. Executable code is available for Microsoft Windows.
Online Tools for Astronomy and Cosmochemistry
NASA Technical Reports Server (NTRS)
Meyer, B. S.
2005-01-01
Over the past year, the Webnucleo Group at Clemson University has been developing a web site with a number of interactive online tools for astronomy and cosmochemistry applications. The site uses SHP (Simplified Hypertext Preprocessor), which, because of its flexibility, allows us to embed almost any computer language into our web pages. For a description of SHP, please see http://www.joeldenny.com/ At our web site, an internet user may mine large and complex data sets, such as our stellar evolution models, and make graphs or tables of the results. The user may also run some of our detailed nuclear physics and astrophysics codes, such as our nuclear statistical equilibrium code, which is written in Fortran and C. Again, the user may make graphs and tables and download the results.
Finite-element modeling of the human neurocranium under functional anatomical aspects.
Mall, G; Hubig, M; Koebke, J; Steinbuch, R
1997-08-01
Due to its functional significance the human skull plays an important role in biomechanical research. The present work describes a new Finite-Element model of the human neurocranium. The dry skull of a middle-aged woman served as a pattern. The model was developed using only the preprocessor (Mentat) of a commercial FE-system (Marc). Unlike that of other FE models of the human skull mentioned in the literature, the geometry in this model was designed according to functional anatomical findings. Functionally important morphological structures representing loci minoris resistentiae, especially the foramina and fissures of the skull base, were included in the model. The results of two linear static loadcase analyses in the region of the skull base underline the importance of modeling from the functional anatomical point of view.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Davis, R. C.
1992-01-01
A user's manual is presented for MacPASCO, an interactive graphic preprocessor for panel design. MacPASCO creates input for PASCO, an existing computer code for structural analysis and sizing of longitudinally stiffened composite panels. MacPASCO provides a graphical user interface which simplifies the specification of panel geometry and reduces user input errors. The user draws the initial structural geometry on the computer screen, then uses a combination of graphic and text inputs to: refine the structural geometry; specify information required for analysis, such as panel load and boundary conditions; and define design variables and constraints for minimum-mass optimization. Only the use of MacPASCO is described, since the use of PASCO has been documented elsewhere.
Development of an algorithm for controlling a multilevel three-phase converter
NASA Astrophysics Data System (ADS)
Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat
2017-08-01
This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of each transistor at every moment in time when constructing a computer model of a three-phase multilevel converter. It also ensures in-phase operation of the three-phase converter and yields a sinusoidal voltage curve at the converter output.
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR Part 238, Figure 2B: Example of an Intermediate Level Seating Area of a Multi-Level Car Complying With Window Location Requirements (§§ 238.113 and 238.114).
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR Part 238, Figure 2A: Example of an Intermediate Level Seating Area of a Multi-Level Car Complying With Window Location Requirements (§§ 238.113 and 238.114).
Hierarchical models of very large problems, dilemmas, prospects, and an agenda for the future
NASA Technical Reports Server (NTRS)
Richardson, J. M., Jr.
1975-01-01
Interdisciplinary approaches to the modeling of global problems are discussed in terms of multilevel cooperation. A multilevel regionalized model of the Lake Erie Basin is analyzed along with a multilevel regionalized world modeling project. Other topics discussed include: a stratified model of interacting region in a world system, and the application of the model to the world food crisis in south Asia. Recommended research for future development of integrated models is included.
Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P
2009-04-01
Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
NASA Technical Reports Server (NTRS)
Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip; Cronk, Heather W.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert; Crisp, David;
2015-01-01
The retrieval of the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared can be biased due to contamination by clouds and aerosols within the instrument's field of view (FOV). Therefore, accurate aerosol and cloud screening of soundings is required prior to their use in the computationally expensive XCO2 retrieval algorithm. Robust cloud screening methods have been an important focus of the retrieval algorithm team for the National Aeronautics and Space Administration (NASA) Orbiting Carbon Observatory-2 (OCO-2), which was successfully launched into orbit on July 2, 2014. Two distinct spectrally based algorithms have been developed for the purpose of cloud clearing OCO-2 soundings. The A-Band Preprocessor (ABP) performs a retrieval of surface pressure using measurements in the 0.76 μm O2 A-band to distinguish changes in the expected photon path length. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) (IDP) algorithm is a non-scattering routine that operates on the O2 A-band as well as two CO2 absorption bands at 1.6 μm (weak CO2 band) and 2.0 μm (strong CO2 band) to provide band-dependent estimates of CO2 and H2O. Spectral ratios of retrieved CO2 and H2O identify measurements contaminated with cloud and scattering aerosols. Information from the two preprocessors is fed into a sounding selection tool to strategically down-select from the order of one million daily soundings collected by OCO-2 to a manageable number (on the order of 10 to 20%) to be processed by the OCO-2 L2 XCO2 retrieval algorithm. Regional biases or errors in the selection of clear-sky soundings will introduce errors in the final retrieved XCO2 values, ultimately yielding errors in the flux inversion models used to determine global sources and sinks of CO2.
In this work, collocated measurements from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the Aqua platform, and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), aboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite, are used as a reference to assess the accuracy, strengths, and weaknesses of the OCO-2 screening algorithms. The combination of the ABP and IDP algorithms is shown to provide very robust and complementary cloud filtering compared to the results from MODIS and CALIOP. With idealized algorithm tuning to allow throughputs of 20-25%, correct classification of scenes, i.e., accuracies, are found to be approximately 80-90% over several orbit repeat cycles in both the winter and spring for the three main viewing configurations of OCO-2: nadir-land, glint-land, and glint-water. Investigation unveiled no major spatial or temporal dependencies, although slight differences in the seasonal data sets do exist, and classification tends to be more problematic with increasing solar zenith angle and when surfaces are covered in snow and ice. An in-depth analysis of both a simulated data set and real OCO-2 measurements against CALIOP highlights the strength of the ABP in identifying high, thin clouds, while it often misses clouds near the surface even when the optical thickness is greater than 1. Fortunately, by combining the ABP with the IDP, the number of thick low clouds passing the preprocessors is partially mitigated.
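The band-ratio test performed by the IDP preprocessor can be caricatured in a few lines: a non-scattering retrieval should recover nearly identical CO2 (and H2O) amounts in each band for a clear scene, so a ratio departing from unity signals photon-path-length modification by cloud or aerosol. Everything below is illustrative; the 4% tolerance and the sample values are hypothetical, not OCO-2 operational thresholds:

```python
def idp_cloud_flag(co2_weak, co2_strong, h2o_weak, h2o_strong, tol=0.04):
    """Flag a sounding as cloud/aerosol contaminated when the band-to-band
    ratios of non-scattering CO2 and H2O retrievals depart from unity
    by more than a tolerance (tol is a hypothetical threshold)."""
    return (abs(co2_weak / co2_strong - 1.0) > tol
            or abs(h2o_weak / h2o_strong - 1.0) > tol)

# Clear scene: both band ratios near 1.
clear = idp_cloud_flag(400.0, 401.0, 2000.0, 2010.0)   # False
# Scattering scene: CO2 ratio ~1.11, well outside tolerance.
cloudy = idp_cloud_flag(400.0, 360.0, 2000.0, 2010.0)  # True
```

The ABP's surface-pressure test works on the same photon-path principle in the O2 A-band, which is why the two screens are complementary.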
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows one to explore whether the covariance structure depends on the values of the higher levels and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
General method to find the attractors of discrete dynamic models of biological systems.
Gan, Xiao; Albert, Réka
2018-04-01
Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
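The virtual-node construction described above can be sketched compactly: a node with levels 0..m is replaced by m Boolean indicators "level ≥ k", and a consistent (monotone) indicator pattern maps back to the level by counting ON bits. This is only a schematic of the encoding, not the authors' software:

```python
def encode(level, max_level):
    """Map a multilevel state to its virtual Boolean nodes:
    virtual node k is ON exactly when level >= k."""
    return [level >= k for k in range(1, max_level + 1)]

def decode(bits):
    """Recover the multilevel state from a consistent (monotone)
    virtual-node pattern by counting ON indicators."""
    return sum(bits)

# A node with levels 0..3 in state 2 becomes three virtual nodes: ON, ON, OFF.
pattern = encode(2, 3)
```

Each virtual node then receives a quasi-Boolean update function, after which Boolean stable-motif machinery applies unchanged.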
NASA Astrophysics Data System (ADS)
Binh, Le Nguyen
2009-04-01
A geometrical and phasor representation technique is presented to illustrate the modulation of the lightwave carrier to generate quadrature amplitude modulated (QAM) signals. The modulation of the amplitude and phase of the lightwave carrier is implemented using only one dual-drive Mach-Zehnder interferometric modulator (MZIM) with the assistance of phasor techniques. Any multilevel modulation scheme can be generated, but we specifically illustrate the multilevel amplitude and differential phase shift keying (MADPSK) signals. The driving voltage levels are estimated for driving the traveling wave electrodes of the modulator. Phasor diagrams are extensively used to demonstrate the effectiveness of modulation schemes. MATLAB Simulink models are formed to generate the multilevel modulation formats, transmission, and detection in optically amplified fiber communication systems. Transmission performance is obtained for the multilevel optical signals and proven to be equivalent to or better than that of binary-level formats at an equivalent bit rate. Further, the resilience to nonlinear effects is much higher for MADPSK of 50% and 33% pulse width as compared to non-return-to-zero (NRZ) pulse shaping.
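The phasor picture of a dual-drive MZIM follows from its ideal transfer function: each arm contributes a phase shift of π·V/Vπ and the two arm fields sum, so the normalized output is E = (e^{jπV1/Vπ} + e^{jπV2/Vπ})/2. A minimal sketch of this ideal model (an illustration of the textbook relation, not the paper's Simulink model; the drive voltages are arbitrary):

```python
import cmath

def dual_drive_mzim(v1, v2, v_pi):
    """Normalized output field phasor of an ideal dual-drive MZIM:
    each arm adds phase pi*V/V_pi and the two arm fields sum."""
    p1 = cmath.exp(1j * cmath.pi * v1 / v_pi)
    p2 = cmath.exp(1j * cmath.pi * v2 / v_pi)
    return 0.5 * (p1 + p2)

# Zero drive: constructive interference, full carrier amplitude.
e_on = dual_drive_mzim(0.0, 0.0, 1.0)      # ~1+0j
# Push-pull at +/- V_pi/2: the arm phasors cancel, nulling the carrier.
e_off = dual_drive_mzim(0.5, -0.5, 1.0)    # ~0
```

Choosing unequal (non-push-pull) drive pairs moves the output phasor off the real axis, which is how a single device sets both amplitude and phase for multilevel constellations.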
Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L
2015-01-01
In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
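The data-generating process behind this extended multilevel AR(1) model can be sketched by simulation: each person gets a random intercept, a random autoregression coefficient, and a random (log-scale) innovation SD, then contributes a daily series. The hyperparameter values below are hypothetical, chosen only to mirror the 89-person, 42-day design:

```python
import math
import random

def simulate_person(n_days, mu, phi, innovation_sd, rng):
    """One person's series from y_t = mu + phi*(y_{t-1} - mu) + e_t,
    with e_t ~ N(0, innovation_sd^2)."""
    y = [mu]
    for _ in range(n_days - 1):
        y.append(mu + phi * (y[-1] - mu) + rng.gauss(0.0, innovation_sd))
    return y

def simulate_sample(n_people=89, n_days=42, seed=0):
    """Draw person-specific parameters from hypothetical hyperparameters,
    including a random innovation SD, then simulate each daily series."""
    rng = random.Random(seed)
    series = []
    for _ in range(n_people):
        mu = rng.gauss(5.0, 1.0)                          # random intercept
        phi = max(-0.95, min(0.95, rng.gauss(0.3, 0.1)))  # random autoregression
        sd = math.exp(rng.gauss(0.0, 0.3))                # random innovation SD
        series.append(simulate_person(n_days, mu, phi, sd, rng))
    return series

sample = simulate_sample()
```

Fitting a model that forces `sd` to be common across people to data generated this way is exactly the misspecification the simulations in the article show to bias the parameter estimates.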
Quintiliani, Lisa M; DeBiasse, Michele A; Branco, Jamie M; Bhosrekar, Sarah Gees; Rorie, Jo-Anna L; Bowen, Deborah J
2014-11-01
Intervention programs that change environments have the potential for greater population impact on obesity compared to individual-level programs. We began a cluster randomized, multi-component multi-level intervention to improve weight, diet, and physical activity among low-socioeconomic status public housing residents. Here we describe the rationale, intervention design, and baseline survey data. After approaching 12 developments, ten were randomized to intervention (n=5) or assessment-only control (n=5). All residents in intervention developments are welcome to attend any intervention component: health screenings, mobile food bus, walking groups, cooking demonstrations, and a social media campaign; all of which are facilitated by community health workers who are residents trained in health outreach. To evaluate weight and behavioral outcomes, a subgroup of female residents and their daughters age 8-15 were recruited into an evaluation cohort. In total, 211 households completed the survey (RR=46.44%). Respondents were Latino (63%), Black (24%), and had ≤ high school education (64%). Respondents reported ≤2 servings of fruits & vegetables/day (62%), visiting fast food restaurants 1+ times/week (32%), and drinking soft drinks daily or more (27%). The only difference between randomized groups was race/ethnicity, with more Black residents in the intervention vs. control group (28% vs. 19%, p=0.0146). Among low-socioeconomic status urban public housing residents, we successfully recruited and randomized families into a multi-level intervention targeting obesity. If successful, this intervention model could be adopted in other public housing developments or entities that also employ community health workers, such as food assistance programs or hospitals. Copyright © 2014 Elsevier Inc. All rights reserved.
A scrutiny of heterogeneity at the TCE Source Area BioREmediation (SABRE) test site
NASA Astrophysics Data System (ADS)
Rivett, M.; Wealthall, G. P.; Mcmillan, L. A.; Zeeb, P.
2015-12-01
A scrutiny of heterogeneity at the UK's Source Area BioREmediation (SABRE) test site is presented to better understand how spatial heterogeneity in subsurface properties and process occurrence may constrain the performance of enhanced in-situ bioremediation (EISB). The industrial site contained a 25- to 45-year-old trichloroethene (TCE) dense non-aqueous phase liquid (DNAPL) source that was exceptionally well monitored via a network of multilevel samplers and high-resolution core sampling. Moreover, monitoring was conducted within a 3-sided sheet-pile cell that allowed a controlled streamtube of flow to be drawn through the source zone by an extraction well. We primarily focus on the longitudinal transect of monitoring along the length of the cell, which provides a slice of 200 groundwater point samples along the streamtube of flow through the DNAPL source zone. TCE dechlorination is shown to be significant throughout the cell domain, but spatially heterogeneous in the occurrence and progress of dechlorination to lesser chlorinated ethenes; it is this heterogeneity in dechlorination that we primarily scrutinise. We illustrate the diagnostic use of the relative occurrence of TCE parent and daughter compounds to confirm: dechlorination in close proximity to DNAPL, enhanced during the bioremediation; persistent layers of DNAPL into which gradients of dechlorination products are evident; fast flowpaths through the source zone where dechlorination is less evident; and the importance of underpinning flow-regime understanding to EISB performance. Still, even with such spatial detail, there remains uncertainty in the interpretation of the dataset. Remaining issues include poor closure of the mass balance along the cell length for the multilevel-sampler-based monitoring, and they point to the need to understand lateral flows (even in the constrained cell), to achieve even greater spatial resolution of point monitoring, and potentially to quantify ethene degradation losses, which are not easily proven.
Buman, Matthew P; Mullane, Sarah L; Toledo, Meynard J; Rydell, Sarah A; Gaesser, Glenn A; Crespo, Noe C; Hannan, Peter; Feltes, Linda; Vuong, Brenna; Pereira, Mark A
2017-02-01
American workers spend 70-80% of their time at work being sedentary. Traditional approaches to increase moderate-vigorous physical activity (MVPA) may be perceived to be harmful to productivity. Approaches that target reductions in sedentary behavior and/or increases in standing or light-intensity physical activity [LPA] may not interfere with productivity and may be more feasible to achieve through small changes accumulated throughout the workday. METHODS/DESIGN: This group randomized trial (i.e., cluster randomized trial) will test the relative efficacy of two sedentary behavior focused interventions in 24 worksites across two states (N=720 workers). The MOVE+ intervention is a multilevel individual, social, environmental, and organizational intervention targeting increases in light-intensity physical activity in the workplace. The STAND+ intervention is the MOVE+ intervention with the addition of the installation and use of sit-stand workstations to reduce sedentary behavior and enhance light-intensity physical activity opportunities. Our primary outcome will be objectively measured changes in sedentary behavior and light-intensity physical activity over 12 months, with additional process measures at 3 months and longer-term sustainability outcomes at 24 months. Our secondary outcomes will be a clustered cardiometabolic risk score (comprised of fasting glucose, insulin, triglycerides, HDL-cholesterol, and blood pressure), workplace productivity, and job satisfaction. DISCUSSION: This study will determine the efficacy of a multi-level workplace intervention (including the use of a sit-stand workstation) to reduce sedentary behavior and increase LPA and the concomitant impact on cardiometabolic health, workplace productivity, and satisfaction. ClinicalTrials.gov Identifier: NCT02566317 (date of registration: 10/1/2015). Copyright © 2016 Elsevier Inc. All rights reserved.
Hobin, Erin P; Leatherdale, Scott; Manske, Steve; Dubin, Joel A; Elliott, Susan; Veugelers, Paul
2013-05-01
This study examined differences in students' time spent in physical activity (PA) across secondary schools in rural, suburban, and urban environments and identified the environment-level factors associated with these between-school differences in students' PA. Multilevel linear regression analyses were used to examine the environment- and student-level characteristics associated with time spent in PA among grades 9 to 12 students attending 76 secondary schools in Ontario, Canada, as part of the SHAPES-Ontario study. This approach was first conducted with the full data set testing for interactions between environment-level factors and school location. Then, school-location-specific regression models were run separately. Statistically significant between-school variation was identified among students attending urban (σ²_μ0 = 8959.63 [372.46]), suburban (σ²_μ0 = 8918.75 [186.20]), and rural (σ²_μ0 = 9403.17 [203.69]) schools, where school-level differences accounted for 4.0%, 2.0%, and 2.1% of the variability in students' time spent in PA, respectively. Students attending an urban or suburban school that provided another room for PA or was located within close proximity to a shopping mall or fast food outlet spent more time in PA. Students' time spent in PA varies by school location, and some features of the school environment have a different impact on students' time spent in PA by school location. Developing a better understanding of the environment-level characteristics associated with students' time spent in PA by school location may help public health and planning experts to tailor school programs and policies to the needs of students in different locations. © 2013, American School Health Association.
Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang
2011-01-01
This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared light captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noise. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired in the previous stage. After extracting the touch blobs from each captured image frame, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs across consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions introduced by noise during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. In the first, blob tracking establishes the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. In the second, touch event recognition identifies meaningful touch events based on the motion information of the touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
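The thresholding-plus-connected-component stage can be illustrated with a minimal pure-Python sketch. A fixed threshold stands in for the paper's automatic multilevel histogram thresholding, and 4-connectivity is assumed; the grid values are made up for the example.

```python
from collections import deque

def extract_blobs(image, threshold):
    """Label 4-connected bright blobs above `threshold`; return one list of
    (row, col) pixels per blob.  Breadth-first flood fill per seed pixel."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and image[ny][nx] > threshold:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

frame = [[0, 9, 9, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 8],
         [0, 0, 8, 8]]
print(len(extract_blobs(frame, 5)))  # 2 touch blobs
```

The per-blob pixel lists are what a tracking stage would reduce to centroids and match across frames.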
2016-01-01
Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness). Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. 
Furthermore, the granularity of our method makes it suitable for partial-copy detection; that is, it can detect copies by processing segments as short as 1 second. PMID:27861492
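The multilevel filtering idea can be sketched as a two-stage lookup over integer-coded binary fingerprints: a cheap global-fingerprint pass prunes the database, and the finer local fingerprints are matched only against the survivors. All names, bit widths, and distance budgets below are illustrative assumptions, not the paper's parameters.

```python
def hamming(a, b):
    """Bit-level distance between two integer-coded binary fingerprints."""
    return bin(a ^ b).count("1")

def lookup(db, global_fp, local_fp, coarse_budget, fine_budget):
    """Two-stage (multilevel) filtering: coarse global pass, then a fine
    local-fingerprint match restricted to the coarse survivors."""
    survivors = [(name, g, l) for name, g, l in db
                 if hamming(g, global_fp) <= coarse_budget]
    return [name for name, g, l in survivors
            if hamming(l, local_fp) <= fine_budget]

db = [("copy",  0b10110110, 0b1111),
      ("other", 0b00000001, 0b0000)]
print(lookup(db, 0b10110111, 0b1110, coarse_budget=2, fine_budget=1))  # ['copy']
```

The saving comes from never running the fine match on entries the coarse pass already rejected.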
Kim, Yongjoo; Austin, S Bryn; Subramanian, S V; Thomas, Jennifer J; Eddy, Kamryn T; Franko, Debra L; Rodgers, Rachel F; Kawachi, Ichiro
2018-02-01
To investigate the prevalence and risk factors for disordered weight control behaviors (DWCB) in South Korean adolescents at multiple levels, including individual, family, school, and geographic area. We drew participants from the 11th Korea Youth Risk Behavior Web-based Survey, conducted in 2015, with 65,529 adolescents (31,687 girls, 33,842 boys) aged 12-18 years. DWCB was defined as engaging in any of the following behaviors for weight control over the past month: fasting, one-food diet (eating only one food over an extended period of time for weight control), vomiting, and taking laxatives/diuretics/unprescribed diet pills. Sex-stratified four-level multilevel logistic models examined potential predictors of DWCB, including age, body-mass index, puberty, perceived household economic status, parental education, living structure, school type and sex-composition, percentage of students participating in school nutrition programs, and urbanicity. Overall, 6.2% of Korean adolescents (8.9% of girls, 3.7% of boys) exhibited any DWCB. We found significant between-school variation among girls and boys and between-classroom variation among girls. Older age, overweight/obesity, pubertal maturity, high household economic status (vs. mid-range economic status), and vocational schooling (vs. general) were positively associated with DWCB among girls and boys. Low household economic status (vs. mid-range economic status), higher parental education, and coeducational schooling (vs. single-sex) were positively associated with DWCB among girls only. The findings suggest that DWCB are prevalent among Korean adolescents across age, sex, and socioeconomic status. Social contextual factors including school and familial environmental factors, as well as individual characteristics, should be considered when developing effective prevention strategies. © 2018 Wiley Periodicals, Inc.
Scalable algorithms for 3D extended MHD.
NASA Astrophysics Data System (ADS)
Chacon, Luis
2007-11-01
In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU ∼ N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU ∼ N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm. 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys. 188 (2), 573-592 (2003)].
Suvak, Michael K; Walling, Sherry M; Iverson, Katherine M; Taft, Casey T; Resick, Patricia A
2009-12-01
Multilevel modeling is a powerful and flexible framework for analyzing nested data structures (e.g., repeated measures or longitudinal designs). The authors illustrate a series of multilevel regression procedures that can be used to elucidate the nature of the relationship between two variables across time. The goal is to help trauma researchers become more aware of the utility of multilevel modeling as a tool for increasing the field's understanding of posttraumatic adaptation. These procedures are demonstrated by examining the relationship between two posttraumatic symptoms, intrusion and avoidance, across five assessment points in a sample of rape and robbery survivors (n = 286). Results revealed that changes in intrusion were highly correlated with changes in avoidance over the 18-month posttrauma period.
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used at the subsystem level to stabilize motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed at the subsystem level, and global control was again used to reduce (nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general design tools and then used in the design of the LST control system. The methods are entirely computerized, so they can accommodate higher-order LST models, with both conceptual and numerical advantages over standard straightforward design techniques.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of approximating kernel matrices by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with the kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
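The structure being exploited can be illustrated with a minimal sketch (hypothetical, not the paper's algorithm): a circulant matrix is fully determined by its first column, and its matrix-vector product is a cyclic convolution. The loop below is O(n²) for clarity; diagonalizing the circulant with an FFT evaluates the same product in O(n log n), which is the source of the quasi-linear complexity claim.

```python
def circulant_matvec(c, x):
    """y = C x, where C is the circulant matrix with first column c,
    i.e. C[i][j] = c[(i - j) % n].  Written as an explicit cyclic
    convolution; an FFT-based version has O(n log n) cost."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

# Multiplying by the first standard basis vector recovers the first column.
print(circulant_matvec([1, 2, 3], [1, 0, 0]))  # [1, 2, 3]
```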
TEMPEST in a gallimaufry: applying multilevel systems theory to person-in-context research.
Peck, Stephen C
2007-12-01
Terminological ambiguity and inattention to personal and contextual multilevel systems undermine personality, self, and identity theories. Hierarchical and heterarchical systems theories are used to describe contents and processes existing within and across three interrelated multilevel systems: levels of organization, representation, and integration. Materially nested levels of organization are used to distinguish persons from contexts and personal from social identity. Functionally nested levels of representation are used to distinguish personal identity from the sense of identity and symbolic (belief) from iconic (schema) systems. Levels of integration are hypothesized to unfold separately but interdependently across levels of representation. Multilevel system configurations clarify alternative conceptualizations of traits and contextualized identity. Methodological implications for measurement and analysis (e.g., integrating variable- and pattern-centered methods) are briefly described.
Mathematical model comparing of the multi-level economics systems
NASA Astrophysics Data System (ADS)
Brykalov, S. M.; Kryanev, A. V.
2017-12-01
A mathematical model (scheme) for multi-level comparison of economic systems, each characterized by a system of indices, is developed. In this model, indicators from peer review and from forecasting of the economic system under consideration can be used. The model can account for uncertainty in the estimated parameter values and in the expert estimations. It uses a multi-criteria approach based on Pareto solutions.
Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients
Kolev, Tzanio V.; Xu, Jinchao; Zhu, Yunrong
2015-08-23
In this study, we extend some of the multilevel convergence results obtained by Xu and Zhu to the case of second-order linear reaction-diffusion equations. Specifically, we consider multilevel preconditioners for solving the linear systems arising from the linear finite element approximation of the problem, where both diffusion and reaction coefficients are piecewise-constant functions. We discuss in detail the influence of both the discontinuous reaction and diffusion coefficients on the performance of the classical BPX and multigrid V-cycle preconditioners.
Pulse design for multilevel systems by utilizing Lie transforms
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-03-01
We put forward a scheme to design pulses to manipulate multilevel systems via Lie transforms. A formula for reverse-constructing a control Hamiltonian is given and applied to pulse design in three- and four-level systems as examples. To demonstrate the validity of the scheme, we perform numerical simulations, which show that the population transfers for cascaded three-level and N-type four-level Rydberg atoms can be completed successfully with high fidelities. The scheme may therefore benefit quantum information tasks based on multilevel systems.
A Goal Programming Model for the Siting of Multilevel EMS Systems.
1980-03-01
Management," unpublished Ph.D. thesis, University of Texas, Austin, Texas, 1971. -23- (11) Daskin , M. and E. Stern, " A Multiobjective Set Covering...GOAL PROGRAM4MING MODEL FOR THE SITING OF MULTILEVEL EMS SYSTE-ETC(U) UNM1AR 80 A CHARNES, J E STORBECK N000iA-75-C-569 WICLASSIFIED CCS-366 N...366 A GOAL PROGRAMMING MODEL FOR THE SITING OF MULTILEVEL EMS SYSTEMS by A . Charnes J. Storbeck March 1980 This project was partially supported by
A multilevel control approach for a modular structured space platform
NASA Technical Reports Server (NTRS)
Chichester, F. D.; Borelli, M. T.
1981-01-01
A three-axis mathematical representation of a modularly assembled space platform consisting of interconnected discrete masses, including a deployable truss module, was derived for digital computer simulation. The platform attitude control system was developed to provide multilevel control utilizing the Gauss-Seidel second-level formulation along with an extended form of linear quadratic regulator techniques. The objectives of the multilevel control are to decouple the space platform's spatial axes and to accommodate modification of the platform's configuration for each of the decoupled axes.
Adams-Based Rover Terramechanics and Mobility Simulator - ARTEMIS
NASA Technical Reports Server (NTRS)
Trease, Brian P.; Lindeman, Randel A.; Arvidson, Raymond E.; Bennett, Keith; VanDyke, Lauren P.; Zhou, Feng; Iagnemma, Karl; Senatore, Carmine
2013-01-01
The Mars Exploration Rovers (MERs), Spirit and Opportunity, far exceeded their original drive distance expectations and have traveled, at the time of this reporting, a combined 29 kilometers across the surface of Mars. The Rover Sequencing and Visualization Program (RSVP), the current program used to plan drives for MERs, is only a kinematic simulator of rover movement. Therefore, rover response to various terrains and soil types cannot be modeled. Although sandbox experiments attempt to model rover-terrain interaction, these experiments are time-intensive and costly, and they cannot be used within the tactical timeline of rover driving. Imaging techniques and hazard avoidance features on MER help to prevent the rover from traveling over dangerous terrains, but mobility issues have shown that these methods are not always sufficient. ARTEMIS, a dynamic modeling tool for MER, allows planned drives to be simulated before commands are sent to the rover. The deformable soils component of this model allows rover-terrain interactions to be simulated to determine if a particular drive path would take the rover over terrain that would induce hazardous levels of slip or sink. When used in the rover drive planning process, dynamic modeling reduces the likelihood of future mobility issues because high-risk areas could be identified before drive commands are sent to the rover, and drives planned over these areas could be rerouted. The ARTEMIS software consists of several components. These include a preprocessor, Digital Elevation Models (DEMs), Adams rover model, wheel and soil parameter files, MSC Adams GUI (commercial), MSC Adams dynamics solver (commercial), terramechanics subroutines (FORTRAN), a contact detection engine, a soil modification engine, and output DEMs of deformed soil. The preprocessor is used to define the terrain (from a DEM) and define the soil parameters for the terrain file. The Adams rover model is placed in this terrain. 
Wheel and soil parameter files can be altered in the respective text files. The rover model and terrain are viewed in Adams View, the GUI for ARTEMIS. The Adams dynamics solver calls terramechanics subroutines in FORTRAN containing the Bekker-Wong equations.
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors, whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
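A minimal 1-D sketch of the idea, assuming nearest-level quantization and propagation of the full residual to the next sample (the procedure described above is 2-D and additionally studies the threshold parameters, so this is an illustration only):

```python
def error_diffuse(signal, levels):
    """Error diffusion generalized from binary to multi-level quantization:
    each sample is snapped to the nearest allowed level and the residual
    error is carried forward onto the next sample."""
    out, err = [], 0.0
    for s in signal:
        v = s + err
        q = min(levels, key=lambda lv: abs(lv - v))  # nearest quantization level
        err = v - q                                  # residual diffused forward
        out.append(q)
    return out

# A constant 0.4 input quantized to levels {0, 0.5, 1}: the output dithers
# between 0.5 and 0.0 so that its running average tracks 0.4.
print(error_diffuse([0.4, 0.4, 0.4, 0.4], [0.0, 0.5, 1.0]))  # [0.5, 0.5, 0.0, 0.5]
```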
Analyzing average and conditional effects with multigroup multilevel structural equation models
Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf
2014-01-01
Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668
Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information
Wang, Xiaohong; Wang, Lizhi
2017-01-01
Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930
NASA Astrophysics Data System (ADS)
Taissariyeva, K.; Issembergenov, N.; Dzhobalaeva, G.; Usembaeva, S.
2016-09-01
This paper considers a 6 kW multilevel transistor inverter, supplied by 12 storage batteries, for converting solar-battery energy into electric power. At the output of the multilevel transistor inverter, a voltage close to sinusoidal in form can be obtained. The main objective of this inverter is the conversion of solar energy into electric power at industrial frequency. The harmonic content of the resulting output voltage curves has been analyzed. The paper sets forth the developed scheme of the multilevel transistor inverter (DC-to-AC converter), which yields an output voltage close to sinusoidal form and also allows regulation of the output voltage level. Results of computer modeling and experimental studies are presented.
Relating Measurement Invariance, Cross-Level Invariance, and Multilevel Reliability.
Jak, Suzanne; Jorgensen, Terrence D
2017-01-01
Data often have a nested, multilevel structure, for example when data are collected from children in classrooms. This kind of data complicates the evaluation of reliability and measurement invariance, because several properties can be evaluated at both the individual level and the cluster level, as well as across levels. For example, cross-level invariance implies equal factor loadings across levels, which is needed to give latent variables at the two levels a similar interpretation. Reliability at a specific level refers to the ratio of true score variance over total variance at that level. This paper aims to shed light on the relation between reliability, cross-level invariance, and strong factorial invariance across clusters in multilevel data. Specifically, we illustrate how strong factorial invariance across clusters implies cross-level invariance and perfect reliability at the between level in multilevel factor models.
Analyzing Multiple Outcomes in Clinical Research Using Multivariate Multilevel Models
Baldwin, Scott A.; Imel, Zac E.; Braithwaite, Scott R.; Atkins, David C.
2014-01-01
Objective Multilevel models have become a standard data analysis approach in intervention research. Although the vast majority of intervention studies involve multiple outcome measures, few studies use multivariate analysis methods. The authors discuss multivariate extensions to the multilevel model that can be used by psychotherapy researchers. Method and Results Using simulated longitudinal treatment data, the authors show how multivariate models extend common univariate growth models and how the multivariate model can be used to examine multivariate hypotheses involving fixed effects (e.g., does the size of the treatment effect differ across outcomes?) and random effects (e.g., is change in one outcome related to change in the other?). An online supplemental appendix provides annotated computer code and simulated example data for implementing a multivariate model. Conclusions Multivariate multilevel models are flexible, powerful models that can enhance clinical research. PMID:24491071
Multilevel and Community-Level Interventions with Native Americans: Challenges and Opportunities.
Blue Bird Jernigan, Valarie; D'Amico, Elizabeth J; Duran, Bonnie; Buchwald, Dedra
2018-06-02
Multilevel and community-level interventions that target the social determinants of health and ultimately health disparities are seldom conducted in Native American communities. To contextualize the importance of multilevel and community-level interventions, major contributors to and causes of health disparities in Native communities are highlighted. Among the many documented socioeconomic factors influencing health are poverty, low educational attainment, and lack of insurance. Well-recognized health disparities include obesity, diabetes, and hypertension. Selected challenges of implementing community-level and multilevel interventions in Native communities are summarized such as the shortage of high-quality population health data and validated measurement tools. To address the lack of multilevel and community-level interventions, the National Institutes of Health created the Intervention Research to Improve Native American Health (IRINAH) program which solicits proposals that develop, adapt, and test strategies to address these challenges and create interventions appropriate for Native populations. A discussion of the strategies that four of the IRINAH grantees are implementing underscores the importance of community-based participatory policy work, the development of new partnerships, and reconnection with cultural traditions. Based on the work of the nearly 20 IRINAH grantees, ameliorating the complex social determinants of health disparities among Native people will require (1) support for community-level and multilevel interventions that examine contemporary and historical factors that shape current conditions; (2) sustainability plans; (3) forefronting the most challenging issues; (4) financial resources and time to collaborate with tribal leaders; and (5) a solid evidence base.
Customized binary and multi-level HfO2-x-based memristors tuned by oxidation conditions.
He, Weifan; Sun, Huajun; Zhou, Yaxiong; Lu, Ke; Xue, Kanhao; Miao, Xiangshui
2017-08-30
The memristor is a promising candidate for next-generation non-volatile memory, especially when based on HfO2-x, given its compatibility with advanced CMOS technologies. Although various resistive transitions have been reported independently, customized binary and multi-level memristors in a unified HfO2-x material have not been studied. Here we report Pt/HfO2-x/Ti memristors with two memristive modes, forming-free behavior, and low operation voltage, tuned by the oxidation conditions of the HfO2-x films. As the O/Hf ratios of the HfO2-x films increase, the forming voltages, SET voltages, and Roff/Ron windows increase regularly, while their resistive transitions change from gradual to sharp in the I/V sweep. Two memristors with typical resistive transitions were studied to customize binary and multi-level memristive modes, respectively. For the binary mode, high-speed switching over 10^3 pulses (10 ns) and retention at 85 °C (>10^4 s) were achieved. For the multi-level mode, 12 stable resistance levels were confirmed by ongoing multi-window switching (pulse widths ranging from 10 ns to 1 μs, with 10 cycles at each width). Our customized binary and multi-level HfO2-x-based memristors show high-speed switching, multi-level storage, and excellent stability, and can be applied separately to logic computing and neuromorphic computing, making them suitable for in-memory computing chips when the deposition atmosphere is fine-tuned.
NASA Astrophysics Data System (ADS)
Ghoudelbourk, Sihem; Dib, D.; Meghni, B.; Zouli, M.
2017-02-01
The paper deals with a multilevel converter control strategy for photovoltaic systems integrated in distribution grids. The objective of the proposed work is to design multilevel inverters for solar energy applications so as to reduce the total harmonic distortion (THD) and improve power quality. The multilevel inverter power structure plays a vital role in every aspect of the power system; it is easier to produce a high-power, high-voltage inverter with a multilevel structure. Multilevel inverter topologies have several advantages, such as high output voltage, lower THD, and reduced voltage ratings of the power semiconductor switching devices. The proposed control strategy implements selective harmonic elimination (SHE) modulation for eleven levels. SHE is an efficient strategy for eliminating selected harmonics through judicious selection of the inverter firing angles, which removes the need for expensive low-pass filters in the system. Previous research assumed constant, equal DC sources with invariant behavior; this work extends earlier results to time-varying DC sources, which are typical of lead-acid batteries used in PV systems. The study also investigates methods to minimize the total harmonic distortion of the synthesized multilevel waveform and to help balance the battery voltages. The harmonic elimination method was used to eliminate the selected lower-order dominant harmonics resulting from the inverter switching action.
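For a quarter-wave-symmetric staircase waveform built from equal sources switched at firing angles θᵢ, only odd harmonics appear, with amplitudes b_n = (4·Vdc/(nπ))·Σᵢ cos(nθᵢ); SHE chooses the angles so that selected b_n vanish. A minimal sketch of the harmonic and THD computation (the angles below are illustrative placeholders, not a solved SHE set for this paper's inverter):

```python
import math

def harmonic(n, angles, vdc=1.0):
    """Amplitude of the n-th (odd) harmonic of a quarter-wave-symmetric
    staircase waveform with one equal source per firing angle."""
    return (4.0 * vdc / (n * math.pi)) * sum(math.cos(n * a) for a in angles)

def thd(angles, n_max=49):
    """THD of the staircase up to harmonic n_max (even harmonics vanish)."""
    b1 = harmonic(1, angles)
    high = sum(harmonic(n, angles) ** 2 for n in range(3, n_max + 1, 2))
    return math.sqrt(high) / abs(b1)

# Illustrative firing angles (degrees) for a five-source staircase:
angles = [math.radians(d) for d in (6.6, 18.9, 27.2, 45.1, 62.2)]
print(round(thd(angles), 4))
```

Solving for angles that zero out chosen harmonics turns this into a small system of transcendental equations, which is where the time-varying battery voltages complicate the problem.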
Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images
NASA Astrophysics Data System (ADS)
Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.
2012-02-01
Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease and treatment monitoring. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method are presented to generate multi-level A/V tree representations of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The method is primarily built on arc skeleton computation followed by tree construction and topologic and geometric analysis of the skeleton. It starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.
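The noisy-branch pruning step can be illustrated with a toy sketch (the dict-based tree, edge-length map, and threshold below are invented for illustration; the paper operates on skeletons of volumetric A/V trees):

```python
def subtree_length(children, length, node):
    """Total arc length hanging below `node` (edge lengths in `length`)."""
    return sum(length[(node, c)] + subtree_length(children, length, c)
               for c in children.get(node, []))

def prune(children, length, node, min_len):
    """Recursively drop child branches whose total arc length < min_len,
    a simple stand-in for noisy-branch removal on a skeleton tree."""
    kept = [c for c in children.get(node, [])
            if length[(node, c)] + subtree_length(children, length, c) >= min_len]
    children[node] = kept
    for c in kept:
        prune(children, length, c, min_len)
    return children

children = {"root": ["a", "b"], "a": ["a1"], "b": []}
length = {("root", "a"): 8.0, ("a", "a1"): 4.0, ("root", "b"): 0.7}
prune(children, length, "root", min_len=2.0)
print(children["root"])  # the short spur "b" is gone
```

The paper's algorithms additionally merge bad junctions, which requires rewiring arcs rather than just dropping them.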
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Omega), where Omega is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Omega, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
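In generic notation (assumed here, not taken from the paper), with R the discretized projection operator and b the measured projection data, the primal problem is

```latex
\max_{u \in L^{1}(\Omega),\; u \ge 0} \; -\int_{\Omega} u(x)\,\ln u(x)\,dx
\quad \text{subject to} \quad Ru = b ,
```

and the Fenchel dual replaces the unknown image u by one multiplier per projection constraint, which is what keeps the variable count small despite the rapidly growing optimal grid.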
Manual control models of industrial management
NASA Technical Reports Server (NTRS)
Crossman, E. R. F. W.
1972-01-01
The industrial engineer is often required to design and implement control systems and organizations for manufacturing and service facilities, to optimize quality, delivery, and yield, and to minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by time-shared human effort. A modular structure incorporating certain new types of functional elements has been developed; it forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast-time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
Social Capital and Health: A Review of Prospective Multilevel Studies
Murayama, Hiroshi; Fujiwara, Yoshinori; Kawachi, Ichiro
2012-01-01
Background This article presents an overview of the concept of social capital, reviews prospective multilevel analytic studies of the association between social capital and health, and discusses intervention strategies that enhance social capital. Methods We conducted a systematic search of published peer-reviewed literature on the PubMed database and categorized studies according to health outcome. Results We identified 13 articles that satisfied the inclusion criteria for the review. In general, both individual social capital and area/workplace social capital had positive effects on health outcomes, regardless of study design, setting, follow-up period, or type of health outcome. Prospective studies that used a multilevel approach were mainly conducted in Western countries. Although we identified some cross-sectional multilevel studies that were conducted in Asian countries, including Japan, no prospective studies have been conducted in Asia. Conclusions Prospective evidence from multilevel analytic studies of the effect of social capital on health is very limited at present. If epidemiologic findings on the association between social capital and health are to be put to practical use, we must gather additional evidence and explore the feasibility of interventions that build social capital as a means of promoting health. PMID:22447212
Dual deep modeling: multi-level modeling with dual potencies and its formalization in F-Logic.
Neumayr, Bernd; Schuetz, Christoph G; Jeusfeld, Manfred A; Schrefl, Michael
2018-01-01
An enterprise database contains a global, integrated, and consistent representation of a company's data. Multi-level modeling facilitates the definition and maintenance of such an integrated conceptual data model in a dynamic environment of changing data requirements of diverse applications. Multi-level models transcend the traditional separation of class and object with clabjects as the central modeling primitive, which allows for a more flexible and natural representation of many real-world use cases. In deep instantiation, the number of instantiation levels of a clabject or property is indicated by a single potency. Dual deep modeling (DDM) differentiates between source potency and target potency of a property or association and supports the flexible instantiation and refinement of the property by statements connecting clabjects at different modeling levels. DDM comes with multiple generalization of clabjects, subsetting/specialization of properties, and multi-level cardinality constraints. Examples are presented using a UML-style notation for DDM together with UML class and object diagrams for the representation of two-level user views derived from the multi-level model. Syntax and semantics of DDM are formalized and implemented in F-Logic, supporting the modeler with integrity checks and rich query facilities.
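Deep instantiation can be sketched in a few lines (the `Clabject` class, names, and potency bookkeeping below are illustrative assumptions, not the paper's F-Logic formalization): a property with potency k survives k instantiation steps, losing one potency per level.

```python
from dataclasses import dataclass, field

@dataclass
class Clabject:
    """Toy clabject: simultaneously a class and an object at some level."""
    name: str
    level: int
    properties: dict = field(default_factory=dict)  # name -> (potency, value)

    def instantiate(self, name, **values):
        """One level down: a potency-k property is passed on with potency
        k-1; when potency reaches 0 it receives its concrete value."""
        props = {}
        for p, (k, v) in self.properties.items():
            if k >= 1:
                props[p] = (k - 1, values.get(p, v))
        return Clabject(name, self.level - 1, props)

product = Clabject("Product", level=2, properties={"taxRate": (2, None)})
book = product.instantiate("Book")                # taxRate now has potency 1
copy = book.instantiate("MyCopy", taxRate=0.07)   # potency 0: a plain value
print(copy.properties)  # → {'taxRate': (0, 0.07)}
```

Dual deep modeling would split the single potency used here into separate source and target potencies on properties and associations.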
Martin, Angela; Karanika-Murray, Maria; Biron, Caroline; Sanderson, Kristy
2016-08-01
Although there have been several calls for incorporating multiple levels of analysis in employee health and well-being research, studies examining the interplay between individual, workgroup, organizational and broader societal factors in relation to employee mental health outcomes remain an exception rather than the norm. At the same time, organizational intervention research and practice also tends to be limited by a single-level focus, omitting potentially important influences at multiple levels of analysis. The aims of this conceptual paper are to help progress our understanding of work-related determinants of employee mental health by the following: (1) providing a rationale for routine multilevel assessment of the psychosocial work environment; (2) discussing how a multilevel perspective can improve related organizational interventions; and (3) highlighting key theoretical and methodological considerations relevant to these aims. We present five recommendations for future research, relating to using appropriate multilevel research designs, justifying group-level constructs, developing group-level measures, expanding investigations to the organizational level and developing multilevel approaches to intervention design, implementation and evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
Addosooki, Ahmad I; El-deen, Mohamed Alam
2015-01-01
Purpose A retrospective study to compare the radiologic and clinical outcomes of 2 different anterior approaches, multilevel anterior cervical discectomy with fusion (ACDF) using autologous tricortical bone graft versus anterior cervical corpectomy with fusion (ACCF) using free vascularized fibular graft (FVFG), for the management of cervical spondylotic myelopathy (CSM). Methods A total of 15 patients who underwent ACDF or ACCF using FVFG for multilevel CSM were divided into two groups. Group A (n = 7) underwent ACDF and group B (n = 8) ACCF. Clinical outcomes using the Japanese Orthopaedic Association (JOA) score, perioperative parameters including operation time and hospital stay, radiological parameters including fusion rate and cervical lordosis, and complications were compared. Results Both group A and group B demonstrated significant increases in JOA scores. Patients who underwent ACDF experienced significantly shorter operation times and hospital stays. Both groups showed significant increases in postoperative cervical lordosis and achieved the same fusion rate (100 %). No major complications were encountered in either group. Conclusion Both ACDF and ACCF using FVFG provide satisfactory clinical outcomes and fusion rates for multilevel CSM. However, multilevel ACDF is associated with better radiologic parameters, shorter hospital stays and shorter operative times. PMID:26767152
Squeezed light from conventionally pumped multi-level lasers
NASA Technical Reports Server (NTRS)
Ralph, T. C.; Savage, C. M.
1992-01-01
We have calculated the amplitude squeezing in the output of several conventionally pumped multi-level lasers. We present results which show that standard laser models can produce significantly squeezed outputs in certain parameter ranges.
Vassallo, Rebecca; Durrant, Gabriele B; Smith, Peter W F; Goldstein, Harvey
2015-01-01
The paper investigates two different multilevel approaches, the multilevel cross-classified and the multiple-membership models, for the analysis of interviewer effects on wave non-response in longitudinal surveys. The models proposed incorporate both interviewer and area effects to account for the non-hierarchical structure, the influence of potentially more than one interviewer across waves and possible confounding of area and interviewer effects arising from the non-random allocation of interviewers across areas. The methods are compared by using a data set: the UK Family and Children Survey. PMID:25598587
Zeller, Katherine A.; Vickers, T. Winston; Ernest, Holly B.; Boyce, Walter M.
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. 
Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species. PMID:28609466
Crush Analyses of Multi-Level Equipment
DOT National Transportation Integrated Search
2006-11-06
Non-linear large deformation crush analyses were conducted on a multi-level cab car typical of those in operation by the Southern California Regional Rail Authority (SCRRA) in California. The motivation for these analyses was a collision, which occur...
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
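A standard two-level growth model, sketched here in generic notation (the authors' multiset extension adds parallel outcome series and cross-series interaction terms, which are not shown), has the form

```latex
y_{ti} = \pi_{0i} + \pi_{1i}\, t + e_{ti}, \qquad
\pi_{0i} = \beta_{00} + r_{0i}, \qquad
\pi_{1i} = \beta_{10} + r_{1i},
```

where t indexes time points nested within individual i, the π terms are individual growth parameters, and the r terms are individual-level random effects.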
Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains
NASA Astrophysics Data System (ADS)
Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao
2017-11-01
We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are thus integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The approach of applying QR codes and fingerprints in GT domains holds considerable potential for future information-security applications.
The multi-level perspective analysis: Indonesia geothermal energy transition study
NASA Astrophysics Data System (ADS)
Wisaksono, A.; Murphy, J.; Sharp, J. H.; Younger, P. L.
2018-01-01
The study adopts a multi-level perspective on technology transitions to analyse how the development of geothermal energy in Indonesia can compete against the incumbent fossil-fuelled energy sources. The three levels of the multi-level perspective are the socio-technical landscape (ST-landscape), the socio-technical regime (ST-regime), and niche innovations in Indonesian geothermal development. The identification, mapping and analysis of the dynamic relationships between these levels are the central pillars of the multi-level perspective framework. The analysis considers the sets of rules, the actors, and the controversies that may arise in the technological transition process. The identified geothermal resource risks are the basis of the emerging technological innovations in Indonesian geothermal development. The analysis reveals the transition pathway, yielding a forecast for the Indonesian geothermal technology transition in the form of scenarios and probable impacts.
Health Reforms as Examples of Multilevel Interventions in Cancer Care
Flood, Ann B.; Fennell, Mary L.; Devers, Kelly J.
2012-01-01
To increase access and improve system quality and efficiency, President Obama signed the Patient Protection and Affordable Care Act with sweeping changes to the nation’s health-care system. Although not intended to be specific to cancer, the act's implementation will profoundly impact cancer care. Its components will influence multiple levels of the health-care environment including states, communities, health-care organizations, and individuals seeking care. To illustrate these influences, two reforms are considered: 1) accountable care organizations and 2) insurance-based reforms to gather evidence about effectiveness. We discuss these reforms using three facets of multilevel interventions: 1) their intended and unintended consequences, 2) the importance of timing, and 3) their implications for cancer. The success of complex health reforms requires understanding the scientific basis and evidence for carrying out such multilevel interventions. Conversely and equally important, successful implementation of multilevel interventions depends on understanding the political setting and goals of health-care reform. PMID:22623600
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework, as well as computational time savings and better optimization performance are obtained than that reported in the literature on this subject.
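At its lowest level the method reduces to a standard particle swarm update; a minimal single-level PSO is sketched below on a toy depot-placement cost (the cost function, bounds, and coefficients are illustrative assumptions, and the paper's MLPSO nests several such optimizations to handle the full CLRP):

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-level particle swarm optimizer (minimization)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

random.seed(0)
# Toy location cost: summed distance from one depot to fixed demand points.
demand = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
cost = lambda p: sum(((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5 for x, y in demand)
best, best_cost = pso(cost, dim=2, bounds=(-10.0, 10.0))
print(round(best_cost, 3))
```

In the multi-level variant, the outer swarm fixes facility-location decisions while inner swarms optimize routing and planning under those decisions.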
On multi-level thinking and scientific understanding
NASA Astrophysics Data System (ADS)
McIntyre, Michael Edgeworth
2017-10-01
Professor Duzheng YE's name has been familiar to me ever since my postdoctoral years at MIT with Professors Jule CHARNEY and Norman PHILLIPS, back in the late 1960s. I had the enormous pleasure of meeting Professor YE personally in 1992 in Beijing. His concern to promote the very best science and to use it well, and his thinking on multi-level orderly human activities, reminds me not only of the communication skills we need as scientists but also of the multi-level nature of science itself. Here I want to say something (a) about what science is; (b) about why multi-level thinking—and taking more than one viewpoint—is so important for scientific as well as for other forms of understanding; and (c) about what is meant, at a deep level, by "scientific understanding" and trying to communicate it, not only with lay persons but also across professional disciplines. I hope that Professor YE would approve.
Multilevel animal societies can emerge from cultural transmission
Cantor, Maurício; Shoemaker, Lauren G.; Cabral, Reniel B.; Flores, César O.; Varga, Melinda; Whitehead, Hal
2015-01-01
Multilevel societies, containing hierarchically nested social levels, are remarkable social structures whose origins are unclear. The social relationships of sperm whales are organized in a multilevel society with an upper level composed of clans of individuals communicating using similar patterns of clicks (codas). Using agent-based models informed by an 18-year empirical study, we show that clans are unlikely products of stochastic processes (genetic or cultural drift) but likely originate from cultural transmission via biased social learning of codas. Distinct clusters of individuals with similar acoustic repertoires, mirroring the empirical clans, emerge when whales learn preferentially the most common codas (conformism) from behaviourally similar individuals (homophily). Cultural transmission seems key in the partitioning of sperm whales into sympatric clans. These findings suggest that processes similar to those that generate complex human cultures could not only be at play in non-human societies but also create multilevel social structures in the wild. PMID:26348688
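The two learning biases can be captured in a toy agent-based sketch (the population size, coda alphabet, and weights below are invented; the paper's models are informed by 18 years of field data): at each update, an agent adopts the most common coda among sampled models (conformism), with extra weight on models that already sound like it (homophily).

```python
import random
from collections import Counter

random.seed(1)
n_agents, sample_size, steps = 60, 10, 600
codas = [random.choice("ABCDE") for _ in range(n_agents)]  # coda repertoires

for _ in range(steps):
    i = random.randrange(n_agents)
    models = random.sample(range(n_agents), sample_size)
    tally = Counter()
    for j in models:
        # homophily: behaviourally similar models count double
        tally[codas[j]] += 2 if codas[j] == codas[i] else 1
    codas[i] = tally.most_common(1)[0][0]  # conformism: copy the commonest

print(Counter(codas).most_common(2))  # repertoire clusters emerge over time
```

Under pure drift (copying a uniformly random model with no weighting), the same code produces much weaker, slower clustering, which mirrors the paper's contrast between stochastic processes and biased social learning.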
A New Family of Multilevel Grid Connected Inverters Based on Packed U Cell Topology.
Pakdel, Majid; Jalilzadeh, Saeid
2017-09-29
In this paper a novel packed U cell (PUC) based multilevel grid-connected inverter is proposed. Unlike the U cell arrangement, which consists of two power switches and one capacitor, the proposed converter topology uses a low-voltage DC supply from renewable energy resources such as photovoltaic (PV) arrays as its base power source. The proposed topology offers higher efficiency and lower cost by using a small number of power switches and a low-voltage DC source supplied from renewable energy resources. The other capacitor voltages are derived from this base DC source using isolated DC-DC power converters. The operating principle of the proposed transformerless multilevel grid-connected inverter is analyzed theoretically, and its operation is verified through simulation studies. An experimental prototype based on an STM32F407 Discovery controller board was built to confirm the simulation results.
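Seven-level PUC operation can be illustrated by enumerating the output combinations of the cell voltages (a sketch under the common assumption that the capacitor is regulated to one third of the DC source; this is not taken from the paper's specific design):

```python
from fractions import Fraction

# Source voltage V1 and capacitor voltage V2, assumed regulated to V1/3.
V1, V2 = Fraction(1), Fraction(1, 3)

# The switch states of a PUC cell combine the two voltages into these
# magnitudes, each available with either polarity across the load.
combos = {Fraction(0), V2, V1 - V2, V1}
levels = sorted({s * v for v in combos for s in (-1, 1)})
print(levels)  # 7 distinct levels: 0, ±V1/3, ±2V1/3, ±V1
```

The finer the level spacing, the closer the staircase output approximates a sinusoid, which is what lets the topology reach low THD with few switches.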
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
Generalize aerodynamic coefficient table storage, checkout and interpolation for aircraft simulation
NASA Technical Reports Server (NTRS)
Neuman, F.; Warner, N.
1973-01-01
The set of programs described has been used for rapidly introducing, checking out and very efficiently using aerodynamic tables in complex aircraft simulations on the IBM 360. The preprocessor program reads in tables with different names and dimensions and stores them on disc storage according to the specified dimensions. The tables are read in from IBM cards in a format convenient for reducing the data from the original graphs. During table processing, new auxiliary tables are generated which are required for table cataloging and for efficient interpolation. In addition, DIMENSION statements for the tables as well as READ statements are punched so that they may be used in other programs for readout of the data from disc without chance of programming errors. A quick data-checking graphical output for all tables is provided in a separate program.
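The efficient interpolation the abstract mentions amounts to a gridded table lookup. The sketch below shows the standard approach under our own assumptions (the original FORTRAN details are not given): breakpoint vectors locate the bracketing cell, and the value is recovered by bilinear interpolation; names and data are illustrative.

```python
from bisect import bisect_right

# Hedged sketch of 2-D aerodynamic table interpolation: breakpoint
# vectors xs, ys play the role of the auxiliary lookup tables, and
# values between grid points come from bilinear interpolation.

def bilinear(xs, ys, table, x, y):
    """Interpolate table[i][j] (the value at xs[i], ys[j]) at (x, y)."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)  # bracketing cell
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])                 # cell-local coords
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# Tiny made-up table: coefficient = alpha + mach on a 2x2 grid
xs, ys = [0.0, 1.0], [0.0, 1.0]
table = [[0.0, 1.0], [1.0, 2.0]]
print(bilinear(xs, ys, table, 0.5, 0.5))   # 1.0
```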
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, A.
1986-03-10
Supercomputing software is moving into high gear, spurred by the rapid spread of supercomputers into new applications. The critical challenge is how to develop tools that will make it easier for programmers to write applications that take advantage of vectorizing in the classical supercomputer and the parallelism that is emerging in supercomputers and minisupercomputers. Writing parallel software is a challenge that every programmer must face because parallel architectures are springing up across the range of computing. Cray is developing a host of tools for programmers. Tools to support multitasking (in supercomputer parlance, multitasking means dividing up a single program to run on multiple processors) are high on Cray's agenda. On tap for multitasking is Premult, dubbed a microtasking tool. As a preprocessor for Cray's CFT77 FORTRAN compiler, Premult will provide fine-grain multitasking.
Summary of workshop on the application of VLSI for robotic sensing
NASA Technical Reports Server (NTRS)
Brooks, T.; Wilcox, B.
1984-01-01
One objective of the workshop was to identify near-, mid-, and far-term applications of VLSI for robotic sensing and sensor data preprocessing. The workshop was also to indicate areas in which VLSI technology can provide immediate and future payoffs. A third objective was to promote dialog and collaborative efforts between research communities, industry, and government. The workshop was held on March 24-25, 1983. Conclusions and recommendations are discussed. Attention is given to the need for a pixel correction chip, an image sensor with a dynamic range of 10,000, VLSI-enhanced architectures, the need for a high-density serpentine memory, an LSI tactile sensing program, an analog-signal preprocessor chip, a smart strain gage, a protective proximity envelope, a VLSI proximity sensor program, a robot-net chip, and aspects of silicon micromechanics.
Modeling of rolling element bearing mechanics. Theoretical manual
NASA Technical Reports Server (NTRS)
Merchant, David H.; Greenhill, Lyn M.
1994-01-01
This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
The minitrack tracking function description, volume 1
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.; Mango, S. A.; Roettcher, C. A.; Watters, D. L.
1973-01-01
The treatment of tracking data by the Minitrack system is described from the transmission of the nominal 136-MHz radio beacon energy from a satellite and the reception of this signal by the interferometer network through the ultimate derivation of the direction cosines (the angular coordinates of the vector from the tracking station to the spacecraft) as a function of time. Descriptions of some of the lesser-known functions operating on the system, such as the computer preprocessing program, are included. A large part of the report is devoted to the preprocessor, which provides for the data compression, smoothing, calibration correction, and ambiguity resolution of the raw interferometer phase tracking measurements teletyped from each of the worldwide Minitrack tracking stations to the central computer facility at Goddard Space Flight Center. An extensive bibliography of Minitrack hardware and theory is presented.
A thermal analysis of a spirally wound battery using a simple mathematical model
NASA Technical Reports Server (NTRS)
Evans, T. I.; White, R. E.
1989-01-01
A two-dimensional thermal model for spirally wound batteries has been developed. The governing equation of the model is the energy balance. Convective and insulated boundary conditions are used, and the equations are solved using a finite element code called TOPAZ2D. The finite element mesh is generated using a preprocessor to TOPAZ2D called MAZE. The model is used to estimate temperature profiles within a spirally wound D-size cell. The model is applied to the lithium/thionyl chloride cell because of the thermal management problems that this cell exhibits. Simplified one-dimensional models are presented that can be used to predict best and worst temperature profiles. The two-dimensional model is used to predict the regions of maximum temperature within the spirally wound cell. Normal discharge as well as thermal runaway conditions are investigated.
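The simplified one-dimensional bounds the abstract mentions can be illustrated with the classic slab solution. The sketch below, under our own assumed numbers (not cell data from the paper), gives the steady temperature in a slab of half-thickness L with uniform volumetric heat generation q, conductivity k, and convection h to ambient:

```python
# Hedged sketch of a worst-case 1-D steady-state thermal bound: a slab
# of half-thickness L (insulated centerline at x = 0, convective surface
# at x = L) with uniform volumetric generation q has
#   T(x) = T_inf + q*L/h + q*(L**2 - x**2)/(2*k).
# All parameter values below are illustrative, not from the paper.

def slab_temp(x, q=5e4, L=0.01, k=0.5, h=25.0, t_inf=25.0):
    """Steady 1-D temperature (deg C) at position x in the slab."""
    return t_inf + q * L / h + q * (L**2 - x**2) / (2 * k)

print(round(slab_temp(0.0), 1))   # centerline (maximum) temperature: 50.0
print(round(slab_temp(0.01), 1))  # surface temperature: 45.0
```

A best-case bound replaces the convective film term q*L/h with zero (isothermal surface); a 2-D finite element model like TOPAZ2D resolves what falls between the two.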
Coffee and green tea consumption is associated with insulin resistance in Japanese adults.
Pham, Ngoc Minh; Nanri, Akiko; Kochi, Takeshi; Kuwahara, Keisuke; Tsuruoka, Hiroko; Kurotani, Kayo; Akter, Shamima; Kabe, Isamu; Sato, Masao; Hayabuchi, Hitomi; Mizoue, Tetsuya
2014-03-01
Higher coffee and green tea consumption has been suggested to decrease the risk of type 2 diabetes, but their roles in insulin resistance (IR) and insulin secretion remain unclear. This study examined the association between habitual consumption of these beverages and markers of glucose metabolism in a Japanese working population. Participants were 1440 Japanese employees (1151 men and 289 women) aged 18-69 years. Consumption of coffee and green tea was ascertained via a validated brief diet history questionnaire. Multilevel linear regression was used to estimate means (95% confidence intervals) of fasting insulin, fasting plasma glucose, homeostatic model assessment of IR (HOMA-IR), homeostatic model assessment of β-cell function (HOMA-β) and glycated hemoglobin (HbA1c) with adjustment for potential confounding variables. Coffee consumption was significantly, inversely associated with HOMA-IR (P for trend = 0.03), and the association appeared to be confined to overweight subjects (BMI ≥ 25 kg/m²) (P for trend = 0.01, P for interaction = 0.08). Unexpectedly, green tea consumption was positively associated with HOMA-IR (P for trend = 0.02), though there was no dose-response relationship among daily consumers of green tea. Neither coffee nor green tea consumption was associated with HOMA-β or HbA1c. Our findings indicate that coffee consumption may be associated with decreased IR, but not with insulin secretion. The positive association between green tea consumption and IR warrants further investigation.
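The HOMA indices used above follow the standard homeostatic model assessment formulas: HOMA-IR = fasting glucose [mmol/L] × fasting insulin [µU/mL] / 22.5 and HOMA-β = 20 × fasting insulin / (fasting glucose − 3.5). A minimal sketch, with illustrative values only:

```python
# Standard HOMA formulas; the input values below are illustrative,
# not participant data from the study.

def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa_beta(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic model assessment of beta-cell function (%)."""
    return 20.0 * insulin_uu_ml / (glucose_mmol_l - 3.5)

print(round(homa_ir(5.0, 9.0), 2))    # 2.0
print(round(homa_beta(5.0, 9.0), 1))  # 120.0
```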
Bourgault, Annette M; Smith, Sherry
2004-01-01
Multi-levelled critical care competency statements were developed based on the levels of novice to expert (Benner, 1984). These competency statements provide a framework for the development of knowledge and skills specific to critical care. The purpose of this tool is to guide personal development in critical care, facilitating the assessment of individual learning needs. Competency levels are attained through the completion of performance criteria. Multi-levelled competency statements define clear expectations for the new orientee, in addition to providing a framework for the advancement of the intermediate and experienced nurse.
NASA Technical Reports Server (NTRS)
Simon, M. K.
1974-01-01
Multilevel amplitude-shift-keying (MASK) and quadrature amplitude-shift-keying (QASK) as signaling techniques for multilevel digital communications systems, and the problem of providing symbol synchronization in the receivers of such systems are discussed. A technique is presented for extracting symbol sync from an MASK or QASK signal. The scheme is a generalization of the data transition tracking loop used in PSK systems. The performance of the loop was analyzed in terms of its mean-squared jitter and its effects on the data detection process in MASK and QASK systems.
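As a minimal illustration of the multilevel signaling discussed above (leaving aside the paper's symbol-sync loop), the sketch below maps log2(M) bits per symbol onto M equally spaced MASK amplitudes and detects by nearest-level slicing; the constellation and noise level are our own illustrative choices.

```python
import numpy as np

# Hedged sketch of M-ary amplitude-shift keying (MASK): M equally spaced
# amplitude levels, nearest-level decision at the receiver. The
# generalized data-transition tracking loop from the paper is omitted.

M = 4
levels = np.arange(M) * 2 - (M - 1)          # [-3, -1, 1, 3]

def mask_detect(r):
    """Nearest-level decision for an array of received samples r."""
    return levels[np.argmin(np.abs(r[:, None] - levels[None, :]), axis=1)]

rng = np.random.default_rng(3)
tx = rng.choice(levels, 20)                  # transmitted symbols
rx = tx + rng.normal(0, 0.2, 20)             # mild additive noise
print((mask_detect(rx) == tx).mean())        # symbol accuracy; 1.0 at this noise level
```

QASK (quadrature ASK) applies the same slicing independently on the in-phase and quadrature components.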
Using multilevel models to quantify heterogeneity in resource selection
Wagner, Tyler; Diefenbach, Duane R.; Christensen, Sonja; Norton, Andrew S.
2011-01-01
Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assessing differences or changes in resource selection.
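The cross-level structure described above (individual slopes for distance-to-road that depend on home-range road density) can be sketched with a simple two-stage approximation to the multilevel model. All data below are simulated under our own assumptions; a real analysis would fit the random-coefficient model jointly.

```python
import numpy as np

# Hedged simulation: each deer's response slope for distance-to-road
# weakens as home-range road density rises. Stage 1 fits a per-deer OLS
# slope; stage 2 regresses those slopes on road density, approximating
# the multilevel model's cross-level interaction.

rng = np.random.default_rng(0)
n_deer, n_obs = 40, 50
density = rng.uniform(0, 2, n_deer)        # road density per home range
true_slope = 1.0 - 0.4 * density           # avoidance weakens with density

slopes = np.empty(n_deer)
for k in range(n_deer):
    dist = rng.uniform(0, 1, n_obs)
    use = true_slope[k] * dist + rng.normal(0, 0.05, n_obs)
    slopes[k] = np.polyfit(dist, use, 1)[0]   # stage 1: per-deer slope

gamma = np.polyfit(density, slopes, 1)        # stage 2: slope ~ density
print(gamma[0])   # should recover roughly -0.4
```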
Cherian, Jacob; Sayama, Christina M; Adesina, Adekunle M; Lam, Sandi K; Luerssen, Thomas G; Jea, Andrew
2014-09-01
Vertebral hemangiomas are common benign vascular tumors of the spine. It is very rare for these lesions to symptomatically compress neural elements. If spinal cord compression does occur, it usually involves only a single level. Multilevel vertebral hemangiomas causing symptomatic spinal cord compression have never been reported in the pediatric population to the best of our knowledge. We report the case of a 15-year-old boy presenting with progressive paraparesis due to thoracic spinal cord compression from a multilevel thoracic hemangioma (T5-T10) with epidural extension. Because of his progressive neurological deficit, he was initially treated with urgent multilevel decompressive laminectomies from T4 to T11. This was to be followed by radiotherapy for residual tumor, but the patient was unfortunately lost to follow-up. He re-presented 3 years later with recurrent paraparesis and progressive disease. This was treated with urgent radiotherapy with good response. As of 6 months follow-up, he has made an excellent neurological recovery. In this report, we present the first case of a child with multilevel vertebral hemangiomas causing symptomatic spinal cord compression and review the literature to detail the pathophysiology, management, and treatment of other cases of spinal cord compression by vertebral hemangiomas.
Towards rewritable multilevel optical data storage in single nanocrystals.
Riesen, Nicolas; Pan, Xuanzhao; Badek, Kate; Ruan, Yinlan; Monro, Tanya M; Zhao, Jiangbo; Ebendorff-Heidepriem, Heike; Riesen, Hans
2018-04-30
Novel approaches for digital data storage are imperative, as storage capacities are drastically being outpaced by the exponential growth in data generation. Optical data storage represents the most promising alternative to traditional magnetic and solid-state data storage. In this paper, a novel and energy-efficient approach to optical data storage using rare-earth-ion-doped inorganic insulators is demonstrated. In particular, the nanocrystalline alkaline earth halide BaFCl:Sm is shown to provide great potential for multilevel optical data storage. Proof-of-concept demonstrations reveal for the first time that these phosphors could be used for rewritable, multilevel optical data storage on the physical dimensions of a single nanocrystal. Multilevel information storage is based on the very efficient and reversible conversion of Sm3+ to Sm2+ ions upon exposure to UV-C light. The stored information is then read out using confocal optics by employing the photoluminescence of the Sm2+ ions in the nanocrystals, with the signal strength depending on the UV-C fluence used during the write step. The latter serves as the mechanism for multilevel data storage in the individual nanocrystals, as demonstrated in this paper. This data storage platform has the potential to be extended to 2D and 3D memory for storage densities that could potentially approach petabyte/cm3 levels.
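The capacity gain from multilevel storage is easy to quantify: a cell resolvable into L distinct levels stores log2(L) bits, and readout reduces to nearest-level quantization of the photoluminescence signal. A minimal sketch under our own assumptions (a normalized [0, 1] readout scale, made-up level count):

```python
import math

# Hedged sketch of multilevel cell capacity and readout: if the Sm2+
# photoluminescence (set by the UV-C write fluence) resolves into L
# levels, each cell stores log2(L) bits; readout quantizes a noisy
# signal back to the nearest written level.

def bits_per_cell(levels: int) -> float:
    """Bits stored by one cell with the given number of levels."""
    return math.log2(levels)

def decode(signal: float, levels: int, full_scale: float = 1.0) -> int:
    """Nearest-level quantizer for a readout in [0, full_scale]."""
    step = full_scale / (levels - 1)
    return min(levels - 1, max(0, round(signal / step)))

print(bits_per_cell(8))   # 3.0 bits per nanocrystal at 8 levels
print(decode(0.52, 8))    # readout 0.52 -> level 4 (of 0..7)
```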
How to compare cross-lagged associations in a multilevel autoregressive model.
Schuurman, Noémi K; Ferrer, Emilio; de Boer-Sonnenschein, Mieke; Hamaker, Ellen L
2016-06-01
By modeling variables over time it is possible to investigate the Granger-causal cross-lagged associations between variables. By comparing the standardized cross-lagged coefficients, the relative strength of these associations can be evaluated in order to determine important driving forces in the dynamic system. The aim of this study was twofold: first, to illustrate the added value of a multilevel multivariate autoregressive modeling approach for investigating these associations over more traditional techniques; and second, to discuss how the coefficients of the multilevel autoregressive model should be standardized for comparing the strength of the cross-lagged associations. The hierarchical structure of multilevel multivariate autoregressive models complicates standardization, because subject-based statistics or group-based statistics can be used to standardize the coefficients, and each method may result in different conclusions. We argue that in order to make a meaningful comparison of the strength of the cross-lagged associations, the coefficients should be standardized within persons. We further illustrate the bivariate multilevel autoregressive model and the standardization of the coefficients, and we show that disregarding individual differences in dynamics can prove misleading, by means of an empirical example on experienced competence and exhaustion in persons diagnosed with burnout.
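The within-person standardization argued for above scales each person's raw cross-lagged coefficient by that person's own within-person standard deviations, b_std = b · sd_i(x) / sd_i(y), rather than by group-level statistics. A minimal sketch with simulated series (the coefficient and variances below are illustrative, not the paper's estimates):

```python
import numpy as np

# Hedged sketch of within-person standardization of a cross-lagged
# coefficient: the raw coefficient b_i for x -> y is rescaled by the
# person's own within-person SDs, sd_i(x) / sd_i(y).

def standardize_within(b_i, x_i, y_i):
    """Within-person standardized cross-lagged coefficient."""
    return b_i * np.std(x_i, ddof=1) / np.std(y_i, ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(0, 2, 100)   # one person's competence series (sd ~ 2)
y = rng.normal(0, 1, 100)   # same person's exhaustion series (sd ~ 1)
b = 0.25                    # raw cross-lagged coefficient (illustrative)
print(standardize_within(b, x, y))   # ~0.5: the effect in within-person sd units
```

Using group-level SDs instead would give a different value whenever a person's variability departs from the group's, which is exactly the distortion the authors warn about.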
A collision dynamics model of a multi-level train
DOT National Transportation Integrated Search
2006-11-05
In train collisions, multi-level rail passenger vehicles can deform in modes that are different from the behavior of single level cars. The deformation in single level cars usually occurs at the front end during a collision. In one particular inciden...
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
Multilevel non-contiguous spinal injuries: incidence and patterns based on whole spine MRI.
Kanna, Rishi Mugesh; Gaike, Chandrasekar V; Mahesh, Anupama; Shetty, Ajoy Prasad; Rajasekaran, S
2016-04-01
Multi-level non-contiguous spinal injuries are not uncommon and their incidence varies from 1.6 to 77% depending on the type of imaging modality used. Delayed diagnosis and missed spinal injuries in non-contiguous spine fractures have been frequently described which can result in significant pain, deformity and neurological deficit. The efficacy of whole spine MRI in detecting asymptomatic significant vertebral fractures is not known. Consecutive spinal injury patients treated between 2011 and 2013 were retrospectively evaluated based on clinical and radiographic records. Patients' demographics, mode of injury, presence of associated injuries, clinical symptoms and the presence of neurological deficit were studied. Radiographs of the fractured region and whole spine MRI were evaluated for the presence of multi-level injuries. Among 484 patients, 95 (19.62%) patients had multilevel injuries including 86 (17.76%) with non-contiguous injuries. Five common patterns of non-contiguous spinal injuries were observed. Pattern I: cervical and thoracic--29.1%, Pattern II: thoracolumbar and lumbosacral--22.1%, Pattern III: thoracic and thoracolumbar--12.8%, Pattern IV: cervical and thoracolumbar--9.1% and Pattern V: lumbosacral and associated injuries--9.0%. The incidence of intra-regional non-contiguous injuries was 17.4%. Whole spine MRI scan detected 24 (28.6%) missed secondary injuries of which 5 were unstable. The incidence of multilevel non-contiguous spine injury using whole spine MRI imaging is 17.76%. Five different patterns of multi-level non-contiguous injuries were found with the most common pattern being the cervical and thoracic level injuries. The incidence of unstable injuries can be as high as 21% of missed secondary injuries.
Thamkunanon, Verasak
2011-08-01
Single-event multilevel soft tissue surgery in spastic diplegic children was as effective in improving ambulatory function as combined multilevel bone and soft tissue surgery. Muscle and tendon surgery alone appears sufficient to address lever-arm dysfunction of the lower extremity, and it is safe and simple, with rapid recovery. Improvement in the Gross Motor Function Classification System (GMFCS) after single-event multilevel soft tissue surgery was observed in the study groups. A retrospective review was conducted of 93 spastic diplegic children who were more than 3 years old, were able to understand communication, had at least leaning sitting ability and one-hand gross function, and had undergone single-event multilevel soft tissue surgery. GMFCS was assessed preoperatively and 6-12 months after operation, and the change in GMFCS was analyzed statistically. An average of 7 surgical sites per patient was reported; 84% of patients improved by at least one GMFCS level and 16% did not improve. Nine cases (9.7%) improved by 2 GMFCS levels and 74% improved by 1 level. The change in GMFCS level between pre- and post-surgery was statistically significant (p < 0.001), and the average improvement across all groups was 0.93 levels. The average age in the improved group (75 months) was lower than in the non-improved group (92 months), a trend toward statistical difference (p = 0.032). Single-event multilevel soft tissue surgery was effective in improving GMFCS by an average of 1 level, changing the ambulatory function of spastic diplegic CP children markedly, immediately and safely. Younger children might benefit more than older children.
Wu, Jian; Jin, Yongming; Zhang, Jun; Shao, Haiyu; Yang, Di; Chen, Jinping
2014-12-01
This was a prospective, randomized controlled clinical study. To determine the efficacy of absorbable gelatin sponge in reducing blood loss, as well as shortening the length of hospital stay in patients undergoing multilevel posterior lumbar spinal surgery. Absorbable gelatin sponge is reported to decrease postoperative drain output and the length of hospital stay after multilevel posterior cervical spine surgery. However, there is a dearth of literature on prospective study of the efficacy of absorbable gelatin sponge in reducing postoperative blood loss, as well as shortening the length of hospital stay in patients undergoing multilevel posterior lumbar spinal surgery. A total of 82 consecutive patients who underwent multilevel posterior lumbar fusion or posterior lumbar interbody fusion between June 2011 and June 2012 were prospectively randomized into one of 2 groups according to whether absorbable gelatin sponge was used for postoperative blood management. Demographic distribution, total drain output, blood transfusion rate, the length of stay, the number of readmissions, and postoperative complications were analyzed. Total drain output averaged 173 mL in the study group and 392 mL in the control group (P=0.000). Perioperative allogeneic blood transfusion rates were lower in the Gelfoam group (34.1% vs. 58.5%, P=0.046); moreover, length of stay in patients with the use of absorbable gelatin sponge (12.58 d) was significantly shorter (P=0.009) than in the control group (14.46 d). No patient developed adverse reactions attributable to the absorbable gelatin sponge. Application of absorbable gelatin sponge at the end of multilevel posterior lumbar fusion can significantly decrease postoperative drain output and length of hospital stay.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
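RankSVM's core move is to turn multi-level relevance labels into pairwise preferences (document i preferred over j when label_i > label_j) and learn a linear scoring function w·x that ranks preferred documents higher. The sketch below substitutes plain subgradient descent on the pairwise hinge loss for a real SVM solver, with a made-up toy corpus; it is an illustration of the technique, not RefMed's implementation.

```python
import numpy as np

# Hedged sketch of RankSVM-style learning to rank: minimize
# 0.5*||w||^2 + C * sum over preference pairs of hinge(1 - w.(x_i - x_j))
# by batch subgradient descent.

def rank_svm(X, labels, C=1.0, lr=0.05, epochs=300):
    w = np.zeros(X.shape[1])
    pairs = [(i, j) for i in range(len(labels)) for j in range(len(labels))
             if labels[i] > labels[j]]          # multi-level labels -> pairs
    for _ in range(epochs):
        grad = w.copy()                         # from the regularizer
        for i, j in pairs:
            d = X[i] - X[j]
            if w @ d < 1:                       # pairwise hinge is active
                grad -= C * d
        w -= lr * grad
    return w

# Toy corpus: feature 0 tracks 3-level relevance (2 > 1 > 0)
rng = np.random.default_rng(2)
X = rng.normal(0, 0.1, (9, 2))
X[:, 0] += np.repeat([0.0, 1.0, 2.0], 3)
labels = np.repeat([0, 1, 2], 3)
w = rank_svm(X, labels)
scores = X @ w
print(scores.round(2))   # higher-relevance documents should score higher
```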
Sánchez-Vizcaíno, Fernando; Perez, Andrés; Martínez-López, Beatriz; Sánchez-Vizcaíno, José Manuel
2012-08-01
Trade of animals and animal products imposes an uncertain and variable risk of exotic animal disease introduction into importing countries. Risk analysis provides importing countries with an objective, transparent, and internationally accepted method for assessing that risk. Over the last decades, European Union countries have quite frequently conducted probabilistic risk assessments to quantify the risk of rare animal disease introduction into their territories. Most probabilistic animal health risk assessments have typically been classified into one-level and multilevel binomial models. One-level models are simpler than multilevel models because they assume that animals or products originate from one single population. However, it is unknown whether such simplification may result in substantially different results compared to those obtained through the use of multilevel models. Here, data used in a probabilistic multilevel binomial model formulated to assess the risk of highly pathogenic avian influenza introduction into Spain were reanalyzed using a one-level binomial model and the outcomes were compared. An alternative ordinal model is also proposed here, which makes use of simpler assumptions and less information than required by traditional one-level and multilevel approaches. Results suggest that, at least under certain circumstances, results of the one-level and ordinal approaches are similar to those obtained using multilevel models. Consequently, we argue that, when data are insufficient to run traditional probabilistic models, the ordinal approach presented here may be a suitable alternative for ranking exporting countries in terms of the risk that they impose for the spread of rare animal diseases into disease-free countries.
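The one-level binomial release model referred to above has a closed form: if each of n imported animals is infected (and escapes detection) with probability p, and all animals are treated as coming from one population, the probability of at least one introduction is P = 1 − (1 − p)^n. A minimal sketch with illustrative numbers, not figures from the paper:

```python
# Hedged sketch of a one-level binomial introduction-risk model.
# A multilevel variant would instead draw p per source herd/country,
# relaxing the single-population assumption.

def intro_risk(p: float, n: int) -> float:
    """P(at least one infected animal among n independent imports)."""
    return 1.0 - (1.0 - p) ** n

print(intro_risk(1e-4, 5000))   # ~0.39 for 5000 imports at p = 1e-4
```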
Chakrabarti, Somsubhra; Ginnaram, Sreekanth; Jana, Surajit; Wu, Zong-Yi; Singh, Kanishk; Roy, Anisha; Kumar, Pankaj; Maikap, Siddheswar; Qiu, Jian-Tai; Cheng, Hsin-Ming; Tsai, Ling-Na; Chang, Ya-Ling; Mahapatra, Rajat; Yang, Jer-Ren
2017-07-05
Negative-voltage-modulated multi-level resistive switching with quantum conductance during staircase-type RESET, and its transport characteristics, in a Cr/BaTiOx/TiN structure have been investigated for the first time. The as-deposited amorphous BaTiOx film has been confirmed by high-resolution transmission electron microscopy. X-ray photoelectron spectroscopy shows different oxidation states of Ba in the switching material, which are responsible for more than 10 tunable resistance states obtained by varying the negative stop voltage, owing to the slow decay value of the RESET slope (217.39 mV/decade). A quantum conductance phenomenon has been observed in the staircase RESET cycle of the memory devices. By inspecting the oxidation states of Ba+ and Ba2+ through measuring H2O2 at a low concentration of 1 nM in an electrolyte/BaTiOx/SiO2/p-Si structure, the switching mechanism of each HRS level as well as the multi-level phenomenon has been explained by gradual dissolution of the oxygen vacancy filament. Along with the negative-stop-voltage-modulated multi-level behavior, current-compliance-dependent multi-level switching has also been demonstrated, and a resistance ratio of up to 2000 has been achieved even for a thin (<5 nm) switching material. By considering oxidation-reduction of the conducting filaments, the current-voltage switching curve has been simulated as well. Hence, multi-level resistive switching of the Cr/BaTiOx/TiN structure implies promising applications in high-density, multistate non-volatile memories in the near future.
Determinants of Academic Entrepreneurship Behavior: A Multilevel Model
ERIC Educational Resources Information Center
Llano, Joseph Anthony
2010-01-01
It is well established that universities encourage the acquisition and dissemination of new knowledge among university community members and beyond. However, what is less well understood is how universities encourage entrepreneurial (opportunity discovery, evaluation, and exploiting) behavior. This research investigated a multilevel model of the…
Attachment, Autonomy, and Emotional Reliance: A Multilevel Model
ERIC Educational Resources Information Center
Lynch, Martin F.
2013-01-01
This article reports a test of a multilevel model investigating how attachment security and autonomy contribute to emotional reliance, or the willingness to seek interpersonal support. Participants ("N" = 247) completed online measures of attachment, autonomy, emotional reliance, and vitality with respect to several everyday…
Phantom Effects in Multilevel Compositional Analysis: Problems and Solutions
ERIC Educational Resources Information Center
Pokropek, Artur
2015-01-01
This article combines statistical and applied research perspective showing problems that might arise when measurement error in multilevel compositional effects analysis is ignored. This article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…
GROUND WATER MONITORING AND SAMPLING: MULTI-LEVEL VERSUS TRADITIONAL METHODS – WHAT’S WHAT?
Recent studies have been conducted to evaluate different sampling techniques for determining VOC concentrations in groundwater. Samples were obtained using multi-level and traditional sampling techniques in three monitoring wells at the Raymark Superfund site in Stratford, CT. Ve...
Multilevel Assessments of Science Standards
ERIC Educational Resources Information Center
Quellmalz, Edys S.; Timms, Michael J.; Silberglitt, Matt D.
2011-01-01
The Multilevel Assessment of Science Standards (MASS) project is creating a new generation of technology-enhanced formative assessments that bring the best formative assessment practices into classrooms to transform what, how, when, and where science learning is assessed. The project is investigating the feasibility, utility, technical quality,…
Min, Ari; Park, Chang Gi; Scott, Linda D
2016-05-23
Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states.
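In the special case of a single input and a single output under constant returns to scale, DEA technical efficiency reduces to each unit's output-input ratio divided by the best ratio in the sample; the general multi-input, multi-output case solves one linear program per facility. A minimal sketch of the single-ratio case, with invented numbers, not the study's data:

```python
import numpy as np

# Hedged sketch: single-input (staffing), single-output (quality) DEA
# under constant returns to scale. Efficiency = each facility's
# output/input ratio relative to the best-practice ratio; 1.0 marks the
# frontier. All figures are illustrative.

staffing = np.array([4.0, 3.0, 5.0, 2.5])   # nursing hours per resident-day
quality  = np.array([80., 75., 85., 70.])   # composite quality score

ratio = quality / staffing
efficiency = ratio / ratio.max()
print(efficiency.round(3))   # the facility with the best ratio scores 1.0
```

A second-stage analysis would then regress these efficiency scores on organizational covariates, with facilities nested in counties and states as the article describes.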
Xiao, Bailu; Hang, Lijun; Mei, Jun; ...
2014-09-04
This paper presents a modular cascaded H-bridge multilevel photovoltaic (PV) inverter for single- or three-phase grid-connected applications. The modular cascaded multilevel topology helps to improve the efficiency and flexibility of PV systems. To realize better utilization of PV modules and maximize the solar energy extraction, a distributed maximum power point tracking (MPPT) control scheme is applied to both single-phase and three-phase multilevel inverters, which allows the independent control of each dc-link voltage. For three-phase grid-connected applications, PV mismatches may introduce unbalanced supplied power, leading to unbalanced grid current. To solve this issue, a control scheme with modulation compensation is also proposed. An experimental three-phase 7-level cascaded H-bridge inverter has been built utilizing 9 H-bridge modules (3 modules per phase). Each H-bridge module is connected to a 185 W solar panel. Simulation and experimental results are presented to verify the feasibility of the proposed approach.
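The per-module MPPT loop described above can be illustrated with the classic perturb-and-observe rule: nudge the dc-link voltage reference and keep moving in whichever direction panel power rose. This is a generic sketch with a hypothetical quadratic PV curve, not the paper's specific controller:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One perturb-and-observe MPPT update for a dc-link voltage reference.
    If power and voltage moved in the same direction, keep increasing v;
    otherwise decrease it."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + step
    return v - step

def pv_power(v):
    """Hypothetical 185 W panel: power peaks near 36 V."""
    return max(0.0, 185.0 - 0.35 * (v - 36.0) ** 2)

v_prev, v = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(100):
    p = pv_power(v)
    v, v_prev, p_prev = perturb_and_observe(v, p, v_prev, p_prev), v, p
# v now oscillates in a small limit cycle around the maximum power point
```

In the paper's distributed scheme, one such tracker would run per H-bridge module, each regulating its own dc-link.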
Developing soft skill training for salespersons to increase total sales
NASA Astrophysics Data System (ADS)
Mardatillah, A.; Budiman, I.; Tarigan, U. P. P.; Sembiring, A. C.; Hendi
2018-04-01
This research was conducted in the multilevel marketing industry. Unprofessional salesperson behavior can damage the image of the multilevel marketing industry and breed distrust toward it, which reduces company revenue as public interest in multilevel marketing products declines. In response, the researchers developed training programs to improve the sales competence of salespersons, by examining the factors that affect their sales levels. The research analyzes several factors that influence a salesperson's sales level: presentation skills, questioning ability, adaptability, technical knowledge, self-control, interaction involvement, sales environment, and intrapersonal skills. Through analysis of these factors with One-Sample T-Test and Multiple Linear Regression methods, the researchers designed a training program, consisting of basic training and special training, to increase salespersons' sales; before the training was given, salespersons were assessed for reasons of effectiveness and efficiency.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than others. As a result, it is desirable to devise codes with multilevel error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
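The UEP idea above — stronger protection for more significant symbols — can be made concrete with a deliberately crude toy: repetition coding where each bit gets its own repetition factor. This is only an illustration of unequal protection, not one of the code classes constructed in the paper:

```python
def uep_encode(msg_bits, levels):
    """Repeat bit i levels[i] times: higher repetition = stronger protection."""
    return [b for b, r in zip(msg_bits, levels) for _ in range(r)]

def uep_decode(code_bits, levels):
    """Majority-vote each repetition block back to one bit."""
    out, i = [], 0
    for r in levels:
        chunk = code_bits[i:i + r]
        i += r
        out.append(1 if 2 * sum(chunk) > r else 0)
    return out

# bit 0 is "significant" (3 copies), bit 1 is not (1 copy)
code = uep_encode([1, 0], [3, 1])   # -> [1, 1, 1, 0]
code[0] = 0                         # single error inside the protected block
recovered = uep_decode(code, [3, 1])  # the protected bit survives
```

A single channel error corrupts the unprotected bit but is voted away for the protected one, which is exactly the asymmetry UEP codes formalize with far better rate efficiency.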
A Multi-level Fuzzy Evaluation Method for Smart Distribution Network Based on Entropy Weight
NASA Astrophysics Data System (ADS)
Li, Jianfang; Song, Xiaohui; Gao, Fei; Zhang, Yu
2017-05-01
Smart distribution networks are considered the future trend of distribution networks. In order to comprehensively evaluate the level of smart distribution construction and to guide its practice, a multi-level fuzzy evaluation method based on entropy weight is proposed. Firstly, addressing both the conventional characteristics of distribution networks and new characteristics of smart distribution networks such as self-healing and interaction, a multi-level evaluation index system is established covering power supply capability, power quality, economy, reliability, and interaction. Then, a combination weighting method based on the Delphi method and the entropy weight method is put forward, which takes into account not only the importance of each evaluation index in the experts' subjective view, but also the objective information differences among the index values. Thirdly, a multi-level evaluation method based on fuzzy theory is put forward. Lastly, an example is conducted based on the statistical data of some cities' distribution networks, and the evaluation method is shown to be effective and rational.
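The entropy-weight step above assigns larger weights to indices whose values differ more across the evaluated alternatives. A minimal sketch of the standard entropy-weight computation (the index matrix here is hypothetical):

```python
import numpy as np

def entropy_weights(M):
    """Entropy weights for an index matrix M of shape
    (n_alternatives, n_criteria) with positive entries.
    Criteria whose values vary more across alternatives get larger weight."""
    P = M / M.sum(axis=0)                    # column-wise normalized proportions
    n = M.shape[0]
    logP = np.log(P, where=P > 0, out=np.zeros_like(P))
    E = -(P * logP).sum(axis=0) / np.log(n)  # entropy of each criterion, in [0, 1]
    d = 1.0 - E                              # degree of divergence
    return d / d.sum()                       # weights summing to 1

# two districts scored on two indices; index 0 is identical for both,
# so it carries no discriminating information and gets weight ~0
w = entropy_weights(np.array([[1.0, 2.0],
                              [1.0, 4.0]]))
```

In the paper's combination scheme these objective weights would then be blended with subjective Delphi weights before the fuzzy evaluation.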
Weeks, Margaret R; Convey, Mark; Dickson-Gomez, Julia; Li, Jianghong; Radda, Kim; Martinez, Maria; Robles, Eduardo
2009-06-01
Peer-delivered, socially oriented HIV prevention intervention designs are increasingly popular for addressing broader contexts of health risk beyond a focus on individual factors. Such interventions have the potential to affect multiple social levels of risk and change, including the individual, network, and community levels, and reflect social ecological principles of interaction across social levels over time. The iterative feedback dynamic generated by this multi-level effect increases the likelihood of sustained health improvement initiated by those trained to deliver the peer intervention. The Risk Avoidance Partnership (RAP), conducted with heroin and cocaine/crack users in Hartford, Connecticut, exemplified this intervention design and illustrated the multi-level effect on drug users' risk and harm reduction at the individual, social network, and larger community levels. Implications of the RAP program for designing effective prevention programs and for analyzing long-term change to reduce HIV transmission among high-risk groups are discussed from this ecological and multi-level intervention perspective.
Multilevel Poisson regression modelling for determining factors of dengue fever cases in Bandung
NASA Astrophysics Data System (ADS)
Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani
2017-03-01
Dengue fever is caused by the dengue virus, a serotyped virus of the genus Flavivirus, and is transmitted through the bites of Aedes aegypti mosquitoes infected with the virus. The study was conducted in 151 villages in Bandung. Health analysts believe that two kinds of factors affect dengue cases: internal (individual) factors and external (environmental) factors. The data used in this research are hierarchical, and the appropriate method for modelling hierarchical data is the multilevel method; here, level 1 is the village and level 2 is the sub-district. According to the exploratory data analysis, the suitable multilevel specification is the random intercept model. A Penalized Quasi-Likelihood (PQL) approach to multilevel Poisson regression is a proper analysis for determining the factors affecting dengue cases in the city of Bandung. The Clean and Healthy Behavior factor at the village level has an effect on the number of dengue fever cases in Bandung; the factor at the sub-district level has no effect.
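The two-level count structure above (villages nested in sub-districts, Poisson outcome) can be sketched on simulated data. As a rough stand-in for the paper's random-intercept PQL fit, this toy treats the sub-district intercepts as fixed effects and fits the Poisson regression by Fisher scoring; all variable names and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_vil = 5, 30                           # sub-districts, villages per sub-district
u = rng.normal(0.0, 0.5, n_sub)                # sub-district intercepts
x = rng.normal(size=n_sub * n_vil)             # village-level covariate (e.g. hygiene score)
g = np.repeat(np.arange(n_sub), n_vil)         # sub-district label of each village
y = rng.poisson(np.exp(1.0 + 0.4 * x + u[g]))  # simulated case counts, true slope 0.4

# design matrix: intercept, covariate, sub-district dummies (first dropped)
Z = np.column_stack([np.ones_like(x), x] +
                    [(g == k).astype(float) for k in range(1, n_sub)])

beta = np.zeros(Z.shape[1])
beta[0] = np.log(y.mean())                     # stable starting point
for _ in range(50):                            # Fisher scoring / IRLS for Poisson GLM
    mu = np.exp(Z @ beta)                      # log link: mean = exp(Z beta)
    step = np.linalg.solve(Z.T @ (mu[:, None] * Z), Z.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break
# beta[1] recovers the village-level covariate effect
```

A genuine PQL fit would additionally shrink the sub-district intercepts toward zero under a normal random-effects distribution rather than estimating them freely.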
Cascaded H-bridge multilevel inverter for renewable energy generation
NASA Astrophysics Data System (ADS)
Pandey, Ravikant; Nath Tripathi, Ravi; Hanamoto, Tsuyoshi
2016-04-01
In this paper, a cascaded H-bridge multilevel inverter (CHBMLI) is investigated for application to renewable energy generation. Energy sources such as solar, wind, hydro, and biomass, or combinations of these, can be harnessed as alternative sources for renewable energy generation. These renewable energy sources have different electrical characteristics, such as DC or AC levels, so it is challenging to use the generated power by connecting it directly to the grid or a load; a specific power electronics converter is required as an interface for conditioning the generated power. The multilevel inverter can be utilized for renewable energy sources in two different modes: power generation mode (stand-alone) and compensator mode (STATCOM). The performance of the multilevel inverter has been compared with a two-level inverter. In power generation mode, the CHBMLI supplies the active and reactive power required by the different loads. For operation in compensator mode, indirect current control based on synchronous reference frame theory (SRFT) ensures that the grid operates at unity power factor and compensates harmonics and reactive power.
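The output-quality advantage of a CHBMLI over a two-level inverter comes from synthesizing a staircase that tracks the sinusoidal reference. As one simple illustration (nearest-level modulation, a common multilevel technique, not necessarily the modulation used in this paper), three series H-bridges per phase give a 7-level staircase:

```python
import numpy as np

def nearest_level(v_ref, n_cells, v_dc=1.0):
    """Nearest-level modulation for a cascaded H-bridge phase leg:
    n_cells bridges in series yield 2*n_cells + 1 output levels of step v_dc."""
    return v_dc * np.clip(np.round(v_ref / v_dc), -n_cells, n_cells)

t = np.linspace(0.0, 0.02, 1000, endpoint=False)  # one 50 Hz fundamental cycle
v_ref = 3.0 * np.sin(2 * np.pi * 50 * t)          # reference spanning all 3 cells
v_out = nearest_level(v_ref, n_cells=3)           # 7-level staircase waveform
```

Each additional cell halves nothing for free, of course: more levels mean more devices and dc links, which is the efficiency/flexibility trade-off the modular topology manages.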
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
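The core decomposition above splits variation into between-subject and within-subject covariance operators and eigendecomposes each. A dense-data sketch of that idea on simulated two-level curves (the sparse-sampling machinery that is the paper's actual contribution is omitted; eigenfunctions and score variances here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_visit, T = 40, 4, 50
t = np.linspace(0.0, 1.0, T)
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)        # subject-level eigenfunction
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)        # visit-level eigenfunction
xi = rng.normal(0.0, 2.0, n_subj)                # subject scores (larger variance)
zeta = rng.normal(0.0, 1.0, (n_subj, n_visit))   # visit scores
X = xi[:, None, None] * phi1 + zeta[:, :, None] * phi2  # curves: (subj, visit, T)

Xbar = X.mean(axis=1)                            # subject-mean curves
K_between = np.cov(Xbar.T)                       # between-subject covariance (T x T)
D = (X - Xbar[:, None, :]).reshape(-1, T)        # visit deviations from subject mean
K_within = D.T @ D / (n_subj * (n_visit - 1))    # within-subject covariance (T x T)

w1, v1 = np.linalg.eigh(K_between)               # eigh: ascending eigenvalues
w2, v2 = np.linalg.eigh(K_within)
# v1[:, -1] and v2[:, -1] recover phi1 and phi2 (up to sign and scale)
```

In the sparse setting the paper targets, these covariances cannot be formed from raw curves and must instead be estimated from the scattered observations before the same eigendecompositions are applied.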
Andrade, Fernando H.
2014-01-01
A growing body of literature has linked substance use and academic performance, exploring substance use as a predictor of academic performance or vice versa. This study uses a different approach, conceptualizing substance use and academic performance as parallel outcomes and exploring two topics: their multilevel-longitudinal association and school contextual effects on both outcomes. Using multilevel confirmatory factor analysis and multilevel-longitudinal analyses, the empirical estimates relied on 7843 students nested in 114 schools (Add Health study). The main finding suggests that the correlation between substance use and academic performance was positive at the school level, in contrast to the negative relationship at the individual level. Additional findings suggest a positive effect of a school risk factor on substance use and a positive effect of academic pressure on academic performance. These findings represent a contribution to our understanding of how schools could affect the relationship between academic performance and substance use. PMID:25057764
Mills, Melinda; Begall, Katia
2010-03-01
Comparative research on the preferred sex of children in Western societies has generally focused on women only and ignored the role of gender equity and the need for children's economic support in old age. A multilevel analysis extends existing research by examining, for both men and women and across 24 European countries, the effect of the preferred sex-composition of offspring on whether parents have or intend to have a third child. Using the European Social Survey (2004/5), a multilevel (random coefficient) ordered logit regression of that intention (N = 3,323) and a binary logistic multilevel model of the transition to a third child (N = 6,502) demonstrate the presence of a mixed-sex preference. In countries with a high risk of poverty in old age, a preference for sons is found, particularly for men. In societies where there is lower gender equity, both men and women have a significant preference for boys.