Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.
2009-01-01
Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right half plane zero associated with flight path angle) are examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimizations is also examined. Both static and dynamic considerations are addressed. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.
NASA Astrophysics Data System (ADS)
Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.
2009-04-01
The design of geothermal systems such as aquifer thermal energy storage systems (ATES) must account for a comprehensive characterisation of all relevant parameters considered for the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters and its distribution determines not only the technical design but also the economic viability of such systems. Hence, the knowledge of the spatial distribution of these parameters is essential for a successful design and operation of such systems. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling Site in Germany. In this site a long-term thermal tracer test (> 1 year) was performed. On this open system the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible that yield to a detailed characterisation both of the hydraulic and thermal relevant parameters. This poster shows the preliminary results obtained for the Esseling Site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the TRT and that parameter heterogeneity must be taken into account.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown what are the relevant physical mechanisms that must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant/irrelevant vary among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were ommitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system-a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown what are the relevant physical mechanisms that must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant/irrelevant vary among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were ommitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model’s discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system–a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060
User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2
NASA Technical Reports Server (NTRS)
Lee, F. C.
1980-01-01
A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices, for buck, boost, and buck/boost switching regulators which are relevant to SCM design considerations are included to facilitate frequent references.
Mühlfeld, Christian; Ochs, Matthias
2013-08-01
Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.
Matching Learning Style Preferences with Suitable Delivery Methods on Textile Design Programmes
ERIC Educational Resources Information Center
Sayer, Kate; Studd, Rachel
2006-01-01
Textile design is a subject that encompasses both design and technology; aesthetically pleasing patterns and forms must be set within technical parameters to create successful fabrics. When considering education methods in design programmes, identifying the most relevant learning approach is key to creating future successes. Yet are the most…
Tunable Collagen I Hydrogels for Engineered Physiological Tissue Micro-Environments
Antoine, Elizabeth E.; Vlachos, Pavlos P.; Rylander, Marissa N.
2015-01-01
Collagen I hydrogels are commonly used to mimic the extracellular matrix (ECM) for tissue engineering applications. However, the ability to design collagen I hydrogels similar to the properties of physiological tissues has been elusive. This is primarily due to the lack of quantitative correlations between multiple fabrication parameters and resulting material properties. This study aims to enable informed design and fabrication of collagen hydrogels in order to reliably and reproducibly mimic a variety of soft tissues. We developed empirical predictive models relating fabrication parameters with material and transport properties. These models were obtained through extensive experimental characterization of these properties, which include compression modulus, pore and fiber diameter, and diffusivity. Fabrication parameters were varied within biologically relevant ranges and included collagen concentration, polymerization pH, and polymerization temperature. The data obtained from this study elucidates previously unknown fabrication-property relationships, while the resulting equations facilitate informed a priori design of collagen hydrogels with prescribed properties. By enabling hydrogel fabrication by design, this study has the potential to greatly enhance the utility and relevance of collagen hydrogels in order to develop physiological tissue microenvironments for a wide range of tissue engineering applications. PMID:25822731
Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data
ERIC Educational Resources Information Center
Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2015-01-01
A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels (?"Levels") model and a model involving a quadratic function…
Electromagnetic sunscreen model: design of experiments on particle specifications.
Lécureux, Marie; Deumié, Carole; Enoch, Stefan; Sergent, Michelle
2015-10-01
We report a numerical study on sunscreen design and optimization. Thanks to the combined use of electromagnetic modeling and design of experiments, we are able to screen the most relevant parameters of mineral filters and to optimize sunscreens. Several electromagnetic modeling methods are used depending on the type of particles, density of particles, etc. Both the sun protection factor (SPF) and the UVB/UVA ratio are considered. We show that the design of experiments' model should include interactions between materials and other parameters. We conclude that the material of the particles is a key parameter for the SPF and the UVB/UVA ratio. Among the materials considered, none is optimal for both. The SPF is also highly dependent on the size of the particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Madison E.
Opacity is a critical parameter in the simulation of radiation transport in systems such as inertial con nement fusion capsules and stars. The resolution of current disagreements between solar models and helioseismological observations would bene t from experimental validation of theoretical opacity models. Overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.
Women's Later Life Career Development: Looking through the Lens of the Kaleidoscope Career Model
ERIC Educational Resources Information Center
August, Rachel A.
2011-01-01
This study explores the relevance of the Kaleidoscope Career Model (KCM) to women's later life career development. Qualitative interview data were gathered from 14 women in both the "truly" late career and bridge employment periods using a longitudinal design. The relevance of authenticity, balance, and challenge--central parameters in the KCM--is…
NASA Astrophysics Data System (ADS)
Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den
2016-08-01
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.
Anand, T S; Sujatha, S
2017-08-01
Polycentric knees for transfemoral prostheses have a variety of geometries, but a survey of literature shows that there are few ways of comparing their performance. Our objective was to present a method for performance comparison of polycentric knee geometries and design a new geometry. In this work, we define parameters to compare various commercially available prosthetic knees in terms of their stability, toe clearance, maximum flexion, and so on and optimize the parameters to obtain a new knee design. We use the defined parameters and optimization to design a new knee geometry that provides the greater stability and toe clearance necessary to navigate uneven terrain which is typically encountered in developing countries. Several commercial knees were compared based on the defined parameters to determine their suitability for uneven terrain. A new knee was designed based on optimization of these parameters. Preliminary user testing indicates that the new knee is very stable and easy to use. The methodology can be used for better knee selection and design of more customized knee geometries. Clinical relevance The method provides a tool to aid in the selection and design of polycentric knees for transfemoral prostheses.
1993-04-01
not to be construed as an official Department of the Army position unless so designated by other authorizing documents. REPORT DOCUMENTATION PAGE...parameter sensitivity studies, and test procedure design . An experimental system providing reaL data on the parametters relevant to the calculations has been...experimental program was designed to exploit as much of the existing capabilities of the Ventilation Kinetics group as possible while keeping in mind
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
Parameter design considerations for an oscillator IR-FEL
NASA Astrophysics Data System (ADS)
Jia, Qi-Ka
2017-01-01
An infrared oscillator FEL user facility will be built at the National Synchrotron Radiation Laboratory at in Hefei, China. In this paper, the parameter design of the oscillator FEL is discussed, and some original relevant approaches and expressions are presented. Analytic formulae are used to estimate the optical field gain and saturation power for the preliminary design. By considering both physical and technical constraints, the relation of the deflection parameter K to the undulator period is analyzed. This helps us to determine the ranges of the magnetic pole gap, the electron energy and the radiation wavelength. The relations and design of the optical resonator parameters are analyzed. Using dimensionless quantities, the interdependences between the radii of curvature of the resonator mirror and the various parameters of the optical resonator are clearly demonstrated. The effect of the parallel-plate waveguide is analyzed for the far-infrared oscillator FEL. The condition of the necessity of using a waveguide and the modified filling factor in the case of the waveguide are given, respectively. Supported by National Nature Science Foundation of China (21327901, 11375199)
Characterisation of the physico-mechanical parameters of MSW.
Stoltz, Guillaume; Gourc, Jean-Pierre; Oxarango, Laurent
2010-01-01
Following the basics of soil mechanics, the physico-mechanical behaviour of municipal solid waste (MSW) can be defined through constitutive relationships which are expressed with respect to three physical parameters: the dry density, the porosity and the gravimetric liquid content. In order to take into account the complexity of MSW (grain size distribution and heterogeneity larger than for conventional soils), a special oedometer was designed to carry out laboratory experiments. This apparatus allowed a coupled measurement of physical parameters for MSW settlement under stress. The studied material was a typical sample of fresh MSW from a French landfill. The relevant physical parameters were measured using a gas pycnometer. Moreover, the compressibility of MSW was studied with respect to the initial gravimetric liquid content. Proposed methods to assess the set of three physical parameters allow a relevant understanding of the physico-mechanical behaviour of MSW under compression, specifically, the evolution of the limit liquid content. The present method can be extended to any type of MSW. 2010 Elsevier Ltd. All rights reserved.
Nanoparticle Superlattice Engineering with DNA
NASA Astrophysics Data System (ADS)
Macfarlane, Robert J.; Lee, Byeongdu; Jones, Matthew R.; Harris, Nadine; Schatz, George C.; Mirkin, Chad A.
2011-10-01
A current limitation in nanoparticle superlattice engineering is that the identities of the particles being assembled often determine the structures that can be synthesized. Therefore, specific crystallographic symmetries or lattice parameters can only be achieved using specific nanoparticles as building blocks (and vice versa). We present six design rules that can be used to deliberately prepare nine distinct colloidal crystal structures, with control over lattice parameters on the 25- to 150-nanometer length scale. These design rules outline a strategy to independently adjust each of the relevant crystallographic parameters, including particle size (5 to 60 nanometers), periodicity, and interparticle distance. As such, this work represents an advance in synthesizing tailorable macroscale architectures comprising nanoscale materials in a predictable fashion.
Ambulatory instrumentation suitable for long-term monitoring of cattle health.
Schoenig, S A; Hildreth, T S; Nagl, L; Erickson, H; Spire, M; Andresen, D; Warren, S
2004-01-01
The benefits of real-time health diagnoses of cattle are potentially tremendous. Early detection of transmissible disease, whether from natural or terrorist events, could help to avoid huge financial losses in the agriculture industry while also improving meat quality. This work discusses physiological and behavioral parameters relevant to cattle state-of-health assessment. These parameters, along with a potentially harsh monitoring environment, drive a set of design considerations that must be addressed when building systems to acquire long-term, real-time measurements in the field. A prototype system is presented that supports the measurement of suitable physiologic parameters and begins to address the design constraints for continuous state-of-health determination in free-roaming cattle.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing request for zero expansion glass ceramic ZERODUR substrates being capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed to be not feasible. New data have been collected with numbers per set large enough to enable tests on the applicability of the three-parameter Weibull distribution. This distribution revealed to provide much better fitting of the data. Moreover it delivers a lower threshold value, which means a minimum value for breakage stress, allowing of removing statistical uncertainty by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics as have been proven to be reliable with proof test qualifications of delicate structures made from brittle materials enable including fatigue due to stress corrosion in a straight forward way. With the formulae derived, either lifetime can be calculated from given stress or allowable stress from minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and no longer subject to statistical uncertainty.
Rational Design of Glucose-Responsive Insulin Using Pharmacokinetic Modeling.
Bakh, Naveed A; Bisker, Gili; Lee, Michael A; Gong, Xun; Strano, Michael S
2017-11-01
A glucose responsive insulin (GRI) is a therapeutic that modulates its potency, concentration, or dosing of insulin in relation to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. Current GRI design lacks a theoretical basis on which to base fundamental design parameters such as glucose reactivity, dissociation constant or potency, and in vivo efficacy. In this work, an approach to mathematically model the relevant parameter space for effective GRIs is induced, and design rules for linking GRI performance to therapeutic benefit are developed. Well-developed pharmacokinetic models of human glucose and insulin metabolism coupled to a kinetic model representation of a freely circulating GRI are used to determine the desired kinetic parameters and dosing for optimal glycemic control. The model examines a subcutaneous dose of GRI with kinetic parameters in an optimal range that results in successful glycemic control within prescribed constraints over a 24 h period. Additionally, it is demonstrated that the modeling approach can find GRI parameters that enable stable glucose levels that persist through a skipped meal. The results provide a framework for exploring the parameter space of GRIs, potentially without extensive, iterative in vivo animal testing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reference clock parameters for digital communications systems applications
NASA Technical Reports Server (NTRS)
Kartaschoff, P.
1981-01-01
The basic parameters relevant to the design of network timing systems describe the random and systematic time departures of the system elements, i.e., master (or reference) clocks, transmission links, and other clocks controlled over the links. The quantitative relations between these parameters were established and illustrated by means of numerical examples based on available measured data. The examples were limited to a simple PLL control system but the analysis can eventually be applied to more sophisticated systems at the cost of increased computational effort.
Scarduelli, Lucia; Giacchini, Roberto; Parenti, Paolo; Migliorati, Sonia; Di Brisco, Agnese Maria; Vighi, Marco
2017-11-01
Biomarkers are widely used in ecotoxicology as indicators of exposure to toxicants. However, their ability to provide ecologically relevant information remains controversial. One of the major problems is understanding whether the measured responses are determined by stress factors or lie within the natural variability range. In a previous work, the natural variability of enzymatic levels in invertebrates sampled in pristine rivers was proven to be relevant across both space and time. In the present study, the experimental design was improved by considering different life stages of the selected taxa and by measuring more environmental parameters. The experimental design considered sampling sites in 2 different rivers, 8 sampling dates covering the whole seasonal cycle, 4 species from 3 different taxonomic groups (Plecoptera, Perla grandis; Ephemeroptera, Baetis alpinus and Epeorus alpicula; Tricoptera, Hydropsyche pellucidula), different life stages for each species, and 4 enzymes (acetylcholinesterase, glutathione S-transferase, alkaline phosphatase, and catalase). Biomarker levels were related to environmental (physicochemical) parameters to verify any kind of dependence. Data were statistically elaborated using hierarchical multilevel Bayesian models. Natural variability was found to be relevant across both space and time. The results of the present study proved that care should be paid when interpreting biomarker results. Further research is needed to better understand the dependence of the natural variability on environmental parameters. Environ Toxicol Chem 2017;36:3158-3167. © 2017 SETAC. © 2017 SETAC.
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Balancing novelty with confined chemical space in modern drug discovery.
Medina-Franco, José L; Martinez-Mayorga, Karina; Meurice, Nathalie
2014-02-01
The concept of chemical space has broad applications in drug discovery. In response to the needs of drug discovery campaigns, different approaches are followed to efficiently populate, mine and select relevant chemical spaces that overlap with biologically relevant chemical spaces. This paper reviews major trends in current drug discovery and their impact on the mining and population of chemical space. We also survey different approaches to develop screening libraries with confined chemical spaces balancing physicochemical properties. In this context, the confinement is guided by criteria that can be divided in two broad categories: i) library design focused on a relevant therapeutic target or disease and ii) library design focused on the chemistry or a desired molecular function. The design and development of chemical libraries should be associated with the specific purpose of the library and the project goals. The high complexity of drug discovery and the inherent imperfection of individual experimental and computational technologies prompt the integration of complementary library design and screening approaches to expedite the identification of new and better drugs. Library design approaches including diversity-oriented synthesis, biological-oriented synthesis or combinatorial library design, to name a few, and the design of focused libraries driven by target/disease, chemical structure or molecular function are more efficient if they are guided by multi-parameter optimization. In this context, consideration of pharmaceutically relevant properties is essential for balancing novelty with chemical space in drug discovery.
Code of Federal Regulations, 2011 CFR
2011-01-01
... transmission line may be designed to serve additional residential development. The environmental impacts of... attainment within the confines of relevant constraints. The test of practicability, therefore, depends upon..., economic, legal, social and technological parameters. This test, however, is not limited by the temporary...
The geomagnetically trapped radiation environment: A radiological point of view
NASA Technical Reports Server (NTRS)
Holly, F. E.
1972-01-01
The regions of naturally occurring, geomagnetically trapped radiation are briefly reviewed in terms of physical parameters such as; particle types, fluxes, spectrums, and spatial distributions. The major emphasis is placed upon a description of this environment in terms of the radiobiologically relevant parameters of absorbed dose and dose-rate and a discussion of the radiological implications in terms of the possible impact on space vehicle design and mission planning.
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance
Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy
2011-01-01
The optimization of the coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially for cancer photothermal therapy, considering both the continuous improvement of their fabrication and the increasing requirement of efficiency. The efficiency of the coupling between illumination with such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such nanodevice, to evaluate tolerances (or uncertainty) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus on the relevant parameters of the engineering process for which the absorbed energy is the most dependant. The Monte-Carlo method confirms that the best burning efficiency are obtained for hollow nanospheres and exhibit the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied in design and development of new embedded coated nanomaterials used in biomedicine applications. PMID:21698021
NASA Technical Reports Server (NTRS)
Halldane, J. F.
1972-01-01
Technology is considered as a culture for changing a physical world and technology assessment questions the inherent cultural capability to modify power and material in support of living organisms. A comprehensive goal-parameter-synthesis-criterion specification is presented as a basis for a rational assessment of technology. The thesis queries the purpose of the assessed problems, the factors considered, the relationships between factors, and the values assigned those factors to accomplish the appropriate purpose. Stationary and sequential evaluation of enviro-organismic systems are delegated to the responsible personalities involved in design; from promoter/designer through contractor to occupant. Discussion includes design goals derived from organismic factors, definitions of human responses which establish viable criteria and relevant correlation models, linking stimulus parameters, and parallel problem-discipline centered design organization. A consistent concept of impedance, as a degradation in the performance of a specified parameter, is introduced to overcome the arbitrary inoperative connotations of terms like noise, discomfort, and glare. Applications of the evaluative specification are illustrated through design problems related to auditory impedance and sound distribution.
Sensor module design and forward and inverse kinematics analysis of 6-DOF sorting transferring robot
NASA Astrophysics Data System (ADS)
Zhou, Huiying; Lin, Jiajian; Liu, Lei; Tao, Meng
2017-09-01
To meet the demand of high strength express sorting, it is significant to design a robot with multiple degrees of freedom that can sort and transfer. This paper uses infrared sensor, color sensor and pressure sensor to receive external information, combine the plan of motion path in advance and the feedback information from the sensors, then write relevant program. In accordance with these, we can design a 6-DOF robot that can realize multi-angle seizing. In order to obtain characteristics of forward and inverse kinematics, this paper describes the coordinate directions and pose estimation by the D-H parameter method and closed solution. On the basis of the solution of forward and inverse kinematics, geometric parameters of links and link parameters are optimized in terms of application requirements. In this way, this robot can identify route, sort and transfer.
Adaptive Transcutaneous Power Transfer to Implantable Devices: A State of the Art Review
Bocan, Kara N.; Sejdić, Ervin
2016-01-01
Wireless energy transfer is a broad research area that has recently become applicable to implantable medical devices. Wireless powering of and communication with implanted devices is possible through wireless transcutaneous energy transfer. However, designing wireless transcutaneous systems is complicated due to the variability of the environment. The focus of this review is on strategies to sense and adapt to environmental variations in wireless transcutaneous systems. Adaptive systems provide the ability to maintain performance in the face of both unpredictability (variation from expected parameters) and variability (changes over time). Current strategies in adaptive (or tunable) systems include sensing relevant metrics to evaluate the function of the system in its environment and adjusting control parameters according to sensed values through the use of tunable components. Some challenges of applying adaptive designs to implantable devices are challenges common to all implantable devices, including size and power reduction on the implant, efficiency of power transfer and safety related to energy absorption in tissue. Challenges specifically associated with adaptation include choosing relevant and accessible parameters to sense and adjust, minimizing the tuning time and complexity of control, utilizing feedback from the implanted device and coordinating adaptation at the transmitter and receiver. PMID:26999154
Adaptive Transcutaneous Power Transfer to Implantable Devices: A State of the Art Review.
Bocan, Kara N; Sejdić, Ervin
2016-03-18
Wireless energy transfer is a broad research area that has recently become applicable to implantable medical devices. Wireless powering of and communication with implanted devices is possible through wireless transcutaneous energy transfer. However, designing wireless transcutaneous systems is complicated due to the variability of the environment. The focus of this review is on strategies to sense and adapt to environmental variations in wireless transcutaneous systems. Adaptive systems provide the ability to maintain performance in the face of both unpredictability (variation from expected parameters) and variability (changes over time). Current strategies in adaptive (or tunable) systems include sensing relevant metrics to evaluate the function of the system in its environment and adjusting control parameters according to sensed values through the use of tunable components. Some challenges of applying adaptive designs to implantable devices are challenges common to all implantable devices, including size and power reduction on the implant, efficiency of power transfer and safety related to energy absorption in tissue. Challenges specifically associated with adaptation include choosing relevant and accessible parameters to sense and adjust, minimizing the tuning time and complexity of control, utilizing feedback from the implanted device and coordinating adaptation at the transmitter and receiver.
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
A design for a ground-based data management system
NASA Technical Reports Server (NTRS)
Lambird, Barbara A.; Lavine, David
1988-01-01
An initial design for a ground-based data management system which includes intelligent data abstraction and cataloging is described. The large quantity of data on some current and future NASA missions leads to significant problems in providing scientists with quick access to relevant data. Human screening of data for potential relevance to a particular study is time-consuming and costly. Intelligent databases can provide automatic screening when given relevent scientific parameters and constraints. The data management system would provide, at a minimum, information of availability of the range of data, the type available, specific time periods covered together with data quality information, and related sources of data. The system would inform the user about the primary types of screening, analysis, and methods of presentation available to the user. The system would then aid the user with performing the desired tasks, in such a way that the user need only specify the scientific parameters and objectives, and not worry about specific details for running a particular program. The design contains modules for data abstraction, catalog plan abstraction, a user-friendly interface, and expert systems for data handling, data evaluation, and application analysis. The emphasis is on developing general facilities for data representation, description, analysis, and presentation that will be easily used by scientists directly, thus bypassing the knowledge acquisition bottleneck. Expert system technology is used for many different aspects of the data management system, including the direct user interface, the interface to the data analysis routines, and the analysis of instrument status.
Westine, Carl D; Spybrook, Jessaca; Taylor, Joseph A
2013-12-01
Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates using mathematics and reading. Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11. Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Science educational researchers should utilize design parameters derived from science achievement outcomes. © The Author(s) 2014.
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
NASA Technical Reports Server (NTRS)
Toft, Mark R.
1994-01-01
This is a follow-up of studies of the NASA standard 50 AH cell presented at the NASA battery workshop each of the last two years. This is a dynamic study. Data trends continue to be developed and analyzed for their utility in judging NiCd performance. The trends and parameters presented here may bear relevance to many designs of conventional NiCd batteries, not just the 50 AH and 60 AH sizes.
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The time independent failure response of these materials is focussed on and a reliability analysis is presented associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and which serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of the size of the component are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
Computational fluid dynamics (CFD) in the design of a water-jet-drive system
NASA Technical Reports Server (NTRS)
Garcia, Roberto
1994-01-01
NASA/Marshall Space Flight Center (MSFC) has an ongoing effort to transfer to industry the technologies developed at MSFC for rocket propulsion systems. The Technology Utilization (TU) Office at MSFC promotes these efforts and accepts requests for assistance from industry. One such solicitation involves a request from North American Marine Jet, Inc. (NAMJ) for assistance in the design of a water-jet-drive system to fill a gap in NAMJ's product line. NAMJ provided MSFC with a baseline axial flow impeller design as well as the relevant working parameters (rpm, flow rate, etc.). This baseline design was analyzed using CFD, and significant deficiencies identified. Four additional analyses were performed involving MSFC changes to the geometric and operational parameters of the baseline case. Subsequently, the impeller was redesigned by NAMJ and analyzed by MSFC. This new configuration performs significantly better than the baseline design. Similar cooperative activities are planned for the design of the jet-drive inlet.
Applying Probabilistic Decision Models to Clinical Trial Design
Smith, Wade P; Phillips, Mark H
2018-01-01
Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
Development of flat-plate solar collectors for the heating and cooling of buildings
NASA Technical Reports Server (NTRS)
Ramsey, J. W.; Borzoni, J. T.; Holland, T. H.
1975-01-01
The relevant design parameters in the fabrication of a solar collector for heating liquids were examined. The objective was to design, fabricate, and test a low-cost, flat-plate solar collector with high collection efficiency, high durability, and requiring little maintenance. Computer-aided math models of the heat transfer processes in the collector assisted in the design. The preferred physical design parameters were determined from a heat transfer standpoint and the absorber panel configuration, the surface treatment of the absorber panel, the type and thickness of insulation, and the number, spacing and material of the covers were defined. Variations of this configuration were identified, prototypes built, and performance tests performed using a solar simulator. Simulated operation of the baseline collector configuration was combined with insolation data for a number of locations and compared with a predicted load to determine the degree of solar utilization.
NASA Astrophysics Data System (ADS)
Halder, A.; Miller, F. J.
1982-03-01
A probabilistic model to evaluate the risk of liquefaction at a site and to limit or eliminate damage during earthquake induced liquefaction is proposed. The model is extended to consider three dimensional nonhomogeneous soil properties. The parameters relevant to the liquefaction phenomenon are identified, including: (1) soil parameters; (2) parameters required to consider laboratory test and sampling effects; and (3) loading parameters. The fundamentals of risk based design concepts pertient to liquefaction are reviewed. A detailed statistical evaluation of the soil parameters in the proposed liquefaction model is provided and the uncertainty associated with the estimation of in situ relative density is evaluated for both direct and indirect methods. It is found that the liquefaction potential the uncertainties in the load parameters could be higher than those in the resistance parameters.
Probabilistic seismic hazard characterization and design parameters for the Pantex Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernreuter, D. L.; Foxall, W.; Savy, J. B.
1998-10-19
The Hazards Mitigation Center at Lawrence Livermore National Laboratory (LLNL) updated the seismic hazard and design parameters at the Pantex Plant. The probabilistic seismic hazard (PSH) estimates were first updated using the latest available data and knowledge from LLNL (1993, 1998), Frankel et al. (1996), and other relevant recent studies from several consulting companies. Special attention was given to account for the local seismicity and for the system of potentially active faults associated with the Amarillo-Wichita uplift. Aleatory (random) uncertainty was estimated from the available data and the epistemic (knowledge) uncertainty was taken from results of similar studies. Special attentionmore » was given to soil amplification factors for the site. Horizontal Peak Ground Acceleration (PGA) and 5% damped uniform hazard spectra were calculated for six return periods (100 yr., 500 yr., 1000 yr., 2000 yr., 10,000 yr., and 100,000 yr.). The design parameters were calculated following DOE standards (DOE-STD-1022 to 1024). Response spectra for design or evaluation of Performance Category 1 through 4 structures, systems, and components are presented.« less
40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process
Code of Federal Regulations, 2012 CFR
2012-07-01
... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...
40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process
Code of Federal Regulations, 2010 CFR
2010-07-01
... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...
40 CFR Appendix A to Subpart L of... - Criteria for Evaluating a State's Proposed NEPA-Like Process
Code of Federal Regulations, 2013 CFR
2013-07-01
... the SERP will adequately consider: (1) Designation of a study area comparable to the final system; (2... impacts; (4) Present and future conditions; (5) Land use and other social parameters including relevant... (residential, commercial, institutional, and industrial) within the project study area; and (8) Other...
40 CFR Appendix A to Subpart L of... - Criteria for Evaluating a State's Proposed NEPA-Like Process
Code of Federal Regulations, 2014 CFR
2014-07-01
... the SERP will adequately consider: (1) Designation of a study area comparable to the final system; (2... impacts; (4) Present and future conditions; (5) Land use and other social parameters including relevant... (residential, commercial, institutional, and industrial) within the project study area; and (8) Other...
40 CFR Appendix A to Subpart L - Criteria for Evaluating a State's Proposed NEPA-Like Process
Code of Federal Regulations, 2011 CFR
2011-07-01
... consider: (1) Designation of a study area comparable to the final system; (2) A range of feasible... conditions; (5) Land use and other social parameters including relevant recreation and open-space..., institutional, and industrial) within the project study area; and (8) Other anticipated public works projects...
A knowledge-based system with learning for computer communication network design
NASA Technical Reports Server (NTRS)
Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne
1990-01-01
Computer communication network design is well-known as complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.
Free Electron coherent sources: From microwave to X-rays
NASA Astrophysics Data System (ADS)
Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio
2018-04-01
The term Free Electron Laser (FEL) will be used, in this paper, to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, unbound at the atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum from microwaves to X-rays, and compares the analogies with conventional laser sources. We explore developing a point of view that allows a unified analytical treatment of these devices, by the introduction of appropriate global variables (e.g. gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way for the determination of the relevant design parameters. The paper looks also at more speculative aspects of FEL physics, which may address the relevance of quantum effects in the lasing process.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Ifanti, Konstantina
2012-12-01
Process simulation models are usually empirical, therefore there is an inherent difficulty in serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning to facilitate verification of the dependence on the production conditions; in such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanismic models may be proved valid, in which case parameter identification is required to find out the independent/explanatory variables and parameters, which each parameter depends on. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework under the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated by quoting two representative case examples on wastewater treatment.
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially the analog and mixed-signal ones, prove to be extremely time consuming tasks, if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast to evaluate, but also accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neuronal network to derive the relevant coefficients of the wavelet transform for any new parameters combination. The resulted metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameters combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the output waveform).
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially the analog and mixed-signal ones, prove to be extremely time consuming tasks, if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast to evaluate, but also accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neuronal network to derive the relevant coefficients of the wavelet transform for any new parameters combination. The resulted metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameters combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the output waveform). PMID:26745370
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero or one for ratios thereof for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straight-forward solutions, the problems that are the reason for the empirical considerations discussed here arise by the fact that the shift is considered in both directions and the whole parameter space in between these two limits has to be accepted as null hypothesis. The first algorithm to be discussed uses a permutation algorithm, and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
van Oostrom, Conny T.; Jonker, Martijs J.; de Jong, Mark; Dekker, Rob J.; Rauwerda, Han; Ensink, Wim A.; de Vries, Annemieke; Breit, Timo M.
2014-01-01
In transcriptomics research, design for experimentation by carefully considering biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints or outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability: in-vitro UV-C exposure of mouse embryonic fibroblasts and in-vivo UV-B exposure of mouse skin. Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental ‘sweet spot’ by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-) process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the ‘sweet spot’ for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. PMID:24823911
Pek, Han Bin; Klement, Maximilian; Ang, Kok Siong; Chung, Bevan Kai-Sheng; Ow, Dave Siak-Wei; Lee, Dong-Yup
2015-01-01
Various isoforms of invertases from prokaryotes, fungi, and higher plants has been expressed in Escherichia coli, and codon optimisation is a widely-adopted strategy for improvement of heterologous enzyme expression. Successful synthetic gene design for recombinant protein expression can be done by matching its translational elongation rate against heterologous host organisms via codon optimization. Amongst the various design parameters considered for the gene synthesis, codon context bias has been relatively overlooked compared to individual codon usage which is commonly adopted in most of codon optimization tools. In addition, matching the rates of transcription and translation based on secondary structure may lead to enhanced protein folding. In this study, we evaluated codon context fitness as design criterion for improving the expression of thermostable invertase from Thermotoga maritima in Escherichia coli and explored the relevance of secondary structure regions for folding and expression. We designed three coding sequences by using (1) a commercial vendor optimized gene algorithm, (2) codon context for the whole gene, and (3) codon context based on the secondary structure regions. Then, the codon optimized sequences were transformed and expressed in E. coli. From the resultant enzyme activities and protein yield data, codon context fitness proved to have the highest activity as compared to the wild-type control and other criteria while secondary structure-based strategy is comparable to the control. Codon context bias was shown to be a relevant parameter for enhancing enzyme production in Escherichia coli by codon optimization. Thus, we can effectively design synthetic genes within heterologous host organisms using this criterion. Copyright © 2015 Elsevier Inc. All rights reserved.
Comparison of Different Mo/Au TES Designs for Radiation Detectors
NASA Astrophysics Data System (ADS)
Pobes, Carlos; Fàbrega, Lourdes; Camón, Agustín; Strichovanec, Pavel; Moral-Vico, Javier; Casañ-Pastor, Nieves; Jáudenes, Rosa M.; Sesé, Javier
2018-05-01
We report on the fabrication and characterization of Mo/Au-based transition-edge sensors (TES), intended to be used in X-ray detectors. We have performed complete dark characterization using I-V curves, complex impedance and noise measurements at different bath temperatures and biases. Devices with two designs, different sizes and different membranes have been characterized, some of them with a central bismuth absorber. This has allowed extraction of the relevant parameters of the TES, analyses of their standard behavior and evaluation of their prospects.
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
NASA Astrophysics Data System (ADS)
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels on the design and safety parameters for the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified. Besides, the nuclear reaction data where an improvement will certainly benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 codes suite and its multigroups cross sections library based on ENDF/B-VII.0 evaluation.
NASA Astrophysics Data System (ADS)
Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu
2018-02-01
Due to complex kinetics of formation and loss mechanisms, such as ion-ion recombination reaction, neutral species harpoon reaction, excited state quenching and photon absorption, as well as their interactions, the performance behavior of different laser gas medium parameters for excimer laser varies greatly. Therefore, the effects of gas composition and total gas pressure on excimer laser performance attract continual research studies. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser with plano-plano optical resonator operation. Optimized output laser energy with good pulse to pulse stability can be obtained effectively by proper selection of the gas medium parameters, which makes the most of the ArF excimer laser device. Simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, which provides a global and systematic solution. By detailed statistical analysis, the significance sequence of relevant parameter factors and the optimized composition for gas medium parameters are obtained. Compared with conventional route of varying single gas parameter factor sequentially, this paper presents a more comprehensive way of considering multivariables simultaneously, which seems promising in striking an appropriate balance among various complicated parameters for power scaling study of an excimer laser.
An automated design process for short pulse laser driven opacity experiments
Martin, M. E.; London, R. A.; Goluoglu, S.; ...
2017-12-21
Stellar-relevant conditions can be reached by heating a buried layer target with a short pulse laser. Previous design studies of iron buried layer targets found that plasma conditions are dominantly controlled by the laser energy while the accuracy of the inferred opacity is limited by tamper emission and optical depth effects. In this paper, we developed a process to simultaneously optimize laser and target parameters to meet a variety of design goals. We explored two sets of design cases: a set focused on conditions relevant to the upper radiative zone of the sun (electron temperatures of 200 to 400 eVmore » and densities greater than 1/10 of solid density) and a set focused on reaching temperatures consistent with deep within the radiative zone of the sun (500 to 1000 eV) at a fixed density. We found optimized designs for iron targets and determined that the appropriate dopant, for inferring plasma conditions, depends on the goal temperature: magnesium for up to 300 eV, aluminum for 300 to 500 eV, and sulfur for 500 to 1000 eV. The optimal laser energy and buried layer thickness increase with goal temperature. The accuracy of the inferred opacity is limited to between 11% and 31%, depending on the design. Finally, overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.« less
An automated design process for short pulse laser driven opacity experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, M. E.; London, R. A.; Goluoglu, S.
Stellar-relevant conditions can be reached by heating a buried layer target with a short pulse laser. Previous design studies of iron buried layer targets found that plasma conditions are dominantly controlled by the laser energy while the accuracy of the inferred opacity is limited by tamper emission and optical depth effects. In this paper, we developed a process to simultaneously optimize laser and target parameters to meet a variety of design goals. We explored two sets of design cases: a set focused on conditions relevant to the upper radiative zone of the sun (electron temperatures of 200 to 400 eVmore » and densities greater than 1/10 of solid density) and a set focused on reaching temperatures consistent with deep within the radiative zone of the sun (500 to 1000 eV) at a fixed density. We found optimized designs for iron targets and determined that the appropriate dopant, for inferring plasma conditions, depends on the goal temperature: magnesium for up to 300 eV, aluminum for 300 to 500 eV, and sulfur for 500 to 1000 eV. The optimal laser energy and buried layer thickness increase with goal temperature. The accuracy of the inferred opacity is limited to between 11% and 31%, depending on the design. Finally, overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyder, L.K.; Fore, C.S.; Vaughan, N.D.
This annotated bibliography of 705 references represents the first in a series to be published by the Ecological Sciences Information Center containing scientific, technical, economic, and regulatory information relevant to nuclear waste isolation. Most references discuss deep geologic disposal, with fewer studies of deep seabed disposal; space disposal is also included. The publication covers both domestic and foreign literature for the period 1954 to 1980. Major chapters selected are Chemical and Physical Aspects; Container Design and Performance; Disposal Site; Envirnmental Transport; General Studies and Reviews; Geology, Hydrology and Site Resources; Regulatory and Economic Aspects; Repository Design and Engineering; Transportation Technology;more » Waste Production; and Waste Treatment. Specialized data fields have been incorporated to improve the ease and accuracy of locating pertinent references. Specific radionuclides for which data are presented are listed in the Measured Radionuclides field, and specific parameters which affect the migration of these radionuclides are presented in the Measured Parameters field. The references within each chapter are arranged alphabetically by leading author, corporate affiliation, or title of the document. When the author is not given, the corporate affiliation appears first. If these two levels of authorship are not given, the title of the document is used as the identifying level. Indexes are provided for author(s), keywords, subject category, title, geographic location, measured parameters, measured radionuclides, and publication description.« less
Optimal experimental design for parameter estimation of a cell signaling model.
Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias
2009-11-01
Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
NASA Technical Reports Server (NTRS)
Korram, S.
1977-01-01
The design of general remote sensing-aided methodologies was studied to provide the estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevent and available information types needed in the estimation process are being defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantification of all the required parameters. The physical and statistical models for both snow quantification and evapotranspiration estimation was developed. These models use the information obtained by aerial and ground data through appropriate statistical sampling design.
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject tomore » random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contract-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 compared to MSD with eCNR of 0.10 resulted in statistically better segmentation with mean DSC of about 0.85 and the first and third quartiles of (0.83, 0.89), compared to MSD with mean DSC of 0.84 and the first and third quartiles of (0.81, 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.« less
Variation of semen parameters in healthy medical students due to exam stress.
Lampiao, Fanuel
2009-12-01
This study was aimed at investigating semen parameters that vary most in samples of healthy donors undergoing stressful examination period. Samples were left to liquefy in an incubator at 37 degrees C, 5% CO2 for 30 minutes before volume was measured. Concentration and motility parameters were measured by means of computer assisted semen analysis (CASA) using Sperm Class Analyzer (Microptic S.L, Madrid, Spain). Sperm concentration was significantly decreased in samples donated close to the exam period as well as samples donated during the exam period when compared to samples donated at the beginning of the semester. Stress levels of donors might prove to be clinically relevant and important when designing experiment protocols.
Present understanding of MHD and heat transfer phenomena for liquid metal blankets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirillov, I.R.; Barleon, L.; Reed, C.B.
1994-12-31
Liquid metals (Li, Li17Pb83, Pb) are considered as coolants in many designs of fusion reactor blankets. To estimate their potential and to make an optimal design, one has to know the magnetohydrodynamic (MHD) and heat transfer characteristics of liquid metal flow in the magnetic field. Such flows with high characteristic parameter values (Hartmann number M and interaction parameter N) open up a relatively new field in Magnetohydrodynamics requiring both theoretical and experimental efforts. A review of experimental work done for the last ten years in different countries shows that there are some data on MHD/HT characteristics in straight channels ofmore » simple geometry under fusion reactor relevant conditions (M>>1, N>>1) and not enough data for complex flow geometries. Future efforts should be directed to investigation of MHD/HT in straight channels with perfect and imperfect electroinsulated walls, including those with controlled imperfections, and in channels of complex geometry. The experiments are not simple, since the fusion relevant conditions require facilities with magnetic fields at, or even higher than, 5-7 T in comparatively large volumes. International cooperation in constructing and operating these facilities may be of great help.« less
Shearer, Gene; Clerici, Mario
2010-11-01
Multiple and frequent exposure to the human immunodeficiency virus (HIV) does not necessarily result in HIV infection. Approximately 15% of HIV exposed seronegative individuals repeatedly resist infection, a phenomenon that has been observed in all investigated HIV‐exposed cohorts. This brief report provides a limited historic perspective of the discovery of these cohorts and outlines some of the immunologic and genetic parameters that are associated with resistance. We raise the possibility that assessing immunologic parameters of the phenomenon might provide insights that might be relevant for effective AIDS vaccine design.
Calibrating the coordination chemistry tool chest: metrics of bi- and tridentate ligands.
Aguilà, David; Escribano, Esther; Speed, Saskia; Talancón, Daniel; Yermán, Luis; Alvarez, Santiago
2009-09-07
Bi- and multidentate ligands form part of the tools commonly used for designing coordination and supramolecular complexes with desired stereochemistries. Parameters and concepts usually employed include the normalized bite of bidentate ligands, their cis- or trans-coordinating ability, their rigidity or flexibility, or the duality of some ligands that can act in chelating or dinucleating modes. In this contribution we present a structural database study of over one hundred bi- and tridentate ligands that allows us to parametrize their coordinating properties and discuss the relevance of such parameters for the choice of coordination polyhedron or coordination sites.
Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils
NASA Technical Reports Server (NTRS)
Lee, Byung Joon; Liou, Meng-Sing
2012-01-01
This paper describes the work for optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust under constraining aerodynamic work during the flapping flights by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolving time scales of motion under consideration and it must be compatible with the objective sought after. We include both the instantaneous and time-averaged (periodic) formulations in this study. For the design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.
Measuring the iron spectral opacity in solar conditions using a double ablation front scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colaitis, A.; Ducret, J. E.; Turck-Chieze, S
We propose a new method to achieve hydrodynamic conditions relevant for the investigation of the radiation transport properties of the plasma at the base of the solar convection zone. The method is designed in the framework of opacity measurements with high-power lasers and exploits the temporal and spatial stability of hydrodynamic parameters in counter-propagating Double Ablation Front (DAF) structures.
Optimum dimensions of power solenoids for magnetic suspension
NASA Technical Reports Server (NTRS)
Kaznacheyev, B. A.
1985-01-01
Design optimization of power solenoids for controllable and stabilizable magnetic suspensions with force compensation in a wind tunnel is shown. It is assumed that the model of a levitating body is a sphere of ferromagnetic material with constant magnetic permeability. This sphere, with a radius much smaller than its distance from the solenoid above, is to be maintained in position on the solenoid axis by balance of the vertical electromagnetic force and the force of gravitation. The necessary vertical (axial) force generated by the solenoid is expressed as a function of relevant system dimensions, solenoid design parameters, and physical properties of the body. Three families of curves are obtained which depict the solenoid power for a given force as a function of the solenoid length with either outside radius or inside radius as a variable parameter and as a function of the outside radius with inside radius as a variable parameter. The curves indicate the optimum solenoid length and outside radius, for minimum power, corresponding to a given outside radius and inside radius, respectively.
The Design of PSB-VVER Experiments Relevant to Accident Management
NASA Astrophysics Data System (ADS)
Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander
Experimental programs carried-out in integral test facilities are relevant for validating the best estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well designed experiments. It consists in the comparison of the measured and calculated parameters and the determination whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transient and/or accidents. University of Pisa was responsible of the numerical design of the 12 experiments executed in PSB-VVER facility (2), operated at Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at University of Pisa, starting form the scenarios foreseen in the final test matrix until the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of the thermal losses to the environment. This issue affects particularly the small scaled facilities and has relevance on the scaling approach related to the power and volume of the facility.
Application of TRIZ Methodology in Diffusion Welding System Optimization
NASA Astrophysics Data System (ADS)
Ravinder Reddy, N.; Satyanarayana, V. V.; Prashanthi, M.; Suguna, N.
2017-12-01
Welding is tremendously used in metal joining processes in the manufacturing process. In recent years, diffusion welding method has significantly increased the quality of a weld. Nevertheless, diffusion welding has some extent short research and application progress. Therefore, diffusion welding has a lack of relevant information, concerned with the joining of thick and thin materials with or without interlayers, on welding design such as fixture, parameters selection and integrated design. This article intends to combine innovative methods in the application of diffusion welding design. This will help to decrease trial and error or failure risks in the welding process being guided by the theory of inventive problem solving (TRIZ) design method. This article hopes to provide welding design personnel with innovative design ideas under research and for practical application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlenwein, P.; Frisch, W.; Kafka, P.
Nuclear reactors of 200- to 400-MW(thermal) power for district heating are the subject of increasing interest, and several specific designs are under discussion today. In the Federal Republic of Germany (FRG), the Kraftwerk Union AG has presented a 200-MW(thermal) heating reactor concept. The main safety issues of this design are assessed. In this design, the primary system is fully integrated into the reactor pressure vessel (RPV), which is tightly enclosed by the containment. The low process parameters like pressure, temperature, and power density and the high ratio of coolant volume to thermal power allow the design of simple safety features.more » This is supported by the preference of passive over active components. A special feature is a newly designed hydraulic control and rod drive mechanism, which is also integrated into the RPV. Within the safety assessment an overview of the relevant FRG safety rules and guidelines, developed mainly for large, electricity-generating power plants, is given. Included is a discussion of the extent to which these licensing rules can be applied to the concept of heating reactors.« less
Medicinal Chemical Properties of Successful Central Nervous System Drugs
Pajouhesh, Hassan; Lenz, George R.
2005-01-01
Summary: Fundamental physiochemical features of CNS drugs are related to their ability to penetrate the blood-brain barrier affinity and exhibit CNS activity. Factors relevant to the success of CNS drugs are reviewed. CNS drugs show values of molecular weight, lipophilicity, and hydrogen bond donor and acceptor that in general have a smaller range than general therapeutics. Pharmacokinetic properties can be manipulated by the medicinal chemist to a significant extent. The solubility, permeability, metabolic stability, protein binding, and human ether-ago-go-related gene inhibition of CNS compounds need to be optimized simultaneously with potency, selectivity, and other biological parameters. The balance between optimizing the physiochemical and pharmacokinetic properties to make the best compromises in properties is critical for designing new drugs likely to penetrate the blood brain barrier and affect relevant biological systems. This review is intended as a guide to designing CNS therapeutic agents with better drug-like properties. PMID:16489364
Predictive design and interpretation of colliding pulse injected laser wakefield experiments
NASA Astrophysics Data System (ADS)
Cormier-Michel, Estelle; Ranjbar, Vahid H.; Cowan, Ben M.; Bruhwiler, David L.; Geddes, Cameron G. R.; Chen, Min; Ribera, Benjamin; Esarey, Eric; Schroeder, Carl B.; Leemans, Wim P.
2010-11-01
The use of colliding laser pulses to control the injection of plasma electrons into the plasma wake of a laser plasma accelerator is a promising approach to obtaining stable, tunable electron bunches with reduced emittance and energy spread. Colliding Pulse Injection (CPI) experiments are being performed by groups around the world. We will present recent particle-in-cell simulations, using the parallel VORPAL framework, of CPI for physical parameters relevant to ongoing experiments of the LOASIS program at LBNL. We evaluate the effect of laser and plasma tuning, on the trapped electron bunch and perform parameter scans in order to optimize the quality of the bunch. Impact of non-ideal effects such as imperfect laser modes and laser self focusing are also evaluated. Simulation data are validated against current experimental results, and are used to design future experiments.
Set membership experimental design for biological systems.
Marvel, Skylar W; Williams, Cranos M
2012-03-21
Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
Set membership experimental design for biological systems
2012-01-01
Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240
Villas-Boas, Mariana D; Olivera, Francisco; de Azevedo, Jose Paulo S
2017-09-01
Water quality monitoring is a complex issue that requires support tools in order to provide information for water resource management. Budget constraints as well as an inadequate water quality network design call for the development of evaluation tools to provide efficient water quality monitoring. For this purpose, a nonlinear principal component analysis (NLPCA) based on an autoassociative neural network was performed to assess the redundancy of the parameters and monitoring locations of the water quality network in the Piabanha River watershed. Oftentimes, a small number of variables contain the most relevant information, while the others add little or no interpretation to the variability of water quality. Principal component analysis (PCA) is widely used for this purpose. However, conventional PCA is not able to capture the nonlinearities of water quality data, while neural networks can represent those nonlinear relationships. The results presented in this work demonstrate that NLPCA performs better than PCA in the reconstruction of the water quality data of Piabanha watershed, explaining most of data variance. From the results of NLPCA, the most relevant water quality parameter is fecal coliforms (FCs) and the least relevant is chemical oxygen demand (COD). Regarding the monitoring locations, the most relevant is Poço Tarzan (PT) and the least is Parque Petrópolis (PP).
Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur
2016-12-15
International web-based survey. To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon reported outcome measure to reflect the clinicians' perspective adequately. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to indicate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant both for short term and long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter for the long term, whereas five parameters were identified for the short term. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses according to each world region, or spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored on the most relevant parameters to evaluate and predict outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon reported outcome measure, which will be a helpful tool in research and clinical practice. 4.
Band gaps in grid structure with periodic local resonator subsystems
NASA Astrophysics Data System (ADS)
Zhou, Xiaoqin; Wang, Jun; Wang, Rongqi; Lin, Jieqiong
2017-09-01
The grid structure is widely used in architectural and mechanical field for its high strength and saving material. This paper will present a study on an acoustic metamaterial beam (AMB) based on the normal square grid structure with local resonators owning both flexible band gaps and high static stiffness, which have high application potential in vibration control. Firstly, the AMB with variable cross-section frame is analytically modeled by the beam-spring-mass model that is provided by using the extended Hamilton’s principle and Bloch’s theorem. The above model is used for computing the dispersion relation of the designed AMB in terms of the design parameters, and the influences of relevant parameters on band gaps are discussed. Then a two-dimensional finite element model of the AMB is built and analyzed in COMSOL Multiphysics, both the dispersion properties of unit cell and the wave attenuation in a finite AMB have fine agreement with the derived model. The effects of design parameters of the two-dimensional model in band gaps are further examined, and the obtained results can well verify the analytical model. Finally, the wave attenuation performances in three-dimensional AMBs with equal and unequal thickness are presented and discussed.
Evaluation of an Integrated Read-Out Layer Prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Ajamieh, Fayez
2011-07-01
This thesis presents evaluation results of an Integrated Read-out Layer (IRL), a proposed concept in scintillator-based calorimetry intended to meet the exceptional calorimetric requirements of the envisaged International Linear Collider (ILC). This study presents a full characterization of the prototype IRL, including exploration of relevant parameters, calibration performance, and the uniformity of response. The study represents proof of the IRL concept. Finally, proposed design enhancements are presented.
Evaluation of an Integrated Read-Out Layer Prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Ajamieh, Fayez; /NIU
2011-08-18
This thesis presents evaluation results of an Integrated Read-out Layer (IRL), a proposed concept in scintillator-based calorimetry intended to meet the exceptional calorimetric requirements of the envisaged International Linear Collider (ILC). This study presents a full characterization of the prototype IRL, including exploration of relevant parameters, calibration performance, and the uniformity of response. The study represents proof of the IRL concept. Finally, proposed design enhancements are presented.
How can we tackle energy efficiency in IoT based smart buildings?
Moreno, M Victoria; Úbeda, Benito; Skarmeta, Antonio F; Zamora, Miguel A
2014-05-30
Nowadays, buildings are increasingly expected to meet higher and more complex performance requirements. Among these requirements, energy efficiency is recognized as an international goal to promote energy sustainability of the planet. Different approaches have been adopted to address this goal, the most recent relating consumption patterns with human occupancy. In this work, we analyze what are the main parameters that should be considered to be included in any building energy management. The goal of this analysis is to help designers to select the most relevant parameters to control the energy consumption of buildings according to their context, selecting them as input data of the management system. Following this approach, we select three reference smart buildings with different contexts, and where our automation platform for energy monitoring is deployed. We carry out some experiments in these buildings to demonstrate the influence of the parameters identified as relevant in the energy consumption of the buildings. Then, in two of these buildings are applied different control strategies to save electrical energy. We describe the experiments performed and analyze the results. The first stages of this evaluation have already resulted in energy savings of about 23% in a real scenario.
A study on the role of powertrain system dynamics on vehicle driveability
NASA Astrophysics Data System (ADS)
Castellazzi, Luca; Tonoli, Andrea; Amati, Nicola; Galliera, Enrico
2017-07-01
Vehicle driveability describes the complex interactions between the driver and the vehicle, mainly related to longitudinal vibrations. Today, a relevant part of the driveability process optimisation is realised by means of track tests, which require a considerable effort due to the number of parameters (such as stiffness and damping components) affecting this behaviour. The drawback of this approach is that it is carried on at a stage when a design iteration becomes very expensive in terms of time and cost. The objective of this work is to propose a light and accurate tool to represent the relevant quantities involved in the driveability analysis, and to understand which are the main vehicle parameters that influence the torsional vibrations transmitted to the driver. Particular attention is devoted to the role of the tyre, the engine mount, the dual mass flywheel and their possible interactions. The presented nonlinear dynamic model has been validated in time and frequency domain and, through linearisation of its nonlinear components, allows to exploit modal and energy analysis. Objective indexes regarding the driving comfort are additionally considered in order to evaluate possible driveability improvements related to the sensitivity of powertrain parameters.
How can We Tackle Energy Efficiency in IoT Based Smart Buildings?
Moreno, M. Victoria; Úbeda, Benito; Skarmeta, Antonio F.; Zamora, Miguel A.
2014-01-01
Nowadays, buildings are increasingly expected to meet higher and more complex performance requirements. Among these requirements, energy efficiency is recognized as an international goal to promote energy sustainability of the planet. Different approaches have been adopted to address this goal, the most recent relating consumption patterns with human occupancy. In this work, we analyze what are the main parameters that should be considered to be included in any building energy management. The goal of this analysis is to help designers to select the most relevant parameters to control the energy consumption of buildings according to their context, selecting them as input data of the management system. Following this approach, we select three reference smart buildings with different contexts, and where our automation platform for energy monitoring is deployed. We carry out some experiments in these buildings to demonstrate the influence of the parameters identified as relevant in the energy consumption of the buildings. Then, in two of these buildings are applied different control strategies to save electrical energy. We describe the experiments performed and analyze the results. The first stages of this evaluation have already resulted in energy savings of about 23% in a real scenario. PMID:24887040
Wavelets for sign language translation
NASA Astrophysics Data System (ADS)
Wilson, Beth J.; Anspach, Gretel
1993-10-01
Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.
Task-oriented display design - Concept and example
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
1989-01-01
The general topic was in the area of display design alternatives for improved man-machine performance. The intent was to define and assess a display design concept oriented toward providing this task-oriented information. The major focus of this concept deals with the processing of data into parameters that are more relevant to the task of the human operator. Closely coupled to this concept of relevant information is the form or manner in which this information is actually presented. Conventional forms of presentation are normally a direct representation of the underlying data. By providing information in a form that is more easily assimilated and understood, a reduction in human error and cognitive workload may be obtained. A description of this proposed concept with a design example is provided. The application for the example was an engine display for a generic, twin-engine civil transport aircraft. The product of this concept was evaluated against a functionally similar, traditional display. The results of this evaluation showed that a task-oriented approach to design is a viable concept with regard to reducing user error and cognitive workload. The goal of this design process, providing task-oriented information to the user, both in content and form, appears to be a feasible mechanism for increasing the overall performance of a man-machine system.
Task-oriented display design: Concept and example
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
1989-01-01
The general topic was in the area of display design alternatives for improved man-machine performance. The intent was to define and assess a display design concept oriented toward providing this task-oriented information. The major focus of this concept deals with the processing of data into parameters that are more relevant to the task of the human operator. Closely coupled to this concept of relevant information is the form or manner in which this information is actually presented. Conventional forms of presentation are normally a direct representation of the underlying data. By providing information in a form that is more easily assimilated and understood, a reduction in human error and cognitive workload may be obtained. A description of this proposed concept with a design example is provided. The application for the example was an engine display for a generic, twin-engine civil transport aircraft. The product of this concept was evaluated against a functionally similar, traditional display. The results of this evaluation showed that a task-oriented approach to design is a viable concept with regard to reducing user error and cognitive workload. The goal of this design process, providing task-oriented information to the user, both in content and form, appears to be a feasible mechanism for increasing the overall performance of a man-machine system.
Landau-type expansion for the energy landscape of the designed heteropolymer
NASA Astrophysics Data System (ADS)
Grosberg, Alexander; Pande, Vijay; Tanaka, Toyoichi
1997-03-01
The concept of evolutional optimization of heteropolymer sequences is used to construct the phenomenological theory describing folding/unfoolding kinetics of the polymers with designed sequences. The relevant energy landscape is described in terms of Landau expansion over the powers of the overlap parameter of the current and the native conformations. It is shown that only linear term is sequence (mutation) dependent, the rest being determined by the underlying conformational geometry. The theory os free of the assumptions of the uncorrelated energy landscape type. We demonstrate the power of the theory by comparing data to the simulations and experiments.
NASA Astrophysics Data System (ADS)
Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.
2015-07-01
The climate models are extremely complex pieces of software. They reflect best knowledge on physical components of the climate, nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a crash of simulation. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to crash of simulation, and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the dataset used in this research using different methodology. We confirm the main conclusion of the original study concerning suitability of machine learning for prediction of crashes. We show, that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three other are relevant but redundant, and two are not relevant at all. We also show that the variance due to split of data between training and validation sets has large influence both on accuracy of predictions and relative importance of variables, hence only cross-validated approach can deliver robust prediction of performance and relevance of variables.
Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon
2014-01-01
Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. To gather relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search rool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology based relevancy ranking algorithm to filter out non relevant information and data.
Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan
2017-12-01
This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process, formulation and methodology related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and with a high impact on the product performance. To investigate the effect of qualitative and quantitative formulation variables D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors, and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties selected as CQAs. The optimum formulation selected through Design Space presented ultra-fast disintegration time (5 seconds), a good dissolution rate (above 90%) combined with a high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science based development of a product with respect to the defined quality target profile by providing better insights on the relevant parameters throughout development process. The utility of risk management tools in pharmaceutical development was demonstrated.
Driving Performance Under Alcohol in Simulated Representative Driving Tasks
Kenntner-Mabiala, Ramona; Kaussner, Yvonne; Jagiellowicz-Kaufmann, Monika; Hoffmann, Sonja; Krüger, Hans-Peter
2015-01-01
Abstract Comparing drug-induced driving impairments with the effects of benchmark blood alcohol concentrations (BACs) is an approved approach to determine the clinical relevance of findings for traffic safety. The present study aimed to collect alcohol calibration data to validate findings of clinical trials that were derived from a representative test course in a dynamic driving simulator. The driving performance of 24 healthy volunteers under placebo and with 0.05% and 0.08% BACs was measured in a double-blind, randomized, crossover design. Trained investigators assessed the subjects’ driving performance and registered their driving errors. Various driving parameters that were recorded during the simulation were also analyzed. Generally, the participants performed worse on the test course (P < 0.05 for the investigators’ assessment) under the influence of alcohol. Consistent with the relevant literature, lane-keeping performance parameters were sensitive to the investigated BACs. There were significant differences between the alcohol and placebo conditions in most of the parameters analyzed. However, the total number of errors was the only parameter discriminating significantly between all three BAC conditions. In conclusion, data show that the present experimental setup is suitable for future psychopharmacological research. Thereby, for each drug to be investigated, we recommend to assess a profile of various parameters that address different levels of driving. On the basis of this performance profile, the total number of driving errors is recommended as the primary endpoint. However, this overall endpoint should be completed by a specifically sensitive parameter that is chosen depending on the effect known to be induced by the tested drug. PMID:25689289
NASA Astrophysics Data System (ADS)
Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.
2017-01-01
Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.
Modelling and identification for control of gas bearings
NASA Astrophysics Data System (ADS)
Theisen, Lukas R. S.; Niemann, Hans H.; Santos, Ilmar F.; Galeazzi, Roberto; Blanke, Mogens
2016-03-01
Gas bearings are popular for their high speed capabilities, low friction and clean operation, but suffer from poor damping, which poses challenges for safe operation in presence of disturbances. Feedback control can achieve enhanced damping but requires low complexity models of the dominant dynamics over its entire operating range. Models from first principles are complex and sensitive to parameter uncertainty. This paper presents an experimental technique for "in situ" identification of a low complexity model of a rotor-bearing-actuator system and demonstrates identification over relevant ranges of rotational speed and gas injection pressure. This is obtained using parameter-varying linear models that are found to capture the dominant dynamics. The approach is shown to be easily applied and to suit subsequent control design. Based on the identified models, decentralised proportional control is designed and shown to obtain the required damping in theory and in a laboratory test rig.
Toward rational design of electrical stimulation strategies for epilepsy control
Sunderam, Sridhar; Gluckman, Bruce; Reato, Davide; Bikson, Marom
2009-01-01
Electrical stimulation is emerging as a viable alternative for epilepsy patients whose seizures are not alleviated by drugs or surgery. Its attractions are temporal and spatial specificity of action, flexibility of waveform parameters and timing, and the perception that its effects are reversible unlike resective surgery. However, despite significant advances in our understanding of mechanisms of neural electrical stimulation, clinical electrotherapy for seizures relies heavily on empirical tuning of parameters and protocols. We highlight concurrent treatment goals with potentially conflicting design constraints that must be resolved when formulating rational strategies for epilepsy electrotherapy: namely seizure reduction versus cognitive impairment, stimulation efficacy versus tissue safety, and mechanistic insight versus clinical pragmatism. First, treatment markers, objectives, and metrics relevant to electrical stimulation for epilepsy are discussed from a clinical perspective. Then the experimental perspective is presented, with the biophysical mechanisms and modalities of open-loop electrical stimulation, and the potential benefits of closed-loop control for epilepsy. PMID:19926525
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, G.B.; Kiraly, R.J.; Nose, Y.
The objective of the study is to define the human thorax in a quantitative statistical manner such that the information will be useful to the designers of cardiac prostheses, both total replacement and assist devices. This report pertains specifically to anatomical parameters relevant to the total cardiac prosthesis. This information will also be clinically useful in that the proposed recipient of a cardiac prosthesis can by simple radiography be assured of an adequate fit with the prosthesis prior to the implantation.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.
Clustervision: Visual Supervision of Unsupervised Clustering.
Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam
2018-01-01
Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exist a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large amount of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
A two-parameter design storm for Mediterranean convective rainfall
NASA Astrophysics Data System (ADS)
García-Bartual, Rafael; Andrés-Doménech, Ignacio
2017-05-01
The following research explores the feasibility of building effective design storms for extreme hydrological regimes, such as the one which characterizes the rainfall regime of the east and south-east of the Iberian Peninsula, without employing intensity-duration-frequency (IDF) curves as a starting point. Nowadays, after decades of functioning hydrological automatic networks, there is an abundance of high-resolution rainfall data with a reasonable statistic representation, which enable the direct research of temporal patterns and inner structures of rainfall events at a given geographic location, with the aim of establishing a statistical synthesis directly based on those observed patterns. The authors propose a temporal design storm defined in analytical terms, through a two-parameter gamma-type function. The two parameters are directly estimated from 73 independent storms identified from rainfall records of high temporal resolution in Valencia (Spain). All the relevant analytical properties derived from that function are developed in order to use this storm in real applications. In particular, in order to assign a probability to the design storm (return period), an auxiliary variable combining maximum intensity and total cumulated rainfall is introduced. As a result, for a given return period, a set of three storms with different duration, depth and peak intensity are defined. The consistency of the results is verified by means of comparison with the classic method of alternating blocks based on an IDF curve, for the above mentioned study case.
NASA Astrophysics Data System (ADS)
Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.
2016-03-01
Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
Use of Semantic Technology to Create Curated Data Albums
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin; Fox, Peter (Editor); Norack, Tom (Editor)
2014-01-01
One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discovery tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out non-relevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.
Designing electronic properties of two-dimensional crystals through optimization of deformations
NASA Astrophysics Data System (ADS)
Jones, Gareth W.; Pereira, Vitor M.
2014-09-01
One of the enticing features common to most of the two-dimensional (2D) electronic systems that, in the wake of (and in parallel with) graphene, are currently at the forefront of materials science research is the ability to easily introduce a combination of planar deformations and bending in the system. Since the electronic properties are ultimately determined by the details of atomic orbital overlap, such mechanical manipulations translate into modified (or, at least, perturbed) electronic properties. Here, we present a general-purpose optimization framework for tailoring physical properties of 2D electronic systems by manipulating the state of local strain, allowing a one-step route from their design to experimental implementation. A definite example, chosen for its relevance in light of current experiments in graphene nanostructures, is the optimization of the experimental parameters that generate a prescribed spatial profile of pseudomagnetic fields (PMFs) in graphene. But the method is general enough to accommodate a multitude of possible experimental parameters and conditions whereby deformations can be imparted to the graphene lattice, and complies, by design, with graphene's elastic equilibrium and elastic compatibility constraints. As a result, it efficiently answers the inverse problem of determining the optimal values of a set of external or control parameters (such as substrate topography, sample shape, load distribution, etc) that result in a graphene deformation whose associated PMF profile best matches a prescribed target. The ability to address this inverse problem in an expedited way is one key step for practical implementations of the concept of 2D systems with electronic properties strain-engineered to order. The general-purpose nature of this calculation strategy means that it can be easily applied to the optimization of other relevant physical quantities which directly depend on the local strain field, not just in graphene but in other 2D electronic membranes.
Controlling molecular transport in minimal emulsions
NASA Astrophysics Data System (ADS)
Gruner, Philipp; Riechers, Birte; Semin, Benoît; Lim, Jiseok; Johnston, Abigail; Short, Kathleen; Baret, Jean-Christophe
2016-01-01
Emulsions are metastable dispersions in which molecular transport is a major mechanism driving the system towards its state of minimal energy. Determining the underlying mechanisms of molecular transport between droplets is challenging due to the complexity of a typical emulsion system. Here we introduce the concept of `minimal emulsions', which are controlled emulsions produced using microfluidic tools, simplifying an emulsion down to its minimal set of relevant parameters. We use these minimal emulsions to unravel the fundamentals of transport of small organic molecules in water-in-fluorinated-oil emulsions, a system of great interest for biotechnological applications. Our results are of practical relevance to guarantee a sustainable compartmentalization of compounds in droplet microreactors and to design new strategies for the dynamic control of droplet compositions.
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fails. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit test, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.
Designing a deep brain stimulator to suppress pathological neuronal synchrony.
Montaseri, Ghazal; Yazdanpanah, Mohammad Javad; Bahrami, Fariba
2015-03-01
Some of neuropathologies are believed to be related to abnormal synchronization of neurons. In the line of therapy, designing effective deep brain stimulators to suppress the pathological synchrony among neuronal ensembles is a challenge of high clinical relevance. The stimulation should be able to disrupt the synchrony in the presence of latencies due to imperfect knowledge about parameters of a neuronal ensemble and stimulation impacts on the ensemble. We propose an adaptive desynchronizing deep brain stimulator capable of dealing with these uncertainties. We analyze the collective behavior of the stimulated neuronal ensemble and show that, using the designed stimulator, the resulting asynchronous state is stable. Simulation results reveal the efficiency of the proposed technique. Copyright © 2014 Elsevier Ltd. All rights reserved.
Eldredge, Jonathan D
2003-06-01
to describe the essential components of the Randomised Controlled Trial (RCT) and its major variations; to describe less conventional applications of the RCT design found in the health sciences literature with potential relevance to health sciences librarianship; to discuss the limited number of RCTs within health sciences librarianship. narrative review supported to a limited extent with PubMed and Library Literature database searches consistent with specific search parameters. In addition, more systematic methods, including handsearching of specific journals, to identify health sciences librarianship RCTs. While many RCTs within the health sciences follow more conventional patterns, some RCTs assume certain unique features. Selected examples illustrate the adaptations of this experimental design to answering questions of possible relevance to health sciences librarians. The author offers several strategies for controlling bias in library and informatics applications of the RCT and acknowledges the potential of the electronic era in providing many opportunities to utilize the blinding aspects of RCTs. RCTs within health sciences librarianship inhabit a limited number of subject domains such as education. This limited scope offers both advantages and disadvantages for making Evidence-Based Librarianship (EBL) a reality. The RCT design offers the potential to answer far more EBL questions than have been addressed by the design to date. Librarians need only extend their horizons through use of the versatile RCT design into new subject domains to facilitate making EBL a reality.
NASA Technical Reports Server (NTRS)
Zapata, R. N.; Humphris, R. R.; Henderson, K. C.
1975-01-01
The basic research and development work towards proving the feasibility of operating an all-superconductor magnetic suspension and balance device for aerodynamic testing is presented. The feasibility of applying a quasi-six-degree-of freedom free support technique to dynamic stability research was studied along with the design concepts and parameters for applying magnetic suspension techniques to large-scale aerodynamic facilities. A prototype aerodynamic test facility was implemented. Relevant aspects of the development of the prototype facility are described in three sections: (1) design characteristics; (2) operational characteristics; and (3) scaling to larger facilities.
Manufacturing of Wearable Sensors for Human Health and Performance Monitoring
NASA Astrophysics Data System (ADS)
Alizadeh, Azar
2015-03-01
Continuous monitoring of physiological and biological parameters is expected to improve performance and medical outcomes by assessing overall health status and alerting for life-saving interventions. Continuous monitoring of these parameters requires wearable devices with an appropriate form factor (lightweight, comfortable, low energy consuming and even single-use) to avoid disrupting daily activities thus ensuring operation relevance and user acceptance. Many previous efforts to implement remote and wearable sensors have suffered from high cost and poor performance, as well as low clinical and end-use acceptance. New manufacturing and system level design approaches are needed to make the performance and clinical benefits of these sensors possible while satisfying challenging economic, regulatory, clinical, and user-acceptance criteria. In this talk we will review several recent design and manufacturing efforts aimed at designing and building prototype wearable sensors. We will discuss unique opportunities and challenges provided by additive manufacturing, including 3D printing, to drive innovation through new designs, faster prototyping and manufacturing, distributed networks, and new ecosystems. We will also show alternative hybrid self-assembly based integration techniques for low cost large scale manufacturing of single use wearable devices. Coauthors: Prabhjot Singh and Jeffrey Ashe.
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Gan, Chee Kwan
2017-07-01
Phonon-mediated thermal conductivity, which is of great technological relevance, arises due fundamentally to anharmonic scattering from interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.
Tsamandouras, Nikolaos; Rostami-Hodjegan, Amin; Aarons, Leon
2015-01-01
Pharmacokinetic models range from being entirely exploratory and empirical, to semi-mechanistic and ultimately complex physiologically based pharmacokinetic (PBPK) models. This choice is conditional on the modelling purpose as well as the amount and quality of the available data. The main advantage of PBPK models is that they can be used to extrapolate outside the studied population and experimental conditions. The trade-off for this advantage is a complex system of differential equations with a considerable number of model parameters. When these parameters cannot be informed from in vitro or in silico experiments they are usually optimized with respect to observed clinical data. Parameter estimation in complex models is a challenging task associated with many methodological issues which are discussed here with specific recommendations. Concepts such as structural and practical identifiability are described with regards to PBPK modelling and the value of experimental design and sensitivity analyses is sketched out. Parameter estimation approaches are discussed, while we also highlight the importance of not neglecting the covariance structure between model parameters and the uncertainty and population variability that is associated with them. Finally the possibility of using model order reduction techniques and minimal semi-mechanistic models that retain the physiological-mechanistic nature only in the parts of the model which are relevant to the desired modelling purpose is emphasized. Careful attention to all the above issues allows us to integrate successfully information from in vitro or in silico experiments together with information deriving from observed clinical data and develop mechanistically sound models with clinical relevance. PMID:24033787
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to complexity of the controlled objects, the selection of control strategies and algorithms in vehicle control system designs is an important task. Moreover, the control problem of automobile active suspensions has been become one of the important relevant investigations due to the constrained peculiarity and parameter uncertainty of mathematical models. In this study, after establishing the non-stationary road surface excitation model, a study on the active suspension control for non-stationary running condition was conducted using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter car model with parameter uncertainty was derived. The H∞ state feedback control strategy with time-domain hard constraints was proposed, and then was used to design the active suspension control system of the quarter car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the proposed controller stability. Simulation results show that the proposed control strategy has high systemic stability on the condition of non-stationary running and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy can achieve a promising improvement on ride comfort and satisfy the requirements of dynamic suspension deflection, dynamic tire loads and required control forces within given constraints, as well as non-stationary running condition.
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously specifically designed for the use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by nudging schemes properly calibrated. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated to the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of Gulf of Trieste (northern Adriatic Sea), considered homogeneous from the point of view of hydrological properties, and forcing it by changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressure. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The information of the model result variability aimed at conveying efficiently and comprehensibly the information on the uncertainties/reliability of the model results to non-technical EBM planners and stakeholders, in order to have the model-based information effectively contributing to EBM.
Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat
NASA Technical Reports Server (NTRS)
Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Josh; Chodas, Paul W.; Englander, Jacob A.
2017-01-01
This paper presents a detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https:cneos.jpl.nasa.govpdcspdc17. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid (to characterize it and the nature of the threat it poses to Earth) and mitigation of the asteroid, via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters are varied to assess the sensitivity of the design outcome, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type. Different trajectory types are evaluated in the mission design process from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or low-thrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique method. The optimization process includes constrain parameters, such as Earth departure date, launch declination, spacecraft, asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from Sun and Earth, and Earth-spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is solar electric propelled rendezvous mission with a single spacecraft using NED for the deflection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalaria, P. C., E-mail: parth.kalaria@partner.kit.edu; Avramidis, K. A.; Franck, J.
High frequency (>230 GHz) megawatt-class gyrotrons are planned as RF sources for electron cyclotron resonance heating and current drive in DEMOnstration fusion power plants (DEMOs). In this paper, for the first time, a feasibility study of a 236 GHz DEMO gyrotron is presented by considering all relevant design goals and the possible technical limitations. A mode-selection procedure is proposed in order to satisfy the multi-frequency and frequency-step tunability requirements. An effective systematic design approach for the optimal design of a gradually tapered cavity is presented. The RF-behavior of the proposed cavity is verified rigorously, supporting 920 kW of stable output power withmore » an interaction efficiency of 36% including the considerations of realistic beam parameters.« less
Crosstalk cancellation on linearly and circularly polarized communications satellite links
NASA Technical Reports Server (NTRS)
Overstreet, W. P.; Bostian, C. W.
1979-01-01
The paper discusses the cancellation network approach for reducing crosstalk caused by depolarization on a dual-polarized communications satellite link. If the characteristics of rain depolarization are sufficiently well known, the cancellation network can be designed in a way that reduces system complexity, the most important parameter being the phase of the cross-polarized signal. Relevant theoretical calculations and experimental data are presented. The simplicity of the cancellation system proposed makes it ideal for use with small domestic or private earth terminals.
Martinez-Martin, Pablo; Deuschl, Günther
2007-04-30
Motor-related parameters are the standard outcome parameters for treatment interventions. Nonetheless, subjective appraisals about the consequences of treatment on health-related quality of life (HRQoL) are meanwhile established and may uncover important aspects of interventions. We have reviewed the literature with a defined search strategy and collected 61 clinical trials, which have used HRQoL as a planned outcome parameter. The articles were rated similarly as for the Task Force report of the Movement Disorder Society on interventions for Parkinson's disease (PD), but the relevant outcome parameter was HRQoL. We found that unilateral pallidotomy, deep brain stimulation of the subthalamic nucleus, and rasagiline are efficacious to improve the HRQoL of PD patients. For many other interventions, the efficacy to improve HRQoL in the PD setting cannot be considered to be proven so far. HRQoL should be part of future trial designs and more research is necessary to understand the determinants of QoL in PD.
NASA Astrophysics Data System (ADS)
Tosi, Luis Phillipe; Colonius, Tim; Lee, Hyeong Jae; Sherrit, Stewart; Jet Propulsion Laboratory Collaboration; California Institute of Technology Collaboration
2016-11-01
Aeroelastic flutter arises when the motion of a structure and its surrounding flowing fluid are coupled in a constructive manner, causing large amplitudes of vibration in the immersed solid. A cantilevered beam in axial flow within a nozzle-diffuser geometry exhibits interesting resonance behavior that presents good prospects for internal flow energy harvesting. Different modes can be excited as a function of throat velocity, nozzle geometry, fluid and cantilever material parameters. Similar behavior has been also observed in elastically mounted rigid plates, enabling new designs for such devices. This work explores the relationship between the aeroelastic flutter instability boundaries and relevant non-dimensional parameters via experiments, numerical, and stability analyses. Parameters explored consist of a non-dimensional stiffness, a non-dimensional mass, non-dimensional throat size, and Reynolds number. A map of the system response in this parameter space may serve as a guide to future work concerning possible electrical output and failure prediction in harvesting devices.
Sinharay, Arijit; Rakshit, Raj; Chakravarty, Tapas; Ghosh, Deb; Pal, Arpan
2017-01-01
Pulmonary ailments are conventionally diagnosed by spirometry. The complex forceful breathing maneuver as well as the extreme cost of spirometry renders it unsuitable in many situations. This work is aimed to facilitate an emerging direction of tidal breathing-based pulmonary evaluation by designing a novel, equitable, precise and portable device for acquisition and analysis of directional tidal breathing patterns, in real time. The proposed system primarily uses an in-house designed blow pipe, 40-kHz air-coupled ultrasound transreceivers, and a radio frequency (RF) phase-gain integrated circuit (IC). Moreover, in order to achieve high sensitivity in a cost-effective design philosophy, we have exploited the phase measurement technique, instead of selecting the contemporary time-of-flight (TOF) measurement; since application of the TOF principle in tidal breathing assessments requires sub-micro to nanosecond time resolution. This approach, which depends on accurate phase measurement, contributed to enhanced sensitivity using a simple electronics design. The developed system has been calibrated using a standard 3-L calibration syringe. The parameters of this system are validated against a standard spirometer, with maximum percentage error below 16%. Further, the extracted respiratory parameters related to tidal breathing have been found to be comparable with relevant prior works. The error in detecting respiration rate only is 3.9% compared to manual evaluation. These encouraging insights reveal the definite potential of our tidal breathing pattern (TBP) prototype for measuring tidal breathing parameters in order to extend the reach of affordable healthcare in rural regions and developing areas. PMID:28800103
Kuo, Chung-Feng Jeffrey; Wang, Hsing-Won; Hsiao, Shang-Wun; Peng, Kai-Ching; Chou, Ying-Liang; Lai, Chun-Yu; Hsu, Chien-Tung Max
2014-01-01
Physicians clinically use laryngeal video stroboscope as an auxiliary instrument to test glottal diseases, and read vocal fold images and voice quality for diagnosis. As the position of vocal fold varies in each person, the proportion of the vocal fold size as presented in the vocal fold image is different, making it impossible to directly estimate relevant glottis physiological parameters, such as the length, area, perimeter, and opening angle of the glottis. Hence, this study designs an innovative laser projection marking module for the laryngeal video stroboscope to provide reference parameters for image scaling conversion. This innovative laser projection marking module to be installed on the laryngeal video stroboscope using laser beams to project onto the glottis plane, in order to provide reference parameters for scaling conversion of images of laryngeal video stroboscope. Copyright © 2013 Elsevier Ltd. All rights reserved.
AST Combustion Workshop: Diagnostics Working Group Report
NASA Technical Reports Server (NTRS)
Locke, Randy J.; Hicks, Yolanda R.; Hanson, Ronald K.
1996-01-01
A workshop was convened under NASA's Advanced Subsonics Technologies (AST) Program. Many of the principal combustion diagnosticians from industry, academia, and government laboratories were assembled in the Diagnostics/Testing Subsection of this workshop to discuss the requirements and obstacles to the successful implementation of advanced diagnostic techniques to the test environment of the proposed AST combustor. The participants, who represented the major relevant areas of advanced diagnostic methods currently applied to combustion and related fields, first established the anticipated AST combustor flowfield conditions. Critical flow parameters were then examined and prioritized as to their importance to combustor/fuel injector design and manufacture, environmental concerns, and computational interests. Diagnostic techniques were then evaluated in terms of current status, merits and obstacles for each flow parameter. All evaluations are presented in tabular form and recommendations are made on the best-suited diagnostic method to implement for each flow parameter in order of applicability and intrinsic value.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
Davila, Marco L.; Brentjens, Renier; Wang, Xiuyan; Rivière, Isabelle; Sadelain, Michel
2012-01-01
Second-generation chimeric antigen receptors (CARs) are powerful tools to redirect antigen-specific T cells independently of HLA-restriction. Recent clinical studies evaluating CD19-targeted T cells in patients with B-cell malignancies demonstrate the potency of CAR-engineered T cells. With results from 28 subjects enrolled by five centers conducting studies in patients with chronic lymphocytic leukemia (CLL) or lymphoma, some insights into the parameters that determine T-cell function and clinical outcome of CAR-based approaches are emerging. These parameters involve CAR design, T-cell production methods, conditioning chemotherapy as well as patient selection. Here, we discuss the potential relevance of these findings and in particular the interplay between the adoptive transfer of T cells and pre-transfer patient conditioning. PMID:23264903
NASA Astrophysics Data System (ADS)
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many occasions related to batteries demand to know how much continuous and instantaneous power can batteries provide such as the rapidly developing electric vehicles. As the large-scale applications of lithium-ion batteries, lithium-ion batteries are used to be our research object. Many experiments are designed to get the lithium-ion battery parameters to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery called state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge(SOC), state-of-health(SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0
NASA Technical Reports Server (NTRS)
Paresce, Francesco (Editor)
1990-01-01
The Faint Object Camera (FOC) is a long focal ratio, photon counting device designed to take high resolution two dimensional images of areas of the sky up to 44 by 44 arcseconds squared in size, with pixel dimensions as small as 0.0007 by 0.0007 arcseconds squared in the 1150 to 6500 A wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.
Band structure analysis of a thin plate with periodic arrangements of slender beams
NASA Astrophysics Data System (ADS)
Serrano, Ó.; Zaera, R.; Fernández-Sáez, J.
2018-04-01
This work analyzes the wave propagation in structures composed of a periodic arrangement of vertical beams rigidly joined to a plate substrate. Three different configurations for the distribution of the beams have been analyzed: square, triangular, and hexagonal. A dimensional analysis of the problem indicates the presence of three dimensionless groups of parameters controlling the response of the system. The main features of the wave propagation have been found using numerical procedures based on the Finite Element Method, through the application of the Bloch's theorem for the corresponding primitive unit cells. Illustrative examples of the effect of the different dimensionless parameters on the dynamic behavior of the system are presented, providing information relevant for design.
Process characterization and Design Space definition.
Hakemeyer, Christian; McKnight, Nathan; St John, Rick; Meier, Steven; Trexler-Schmidt, Melody; Kelley, Brian; Zettl, Frank; Puskeiler, Robert; Kleinjans, Annika; Lim, Fred; Wurth, Christine
2016-09-01
Quality by design (QbD) is a global regulatory initiative with the goal of enhancing pharmaceutical development through the proactive design of pharmaceutical manufacturing process and controls to consistently deliver the intended performance of the product. The principles of pharmaceutical development relevant to QbD are described in the ICH guidance documents (ICHQ8-11). An integrated set of risk assessments and their related elements developed at Roche/Genentech were designed to provide an overview of product and process knowledge for the production of a recombinant monoclonal antibody (MAb). This chapter describes the tools used for the characterization and validation of MAb manufacturing process under the QbD paradigm. This comprises risk assessments for the identification of potential Critical Process Parameters (pCPPs), statistically designed experimental studies as well as studies assessing the linkage of the unit operations. Outcome of the studies is the classification of process parameters according to their criticality and the definition of appropriate acceptable ranges of operation. The process and product knowledge gained in these studies can lead to the approval of a Design Space. Additionally, the information gained in these studies are used to define the 'impact' which the manufacturing process can have on the variability of the CQAs, which is used to define the testing and monitoring strategy. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
Design of a Mechanical-Tunable Filter Spectrometer for Noninvasive Glucose Measurement
NASA Astrophysics Data System (ADS)
Saptari, Vidi; Youcef-Toumi, Kamal
2004-05-01
The development of an accurate and reliable noninvasive near-infrared (NIR) glucose sensor hinges on the success in addressing the sensitivity and the specificity problems associated with the weak glucose signals and the overlapping NIR spectra. Spectroscopic hardware parameters most relevant to noninvasive blood glucose measurement are discussed, which include the optical throughput, integration time, spectral range, and the spectral resolution. We propose a unique spectroscopic system using a continuously rotating interference filter, which produces a signal-to-noise ratio of the order of 10^5 and is estimated to be the minimum required for successful in vivo glucose sensing. Using a classical least-squares algorithm and a spectral range between 2180 and 2312 nm, we extracted clinically relevant glucose concentrations in multicomponent solutions containing bovine serum albumin, triacetin, lactate, and urea.
Manufacturing complexity analysis
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1977-01-01
The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
Design and development of a ceramic radial turbine for the AGT101
NASA Technical Reports Server (NTRS)
Finger, D. G.; Gupta, S. K.
1982-01-01
An acceptable and feasible ceramic turbine wheel design has been achieved, and the relevant temperature, stress, and success probability analyses are discussed. The design is described, the materials selection presented, and the engine cycle conditions analysis parameters shown. Measured MOR four-point strengths are indicated for room and elevated temperatures, and engine conditions are analyzed for various cycle states, materials, power states, turbine inlet temperatures, and speeds. An advanced gas turbine ceramic turbine rotor thermal and stress model is developed, and cumulative probability of survival is shown for first and third-year properties of SiC and Si3N4 rotors under different operating conditions, computed for both blade and hub regions. Temperature and stress distributions for steady-state and worst-case shutdown transients are depicted.
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Å pačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for quantum two-level system, qubit system, in presence of unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements based on known precision bounds; the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for relevant parameters can be attained asymptotically by using some fraction of given n quantum states to estimate the phase parameter. We also discuss the Holevo bound which can be attained asymptotically by a collective measurement.
Energy absorption capability and crashworthiness of composite material structures: A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carruthers, J.J.; Kettle, A.P.; Robinson, A.M.
1998-10-01
The controlled brittle failure of thermosetting fiber-reinforced polymer composites can provide a very efficient energy absorption mechanism. Consequently, the use of these materials in crashworthy vehicle designs has been the subject of considerable interest. In this respect, their more widespread application has been limited by the complexity of their collapse behavior. This article reviews the current level of understanding i this field, including the correlations between failure mode and energy absorption, the principal material, geometric, and physical parameters relevant to crashworthy design and methods of predicting the energy absorption capability of polymer composites. Areas which require further investigation are identified.more » This review article contains 70 references.« less
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can substantially be supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has shown to significantly reduce parameter uncertainties with minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the here proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions, where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
Decompression scenarios in a new underground transportation system.
Vernez, D
2000-10-01
The risks of a public exposure to a sudden decompression, until now, have been related to civil aviation and, at a lesser extent, to diving activities. However, engineers are currently planning the use of low pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban linking in Switzerland. The use of a low pressure environment in an underground public transportation system must be considered carefully regarding the decompression risks. Indeed, due to the enclosed environment, both decompression kinetics and safety measures may differ from aviation decompression cases. A theoretical study of decompression risks has been conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, has been implemented with flow processing software (Ithink 5.0). Simulations have been conducted in order to analyze "decompression scenarios" for a wide range of parameters, relevant in the context of the Swissmetro main study. Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations have also shown that reducing the free space around the vehicle may mitigate significantly an accidental decompression. Numeric simulations are relevant to assess decompression risks in the future Swissmetro system. The decompression model has proven to be useful in assisting both design choices and safety management.
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms, is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments that are limited by practical considerations to a few stressors and a few levels of these stressors to real world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environment conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
Radioisotope Power Systems Reference Book for Mission Designers and Planners
NASA Technical Reports Server (NTRS)
Lee, Young; Bairstow, Brian
2015-01-01
The RPS Program's Program Planning and Assessment (PPA) Office commissioned the Mission Analysis team to develop the Radioisotope Power Systems (RPS) Reference Book for Mission Planners and Designers to define a baseline of RPS technology capabilities with specific emphasis on performance parameters and technology readiness. The main objective of this book is to provide RPS technology information that could be utilized by future mission concept studies and concurrent engineering practices. A progress summary from the major branches of RPS technology research provides mission analysis teams with a vital tool for assessing the RPS trade space, and provides concurrent engineering centers with a consistent set of guidelines for RPS performance characteristics. This book will be iterated when substantial new information becomes available to ensure continued relevance, serving as one of the cornerstone products of the RPS PPA Office. This book updates the original 2011 internal document, using data from the relevant publicly released RPS technology references and consultations with RPS technologists. Each performance parameter and RPS product subsection has been reviewed and cleared by at least one subject matter representative. A virtual workshop was held to reach consensus on the scope and contents of the book, and the definitions and assumptions that should be used. The subject matter experts then reviewed and updated the appropriate sections of the book. The RPS Mission Analysis Team then performed further updates and crosschecked the book for consistency. Finally, a second virtual workshop was held to ensure all subject matter experts and stakeholders concurred on the contents.
NASA Technical Reports Server (NTRS)
Maynard, Nancy G.; Yurchak, Boris; Turi, Johan Mathis; Mathiesen, Svein D.; Aissi-Wespi, Rita L.
2004-01-01
As scientists and policy-makers from both indigenous and non-indigenous communities begin to build closer partnerships to address common sustainability issues such as the health impacts of climate change and anthropogenic activities, it becomes increasingly important to create shared information management systems which integrate all relevant factors for optimal information sharing and decision-making. This paper describes a new GIs-based system being designed to bring local and indigenous traditional knowledge together with scientific data and information, remote sensing, and information technologies to address health-related environment, weather, climate, pollution and land use change issues for improved decision/policy-making for reindeer husbandry. The system is building an easily-accessible archive of relevant current and historical, traditional, local and remotely-sensed and other data and observations for shared analysis, measuring, and monitoring parameters of interest. Protection of indigenous culturally sensitive information will be respected through appropriate data protocols. A mechanism which enables easy information sharing among all participants, which is real time and geo-referenced and which allows interconnectivity with remote sites is also being designed into the system for maximum communication among partners. A preliminary version of our system will be described for a Russian reindeer test site, which will include a combination of indigenous knowledge about local conditions and issues, remote sensing and ground-based data on such parameters as the vegetation state and distribution, snow cover, temperature, ice condition, and infrastructure.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-01-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784
Baldwin, Carryl L
2011-04-01
Matching the perceived urgency of an alert with the relative hazard level of the situation is critical for effective alarm response. Two experiments describe the impact of acoustic and semantic parameters on ratings of perceived urgency, annoyance and alerting effectiveness and on alarm response speed. Within a simulated driving context, participants rated and responded to collision avoidance system (CAS) messages spoken by a female or male voice (experiments 1 and 2, respectively). Results indicated greater perceived urgency and faster alarm response times as intensity increased from -2 dB signal to noise (S/N) ratio to +10 dB S/N, although annoyance ratings increased as well. CAS semantic content interacted with alarm intensity, indicating that at lower intensity levels participants paid more attention to the semantic content. Results indicate that both acoustic and semantic parameters independently and interactively impact CAS alert perceptions in divided attention conditions and this work can inform auditory alarm design for effective hazard matching. Matching the perceived urgency of an alert with the relative hazard level of the situation is critical for effective alarm response. Here, both acoustic and semantic parameters independently and interactively impacted CAS alert perceptions in divided attention conditions. This work can inform auditory alarm design for effective hazard matching. STATEMENT OF RELEVANCE: Results indicate that both acoustic parameters and semantic content can be used to design collision warnings with a range of urgency levels. Further, these results indicate that verbal warnings tailored to a specific hazard situation may improve hazard-matching capabilities without substantial trade-offs in perceived annoyance.
Hybrid wheat: quantitative genetic parameters and consequences for the design of breeding programs.
Longin, Carl Friedrich Horst; Gowda, Manje; Mühleisen, Jonathan; Ebmeyer, Erhard; Kazman, Ebrahim; Schachschneider, Ralf; Schacht, Johannes; Kirchhoff, Martin; Zhao, Yusheng; Reif, Jochen Christoph
2013-11-01
Commercial heterosis for grain yield is present in hybrid wheat but long-term competiveness of hybrid versus line breeding depends on the development of heterotic groups to improve hybrid prediction. Detailed knowledge of the amount of heterosis and quantitative genetic parameters are of paramount importance to assess the potential of hybrid breeding. Our objectives were to (1) examine the extent of midparent, better-parent and commercial heterosis in a vast population of 1,604 wheat (Triticum aestivum L.) hybrids and their parental elite inbred lines and (2) discuss the consequences of relevant quantitative parameters for the design of hybrid wheat breeding programs. Fifteen male lines were crossed in a factorial mating design with 120 female lines, resulting in 1,604 of the 1,800 potential single-cross hybrid combinations. The hybrids, their parents, and ten commercial wheat varieties were evaluated in multi-location field experiments for grain yield, plant height, heading time and susceptibility to frost, lodging, septoria tritici blotch, yellow rust, leaf rust, and powdery mildew at up to five locations. We observed that hybrids were superior to the mean of their parents for grain yield (10.7 %) and susceptibility to frost (-7.2 %), leaf rust (-8.4 %) and septoria tritici blotch (-9.3 %). Moreover, 69 hybrids significantly (P < 0.05) outyielded the best commercial inbred line variety underlining the potential of hybrid wheat breeding. The estimated quantitative genetic parameters suggest that the establishment of reciprocal recurrent selection programs is pivotal for a successful long-term hybrid wheat breeding.
Berniker, Max; Kording, Konrad P.
2011-01-01
Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574
The optimization of wireless power transmission: design and realization.
Jia, Zhiwei; Yan, Guozheng; Liu, Hua; Wang, Zhiwu; Jiang, Pingping; Shi, Yu
2012-09-01
A wireless power transmission system is regarded as a practical way of solving power-shortage problems in multifunctional active capsule endoscopes. The uniformity of magnetic flux density, frequency stability and orientation stability are used to evaluate power transmission stability, taking into consideration size and safety constraints. Magnetic field safety and temperature rise are also considered. Test benches are designed to measure the relevent parameters. Finally, a mathematical programming model in which these constraints are considered is proposed to improve transmission efficiency. To verify the feasibility of the proposed method, various systems for a wireless active capsule endoscope are designed and evaluated. The optimal power transmission system has the capability to supply continuously at least 500 mW of power with a transmission efficiency of 4.08%. The example validates the feasibility of the proposed method. Introduction of novel designs enables further improvement of this method. Copyright © 2012 John Wiley & Sons, Ltd.
Design principles for nuclease-deficient CRISPR-based transcriptional regulators.
Jensen, Michael K
2018-06-01
The engineering of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-CRISPR-associated proteins continues to expand the toolkit available for genome editing, reprogramming gene regulation, genome visualisation and epigenetic studies of living organisms. In this review, the emerging design principles on the use of nuclease-deficient CRISPR-based reprogramming of gene expression will be presented. The review will focus on the designs implemented in yeast both at the level of CRISPR proteins and guide RNA (gRNA), but will lend due credits to the seminal studies performed in other species where relevant. In addition to design principles, this review also highlights applications benefitting from the use of CRISPR-mediated transcriptional regulation and discusses the future directions to further expand the toolkit for nuclease-deficient reprogramming of genomes. As such, this review should be of general interest for experimentalists to get familiarised with the parameters underlying the power of reprogramming genomic functions by use of nuclease-deficient CRISPR technologies.
NASA Astrophysics Data System (ADS)
Rybus, Tomasz; Seweryn, Karol
2016-03-01
All devices designed to be used in space must be thoroughly tested in relevant conditions. For several classes of devices the reduced gravity conditions are the key factor. In early stages of development and later due to financial reasons, the tests need to be done on Earth. However, in Earth conditions it is impossible to obtain a different gravity field independent on all linear and rotational spatial coordinates. Therefore, various test-bed systems are used, with their design driven by the device's specific needs. One of such test-beds are planar air-bearing microgravity simulators. In such an approach, the tested objects (e.g., manipulators intended for on-orbit operations or vehicles simulating satellites in a close formation flight) are mounted on planar air-bearings that allow almost frictionless motion on a flat surface, thus simulating microgravity conditions in two dimensions. In this paper we present a comprehensive review of research activities related to planar air-bearing microgravity simulators, demonstrating achievements of the most active research groups and describing newest trends and ideas, such as tests of landing gears for low-g bodies. Major design parameters of air-bearing test-beds are also reviewed and a list of notable existing test-beds is presented.
Why the impact of mechanical stimuli on stem cells remains a challenge.
Goetzke, Roman; Sechi, Antonio; De Laporte, Laura; Neuss, Sabine; Wagner, Wolfgang
2018-05-04
Mechanical stimulation affects growth and differentiation of stem cells. This may be used to guide lineage-specific cell fate decisions and therefore opens fascinating opportunities for stem cell biology and regenerative medicine. Several studies demonstrated functional and molecular effects of mechanical stimulation but on first sight these results often appear to be inconsistent. Comparison of such studies is hampered by a multitude of relevant parameters that act in concert. There are notorious differences between species, cell types, and culture conditions. Furthermore, the utilized culture substrates have complex features, such as surface chemistry, elasticity, and topography. Cell culture substrates can vary from simple, flat materials to complex 3D scaffolds. Last but not least, mechanical forces can be applied with different frequency, amplitude, and strength. It is therefore a prerequisite to take all these parameters into consideration when ascribing their specific functional relevance-and to only modulate one parameter at the time if the relevance of this parameter is addressed. Such research questions can only be investigated by interdisciplinary cooperation. In this review, we focus particularly on mesenchymal stem cells and pluripotent stem cells to discuss relevant parameters that contribute to the kaleidoscope of mechanical stimulation of stem cells.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often, the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits, without missing analytes. The obtained parameter settings from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization lead to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
Zhang, Yu; Yang, Wei; Han, Dongsheng; Kim, Young-Il
2014-01-01
Environment monitoring is important for the safety of underground coal mine production, and it is also an important application of Wireless Sensor Networks (WSNs). We put forward an integrated environment monitoring system for underground coal mine, which uses the existing Cable Monitoring System (CMS) as the main body and the WSN with multi-parameter monitoring as the supplementary technique. As CMS techniques are mature, this paper mainly focuses on the WSN and the interconnection between the WSN and the CMS. In order to implement the WSN for underground coal mines, two work modes are designed: periodic inspection and interrupt service; the relevant supporting technologies, such as routing mechanism, collision avoidance, data aggregation, interconnection with the CMS, etc., are proposed and analyzed. As WSN nodes are limited in energy supply, calculation and processing power, an integrated network management scheme is designed in four aspects, i.e., topology management, location management, energy management and fault management. Experiments were carried out both in a laboratory and in a real underground coal mine. The test results indicate that the proposed integrated environment monitoring system for underground coal mines is feasible and all designs performed well as expected. PMID:25051037
Building an experimental model of the human body with non-physiological parameters.
Labuz, Joseph M; Moraes, Christopher; Mertz, David R; Leung, Brendan M; Takayama, Shuichi
2017-03-01
New advances in engineering and biomedical technology have enabled recent efforts to capture essential aspects of human physiology in microscale, in-vitro systems. The application of these advances to experimentally model complex processes in an integrated platform - commonly called a 'human-on-a-chip (HOC)' - requires that relevant compartments and parameters be sized correctly relative to each other and to the system as a whole. Empirical observation, theoretical treatments of resource distribution systems and natural experiments can all be used to inform rational design of such a system, but technical and fundamental challenges (e.g. small system blood volumes and context-dependent cell metabolism, respectively) pose substantial, unaddressed obstacles. Here, we put forth two fundamental principles for HOC design: inducing in-vivo -like cellular metabolic rates is necessary and may be accomplished in-vitro by limiting O 2 availability and that the effects of increased blood volumes on drug concentration can be mitigated through pharmacokinetics-based treatments of solute distribution. Combining these principles with natural observation and engineering workarounds, we derive a complete set of design criteria for a practically realizable, physiologically faithful, five-organ millionth-scale (× 10 -6 ) microfluidic model of the human body.
Building an experimental model of the human body with non-physiological parameters
Labuz, Joseph M.; Moraes, Christopher; Mertz, David R.; Leung, Brendan M.; Takayama, Shuichi
2017-01-01
New advances in engineering and biomedical technology have enabled recent efforts to capture essential aspects of human physiology in microscale, in-vitro systems. The application of these advances to experimentally model complex processes in an integrated platform — commonly called a ‘human-on-a-chip (HOC)’ — requires that relevant compartments and parameters be sized correctly relative to each other and to the system as a whole. Empirical observation, theoretical treatments of resource distribution systems and natural experiments can all be used to inform rational design of such a system, but technical and fundamental challenges (e.g. small system blood volumes and context-dependent cell metabolism, respectively) pose substantial, unaddressed obstacles. Here, we put forth two fundamental principles for HOC design: inducing in-vivo-like cellular metabolic rates is necessary and may be accomplished in-vitro by limiting O2 availability and that the effects of increased blood volumes on drug concentration can be mitigated through pharmacokinetics-based treatments of solute distribution. Combining these principles with natural observation and engineering workarounds, we derive a complete set of design criteria for a practically realizable, physiologically faithful, five-organ millionth-scale (× 10−6) microfluidic model of the human body. PMID:28713851
Ni, Ai; Cai, Jianwen
2018-07-01
Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevent information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection methods. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
Zhang, Yu; Yang, Wei; Han, Dongsheng; Kim, Young-Il
2014-07-21
Environment monitoring is important for the safety of underground coal mine production, and it is also an important application of Wireless Sensor Networks (WSNs). We put forward an integrated environment monitoring system for underground coal mine, which uses the existing Cable Monitoring System (CMS) as the main body and the WSN with multi-parameter monitoring as the supplementary technique. As CMS techniques are mature, this paper mainly focuses on the WSN and the interconnection between the WSN and the CMS. In order to implement the WSN for underground coal mines, two work modes are designed: periodic inspection and interrupt service; the relevant supporting technologies, such as routing mechanism, collision avoidance, data aggregation, interconnection with the CMS, etc., are proposed and analyzed. As WSN nodes are limited in energy supply, calculation and processing power, an integrated network management scheme is designed in four aspects, i.e., topology management, location management, energy management and fault management. Experiments were carried out both in a laboratory and in a real underground coal mine. The test results indicate that the proposed integrated environment monitoring system for underground coal mines is feasible and all designs performed well as expected.
Molecular system identification for enzyme directed evolution and design
NASA Astrophysics Data System (ADS)
Guan, Xiangying; Chakrabarti, Raj
2017-09-01
The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.
Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat
NASA Technical Reports Server (NTRS)
Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Joshua; Chodas, Paul W.; Englander, Jacob A.
2017-01-01
This paper presents a detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https://cneos.jpl.nasa.gov/ pd/cs/pdc17/. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid (to characterize it and the nature of the threat it poses to Earth) and mitigation of the asteroid, via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters are varied to assess the sensitivity of the design outcome, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type. Different trajectory types are evaluated in the mission design process from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or lowthrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique method. The optimization process includes constrain parameters, such as Earth departure date, launch declination, spacecraft/asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from Sun and Earth, and Earth/spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is solar electric propelled rendezvous mission with a single spacecraft using NED for the deflection
An applied methodology for assessment of the sustainability of biomass district heating systems
NASA Astrophysics Data System (ADS)
Vallios, Ioannis; Tsoutsos, Theocharis; Papadakis, George
2016-03-01
In order to maximise the share of biomass in the energy supplying system, the designers should adopt the appropriate changes to the traditional systems and become more familiar with the design details of the biomass heating systems. The aim of this study is to present the development of methodology and its associated implementation in software that is useful for the design of biomass thermal conversion systems linked with district heating (DH) systems, taking into consideration the types of building structures and urban settlement layout around the plant. The methodology is based on a completely parametric logic, providing an impact assessment of variations in one or more technical and/or economic parameters and thus, facilitating a quick conclusion on the viability of this particular energy system. The essential energy parameters are presented and discussed for the design of biomass power and heat production system which are in connection with DH network, as well as for its environmental and economic evaluation (i.e. selectivity and viability of the relevant investment). Emphasis has been placed upon the technical parameters of biomass logistics, energy system's design, the economic details of the selected technology (integrated cogeneration combined cycle or direct combustion boiler), the DH network and peripheral equipment (thermal substations) and the greenhouse gas emissions. The purpose of this implementation is the assessment of the pertinent investment financial viability taking into account the available biomass feedstock, the economical and market conditions, and the capital/operating costs. As long as biomass resources (forest wood and cultivation products) are available and close to the settlement, disposal and transportation costs of biomass, remain low assuring the sustainability of such energy systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriger, A.
1978-01-31
This report is a part of the interim report documentation for the Global Spent Fuel Logistics System (GSFLS) study. The technical and financial considerations underlying a global spent fuel logistics systems have been studied and are reported. The Pacific Basin is used as a model throughout this report; however the stated methodology and, in many cases, considerations and conclusions are applicable to other global regions. Spent fuel discharge profiles for Pacific Basin Countries were used to determine the technical systems requirements for alternative concepts. Functional analyses and flows were generated to define both system design requirements and logistics parameters. Amore » technology review was made to ascertain the state-of-the-art of relevant GSFLS technical systems. Modular GSFLS facility designs were developed using the information generated from the functional analysis and technology review. The modular facility designs were used as a basis for siting and cost estimates for various GSFLS alternatives. Various GSFLS concepts were analyzed from a financial and economic perspective in order to provide total concepts costs and ascertain financial and economic sensitivities to key GSFLS variations. Results of the study include quantification of GSFLS facility and hardware requirements; drawings of relevant GSFLS facility designs; system cost estimates; financial reports - including user service charges; and comparative analyses of various GSFLS alternatives.« less
Clinical studies in restorative dentistry: New directions and new demands.
Opdam, N J M; Collares, K; Hickel, R; Bayne, S C; Loomans, B A; Cenci, M S; Lynch, C D; Correa, M B; Demarco, F; Schwendicke, F; Wilson, N H F
2018-01-01
Clinical research of restorative materials is confounded by problems of study designs, length of trials, type of information collected, and costs for trials, despite increasing numbers and considerable development of trials during the past 50 years. This opinion paper aims to discuss advantages and disadvantages of different study designs and outcomes for evaluating survival of dental restorations and to make recommendations for future study designs. Advantages and disadvantages of randomized trials, prospective and retrospective longitudinal studies, practice-based, pragmatic and cohort studies are addressed and discussed. The recommendations of the paper are that clinical trials should have rational control groups, include confounders such as patient risk factors in the data and analysis and should use outcome parameters relevant for profession and patients. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Taylor, Ulrike; Rehbock, Christoph; Streich, Carmen; Rath, Detlef; Barcikowski, Stephan
2014-09-01
Many studies have evaluated the toxicity of gold nanoparticles, although reliable predictions based on these results are rare. In order to overcome this problem, this article highlights strategies to improve comparability and standardization of nanotoxicological studies. To this end, it is proposed that we should adapt the nanomaterial to the addressed exposure scenario, using ligand-free nanoparticle references in order to differentiate ligand effects from size effects. Furthermore, surface-weighted particle dosing referenced to the biologically relevant parameter (e.g., cell number or organ mass) is proposed as the gold standard. In addition, it is recommended that we should shift the focus of toxicological experiments from 'live-dead' assays to the assessment of cell function, as this strategy allows observation of bioresponses at lower doses that are more relevant for in vivo scenarios.
Plant and animal accommodation for Space Station Laboratory
NASA Technical Reports Server (NTRS)
Olson, Richard L.; Gustan, Edith A.; Wiley, Lowell F.
1986-01-01
An extended study has been conducted with the goals of defining and analyzing relevant parameters and significant tradeoffs for the accommodation of nonhuman research aboard the NASA Space Station, as well as conducting tradeoff analyses for orbital reconfiguring or reoutfitting of the laboratory facility and developing laboratory designs and program plans. The two items exerting the greatest influence on nonhuman life sciences research were identified as the centrifuge and the specimen environmental control and life support system; both should be installed on the ground rather than in orbit.
NASA Astrophysics Data System (ADS)
Virtanen, P.; Vischi, F.; Strambini, E.; Carrega, M.; Giazotto, F.
2017-12-01
We discuss the quasiparticle entropy and heat capacity of a dirty superconductor/normal metal/superconductor junction. In the case of short junctions, the inverse proximity effect extending in the superconducting banks plays a crucial role in determining the thermodynamic quantities. In this case, commonly used approximations can violate thermodynamic relations between supercurrent and quasiparticle entropy. We provide analytical and numerical results as a function of different geometrical parameters. Quantitative estimates for the heat capacity can be relevant for the design of caloritronic devices or radiation sensor applications.
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.
Joshi, Amitabh; Vidya, T. N. C.
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species. PMID:28306735
Use of Semantic Technology to Create Curated Data Albums
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin
2014-01-01
One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discover tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out nonrelevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.
Privacy by design in personal health monitoring.
Nordgren, Anders
2015-06-01
The concept of privacy by design is becoming increasingly popular among regulators of information and communications technologies. This paper aims at analysing and discussing the ethical implications of this concept for personal health monitoring. I assume a privacy theory of restricted access and limited control. On the basis of this theory, I suggest a version of the concept of privacy by design that constitutes a middle road between what I call broad privacy by design and narrow privacy by design. The key feature of this approach is that it attempts to balance automated privacy protection and autonomously chosen privacy protection in a way that is context-sensitive. In personal health monitoring, this approach implies that in some contexts like medication assistance and monitoring of specific health parameters one single automatic option is legitimate, while in some other contexts, for example monitoring in which relatives are receivers of health-relevant information rather than health care professionals, a multi-choice approach stressing autonomy is warranted.
Anthropometric measurements in Iranian men.
Gharehdaghi, Jaber; Baazm, Maryam; Ghadipasha, Masoud; Solhi, Sadra; Toutounchian, Farhoud
2018-01-01
There is inevitable need for data regarding anthropometric measurements of each community's population. These anthropometric data have various applications, including health assessment, industrial designing, plastic & orthopedic surgery, nutritional studies, anatomical studies and forensic medicine investigations. Anthropometric parameters vary from race to race throughout the world, hence providing an anthropometric profile model of residents of different geographic regions seems to be necessary. To our knowledge, there is no report of bone parameters of the Iranian population. The present study was carried out to provide data on anthropomorphic bone parameters of the Iranian population, as a basis for future relevant studies. We calculated most of the known anthropometric parameters including skull, mandible, clavicle, scapula, humerus, radius, ulna, sacrum, hip, femur, tibia and fibula of 225 male corpses during a period of 2 years (2014-2016). Data expression was done as mean ± standard deviation. The results consist the first documented report on anthropometric bone measurement profile of Iranian male population, that can be considered a valuable source of data for future research on Iranian population in this regard. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Ulaczyk, Jan; Morawiec, Krzysztof; Zabierowski, Paweł; Drobiazg, Tomasz; Barreau, Nicolas
2017-09-01
A data mining approach is proposed as a useful tool for the control parameters analysis of the 3-stage CIGSe photovoltaic cell production process, in order to find variables that are the most relevant for cell electric parameters and efficiency. The analysed data set consists of stage duration times, heater power values as well as temperatures for the element sources and the substrate - there are 14 variables per sample in total. The most relevant variables of the process have been found based on the so-called random forest analysis with the application of the Boruta algorithm. 118 CIGSe samples, prepared at Institut des Matériaux Jean Rouxel, were analysed. The results are close to experimental knowledge on the CIGSe cells production process. They bring new evidence to production parameters of new cells and further research. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sleep mechanisms: Sleep deprivation and detection of changing levels of consciousness
NASA Technical Reports Server (NTRS)
Dement, W. C.; Barchas, J. D.
1972-01-01
An attempt was made to obtain information relevant to assessing the need to sleep and make up for lost sleep. Physiological and behavioral parameters were used as measuring parameters. Sleep deprivation in a restricted environment, derivation of data relevant to determining sleepiness from EEG, and the development of the Sanford Sleepiness Scale were discussed.
Shamout, Farah E; Pouliopoulos, Antonios N; Lee, Patrizia; Bonaccorsi, Simone; Towhidi, Leila; Krams, Rob; Choi, James J
2015-09-01
Sonoporation has been associated with drug delivery across cell membranes and into target cells, yet several limitations have prohibited further advancement of this technology. Higher delivery rates were associated with increased cellular death, thus implying a safety-efficacy trade-off. Meanwhile, there has been no reported study of safe in vitro sonoporation in a physiologically relevant flow environment. The objective of our study was not only to evaluate sonoporation under physiologically relevant flow conditions, such as fluid velocity, shear stress and temperature, but also to design ultrasound parameters that exploit the presence of flow to maximize sonoporation efficacy while minimizing or avoiding cellular damage. Human umbilical vein endothelial cells (EA.hy926) were seeded in flow chambers as a monolayer to mimic the endothelium. A peristaltic pump maintained a constant fluid velocity of 12.5 cm/s. A focused 0.5 MHz transducer was used to sonicate the cells, while an inserted focused 7.5 MHz passive cavitation detector monitored microbubble-seeded cavitation emissions. Under these conditions, propidium iodide, which is normally impermeable to the cell membrane, was traced to determine whether it could enter cells after sonication. Meanwhile, calcein-AM was used as a cell viability marker. A range of focused ultrasound parameters was explored, with several unique bioeffects observed: cell detachment, preservation of cell viability with no membrane penetration, cell death and preservation of cell viability with sonoporation. The parameters were then modified further to produce safe sonoporation with minimal cell death. To increase the number of favourable cavitation events, we lowered the ultrasound exposure pressure to 40 kPapk-neg and increased the number of cavitation nuclei by 50 times to produce a trans-membrane delivery rate of 62.6% ± 4.3% with a cell viability of 95% ± 4.2%. Furthermore, acoustic cavitation analysis showed that the low pressure sonication produced stable and non-inertial cavitation throughout the pulse sequence. To our knowledge, this is the first study to demonstrate a high drug delivery rate coupled with high cell viability in a physiologically relevant in vitro flow system. Copyright © 2015. Published by Elsevier Inc.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
Portable kit for the assessment of gait parameters in daily telerehabilitation.
Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Grigioni, Mauro
2013-03-01
When designing a complete process of daily telerehabilitation, it should be borne in mind that patients should be furnished with properly designed methodologies for executing specific motion tasks and the assessment of the relevant parameters. In general, such a process should comprehend three basic elements in both the hospital and the home: (a) instrumented walkways, (b) walking aids or supports, and (c) equipment for the assessment of parameters. The objective, with gait being the focus, of this study was thus to design a simple, portable kit-as an alternative to the complex and expensive instruments currently used-to be easily interfaced or integrated with the instrumented walkways and aids/supports both for self-monitoring while patients are exercising with their own aids and for clinical reporting. The proposed system is a portable kit that furnishes useful parameters with feedback to both the patient and the trainer/therapist. Capable of being integrated with the most common mechanical tools used in motion rehabilitation (handrail, scales, walkways, etc.), it constantly monitors and quantitatively assesses progress in rehabilitation care. It is composed of one step counter, photo-emitter detectors, one central unit for collecting and processing the telemetrically transmitted data, and a software interface. The system has been successfully validated on 16 subjects at the second level of the Tinetti test in a clinical application for both home and the hospital. The portable kit can be used with different rehabilitation tools and on varying ground rugosity. Advantages include (a) very low cost, when compared with optoelectronic solutions or other portable devices, (b) very high accuracy, also for subjects with imbalance problems, compared with other commercial solutions, and (c) integration (compatibility) with any rehabilitative tool.
Effects of railway track design on the expected degradation: Parametric study on energy dissipation
NASA Astrophysics Data System (ADS)
Sadri, Mehran; Steenbergen, Michaël
2018-04-01
This paper studies the effect of railway track design parameters on the expected long-term degradation of track geometry. The study assumes a geometrically perfect and straight track along with spatial invariability, except for the presence of discrete sleepers. A frequency-domain two-layer model is used of a discretely supported rail coupled with a moving unsprung mass. The susceptibility of the track to degradation is objectively quantified by calculating the mechanical energy dissipated in the substructure under a moving train axle for variations of different track parameters. Results show that, apart from the operational train speed, the ballast/substructure stiffness is the most significant parameter influencing energy dissipation. Generally, the degradation increases with the train speed and with softer substructures. However, stiff subgrades appear more sensitive to particular train velocities, in a regime which is mostly relevant for conventional trains (100-200 km/h) and less for high-speed operation, where a stiff subgrade is always favorable and can reduce the sensitivity to degradation substantially, with roughly a factor up to 7. Also railpad stiffness, sleeper distance and rail cross-sectional properties are found to have considerable effect, with higher expected degradation rates for increasing railpad stiffness, increasing sleeper distance and decreasing rail profile bending stiffness. Unsprung vehicle mass and sleeper mass have no significant influence, however, only against the background of the assumption of an idealized (invariant and straight) track. Apart from dissipated mechanical energy, the suitability of the dynamic track stiffness is explored as an engineering parameter to assess the sensitivity to degradation. It is found that this quantity is inappropriate to assess the design of an idealized track.
MAPA: an interactive accelerator design code with GUI
NASA Astrophysics Data System (ADS)
Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.
1999-06-01
The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.
Incentive Control Strategies for Decision Problems with Parametric Uncertainties
NASA Astrophysics Data System (ADS)
Cansever, Derya H.
The central theme of this thesis is the design of incentive control policies in large scale systems with hierarchical decision structures, under the stipulation that the objective functionals of the agents at the lower level of the hierarchy are uncertain to the top-level controller (the leader). These uncertainties are modeled as a finite -dimensional parameter vector whose exact value constitutes private information to the relevant agent at the lower level. The approach we have adopted is to design incentive policies for the leader such that the dependence of the decision of the agents on the uncertain parameter is minimized. We have identified several classes of problems for which this approach is feasible. In particular, we have constructed policies whose performance is arbitrarily close to the solution of a version of the same problem that does not involve uncertainties. We have also shown that for a certain class of problem wherein the leader observes a linear combination of the agents' decisions, the leader can achieve the performance he would obtain if he had observed each decision separately.
Linear modeling of human hand-arm dynamics relevant to right-angle torque tool interaction.
Ay, Haluk; Sommerich, Carolyn M; Luscher, Anthony F
2013-10-01
A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use. Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury. A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment. A second-order hand-arm model with parameters extracted in the time domain met model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture. Protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles tool use, and calculation procedures demonstrate better performance of parameter extraction using time domain system identification methods versus frequency domain. Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of working processes parameters optimization in air conditioning systems is proved in the work. The research is performed with the use of the simulation modeling method. The parameters optimization criteria are considered, the analysis of target functions is given while the key factors of technical and economic optimization are considered in the article. The search for the optimal solution at multi-purpose optimization of the system is made by finding out the minimum of the dual-target vector created by the Pareto method of linear and weight compromises from target functions of the total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The research results show that the values of technical and economic parameters of air conditioning systems in the areas relating to the optimum solutions’ areas manifest considerable deviations from the minimum values. At the same time, the tendencies for significant growth in deviations take place at removal of technical parameters from the optimal values of both the capital investments and operating costs. The production and operation of conditioners with the parameters which are considerably deviating from the optimal values will lead to the increase of material and power costs. The research allows one to establish the borders of the area of the optimal values for technical and economic parameters at air conditioning systems’ design.
Low-level radioactive waste technology: a selected, annotated bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fore, C.S.; Vaughan, N.D.; Hyder, L.K.
1980-10-01
This annotated bibliography of 447 references contains scientific, technical, economic, and regulatory information relevant to low-level radioactive waste technology. The bibliography focuses on environmental transport, disposal site, and waste treatment studies. The publication covers both domestic and foreign literature for the period 1952 to 1979. Major chapters selected are Chemical and Physical Aspects; Container Design and Performance; Disposal Site; Environmental Transport; General Studies and Reviews; Geology, Hydrology and Site Resources; Regulatory and Economic Aspects; Transportation Technology; Waste Production; and Waste Treatment. Specialized data fields have been incorporated into the data file to improve the ease and accuracy of locating pertinentmore » references. Specific radionuclides for which data are presented are listed in the Measured Radionuclides field, and specific parameters which affect the migration of these radionuclides are presented in the Measured Parameters field. In addition, each document referenced in this bibliography has been assigned a relevance number to facilitate sorting the documents according to their pertinence to low-level radioactive waste technology. The documents are rated 1, 2, 3, or 4, with 1 indicating direct applicability to low-level radioactive waste technology and 4 indicating that a considerable amount of interpretation is required for the information presented to be applied. The references within each chapter are arranged alphabetically by leading author, corporate affiliation, or title of the document. Indexes are provide for (1) author(s), (2) keywords, (3) subject category, (4) title, (5) geographic location, (6) measured parameters, (7) measured radionuclides, and (8) publication description.« less
Design of relative trajectories for in orbit proximity operations
NASA Astrophysics Data System (ADS)
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2018-04-01
This paper presents an innovative approach to design relative trajectories suitable for close-proximity operations in orbit, by assigning high-level constraints regarding their stability, shape and orientation. Specifically, this work is relevant to space mission scenarios, e.g. formation flying, on-orbit servicing, and active debris removal, which involve either the presence of two spacecraft carrying out coordinated maneuvers, or a servicing/recovery spacecraft (chaser) performing monitoring, rendezvous and docking with respect to another space object (target). In the above-mentioned scenarios, an important aspect is the capability of reducing collision risks and of providing robust and accurate relative navigation solutions. To this aim, the proposed approach exploits a relative motion model relevant to two-satellite formations, and developed in mean orbit parameters, which takes the perturbation effect due to secular Earth oblateness, as well as the motion of the target along a small-eccentricity orbit, into account. This model is used to design trajectories which ensure safe relative motion, to minimize collision risks and relax control requirements, providing at the same time favorable conditions, in terms of target-chaser relative observation geometry for pose determination and relative navigation with passive or active electro-optical sensors on board the chaser. Specifically, three design strategies are proposed in the context of a space target monitoring scenario, considering as design cases both operational spacecraft and debris, characterized by highly variable shape, size and absolute rotational dynamics. The effectiveness of the proposed design approach in providing favorable observation conditions for target-chaser relative pose estimation is demonstrated within a simulation environment which reproduces the designed target-chaser relative trajectory, the operation of an active LIDAR installed on board the chaser, and pose estimation algorithms.
Studying Spacecraft Charging via Numerical Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Moulton, D.; Meierbachtol, C.; Svyatskiy, D.; Vernon, L.
2015-12-01
The electrical charging of spacecraft due to bombarding charged particles can affect their performance and operation. We study this charging using CPIC; a particle-in-cell code specifically designed for studying plasma-material interactions [1]. CPIC is based on multi-block curvilinear meshes, resulting in near-optimal computational performance while maintaining geometric accuracy. Relevant plasma parameters are imported from the SHIELDS framework (currently under development at LANL), which simulates geomagnetic storms and substorms in the Earth's magnetosphere. Simulated spacecraft charging results of representative Van Allen Probe geometries using these plasma parameters will be presented, along with an overview of the code. [1] G.L. Delzanno, E. Camporeale, J.D. Moulton, J.E. Borovsky, E.A. MacDonald, and M.F. Thomsen, "CPIC: A Curvilinear Particle-In-Cell Code for Plasma-Material Interaction Studies," IEEE Trans. Plas. Sci., 41 (12), 3577 (2013).
Cross-section analysis of the Magnum-PSI plasma beam using a 2D multi-probe system
NASA Astrophysics Data System (ADS)
Costin, C.; Anita, V.; Ghiorghiu, F.; Popa, G.; De Temmerman, G.; van den Berg, M. A.; Scholten, J.; Brons, S.
2015-02-01
The linear plasma generator Magnum-PSI was designed for the study of plasma-surface interactions under relevant conditions of fusion devices. A key factor for such studies is the knowledge of a set of parameters that characterize the plasma interacting with the solid surface. This paper reports on the electrical diagnosis of the plasma beam in Magnum-PSI using a multi-probe system consisting of 64 probes arranged in a 2D square matrix. Cross-section distributions of floating potential and ion current intensity were registered for a hydrogen plasma beam under various discharge currents (80-175 A) and magnetic field strengths (0.47-1.41 T in the middle of the coils). Probe measurements revealed a high level of flexibility of plasma beam parameters with respect to the operating conditions.
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
NASA Astrophysics Data System (ADS)
Buligin, Y. I.; Zharkova, M. G.; Alexeenko, L. N.
2017-01-01
In previous studies, experiments were carried out on the small-size models of cyclonic units, but now there completed the semi-industrial pilot plant ≪Cyclone≫, which would allow comparative testing of real samples of different shaped centrifugal dust-collectors and compare their efficiency. This original research plant is patented by authors. The aim of the study is to improve efficiency of exhaust gases collecting process, by creating improved designs of centrifugal dust collectors, providing for the possibility of regulation constructive parameters depending on the properties and characteristics of air-fuel field. The objectives of the study include identifying and studying the cyclonic apparatus association constructive parameters with their aerodynamic characteristics and dust-collecting efficiency. The article is very relevant, especially for future practical application of its results in dust removal technology.
Eichler, Marko; Römer, Robert; Grodrian, Andreas; Lemke, Karen; Nagel, Krees; Klages, Claus‐Peter; Gastrock, Gunter
2017-01-01
Abstract Although the great potential of droplet based microfluidic technologies for routine applications in industry and academia has been successfully demonstrated over the past years, its inherent potential is not fully exploited till now. Especially regarding to the droplet generation reproducibility and stability, two pivotally important parameters for successful applications, there is still a need for improvement. This is even more considerable when droplets are created to investigate tissue fragments or cell cultures (e.g. suspended cells or 3D cell cultures) over days or even weeks. In this study we present microfluidic chips composed of a plasma coated polymer, which allow surfactants‐free, highly reproducible and stable droplet generation from fluids like cell culture media. We demonstrate how different microfluidic designs and different flow rates (and flow rate ratios) affect the reproducibility of the droplet generation process and display the applicability for a wide variety of bio(techno)logically relevant media. PMID:29399017
Hall Thruster Thermal Modeling and Test Data Correlation
NASA Technical Reports Server (NTRS)
Myers, James; Kamhawi, Hani; Yim, John; Clayman, Lauren
2016-01-01
The life of Hall Effect thrusters are primarily limited by plasma erosion and thermal related failures. NASA Glenn Research Center (GRC) in cooperation with the Jet Propulsion Laboratory (JPL) have recently completed development of a Hall thruster with specific emphasis to mitigate these limitations. Extending the operational life of Hall thursters makes them more suitable for some of NASA's longer duration interplanetary missions. This paper documents the thermal model development, refinement and correlation of results with thruster test data. Correlation was achieved by minimizing uncertainties in model input and recognizing the relevant parameters for effective model tuning. Throughout the thruster design phase the model was used to evaluate design options and systematically reduce component temperatures. Hall thrusters are inherently complex assemblies of high temperature components relying on internal conduction and external radiation for heat dispersion and rejection. System solutions are necessary in most cases to fully assess the benefits and/or consequences of any potential design change. Thermal model correlation is critical since thruster operational parameters can push some components/materials beyond their temperature limits. This thruster incorporates a state-of-the-art magnetic shielding system to reduce plasma erosion and to a lesser extend power/heat deposition. Additionally a comprehensive thermal design strategy was employed to reduce temperatures of critical thruster components (primarily the magnet coils and the discharge channel). Long term wear testing is currently underway to assess the effectiveness of these systems and consequently thruster longevity.
Papantoniou Ir, Ioannis; Chai, Yoke Chin; Luyten, Frank P; Schrooten Ir, Jan
2013-08-01
The incorporation of Quality-by-Design (QbD) principles in tissue-engineering bioprocess development toward clinical use will ensure that manufactured constructs possess prerequisite quality characteristics addressing emerging regulatory requirements and ensuring the functional in vivo behavior. In this work, the QbD principles were applied on a manufacturing process step for the in vitro production of osteogenic three-dimensional (3D) hybrid scaffolds that involves cell matrix deposition on a 3D titanium (Ti) alloy scaffold. An osteogenic cell source (human periosteum-derived cells) cultured in a bioinstructive medium was used to functionalize regular Ti scaffolds in a perfusion bioreactor, resulting in an osteogenic hybrid carrier. A two-level three-factor fractional factorial design of experiments was employed to explore a range of production-relevant process conditions by simultaneously changing value levels of the following parameters: flow rate (0.5-2 mL/min), cell culture duration (7-21 days), and cell-seeding density (1.5×10(3)-3×10(3) cells/cm(2)). This approach allowed to evaluate the individual impact of the aforementioned process parameters upon key quality attributes of the produced hybrids, such as collagen production, mineralization level, and cell number. The use of a fractional factorial design approach helped create a design space in which hybrid scaffolds of predefined quality attributes may be robustly manufactured while minimizing the number of required experiments.
A review of bias flow liners for acoustic damping in gas turbine combustors
NASA Astrophysics Data System (ADS)
Lahiri, C.; Bake, F.
2017-07-01
The optimized design of bias flow liner is a key element for the development of low emission combustion systems in modern gas turbines and aero-engines. The research of bias flow liners has a fairly long history concerning both the parameter dependencies as well as the methods to model the acoustic behaviour of bias flow liners under the variety of different bias and grazing flow conditions. In order to establish an overview over the state of the art, this paper provides a comprehensive review about the published research on bias flow liners and modelling approaches with an extensive study of the most relevant parameters determining the acoustic behaviour of these liners. The paper starts with a historical description of available investigations aiming on the characterization of the bias flow absorption principle. This chronological compendium is extended by the recent and ongoing developments in this field. In a next step the fundamental acoustic property of bias flow liner in terms of the wall impedance is introduced and the different derivations and formulations of this impedance yielding the different published model descriptions are explained and compared. Finally, a parametric study reveals the most relevant parameters for the acoustic damping behaviour of bias flow liners and how this is reflected by the various model representations. Although the general trend of the investigated acoustic behaviour is captured by the different models fairly well for a certain range of parameters, in the transition region between the resonance dominated and the purely bias flow related regime all models lack the correct damping prediction. This seems to be connected to the proper implementation of the reactance as a function of bias flow Mach number.
Ren, Chong; McGrath, Colman; Jin, Lijian; Zhang, Chengfei; Yang, Yanqi
2016-09-01
This study aimed to systematically assess the parameter-specific effects of the diode low-level laser on human gingival fibroblasts (HGFs) and human periodontal ligament fibroblasts (HPDLFs). An extensive search was performed in major electronic databases including PubMed (1997), EMBASE (1947) and Web of Science (1956) and supplemented by hand search of reference lists and relevant laser journals for cell culture studies investigating the effect of diode low-level lasers on HGFs and HPDLFs published from January 1995 to December 2015. A total of 21 studies were included after screening 324 independent records, amongst which eight targeted HPDLFs and 13 focussed on HGFs. The diode low-level laser showed positive effects on promoting fibroblast proliferation and osteogenic differentiation and modulating cellular inflammation via changes in gene expression and the release of growth factors, bone-remodelling markers or inflammatory mediators in a parameter-dependent manner. Repeated irradiations with wavelengths in the red and near-infrared range and at an energy density below 16 J/cm(2) elicited favourable responses. However, considerable variations and weaknesses in the study designs and laser protocols limited the interstudy comparison and clinical transition. Current evidence showed that diode low-level lasers with adequate parameters stimulated the proliferation and modulated the inflammation of fibroblasts derived from human periodontal tissue. However, further in vitro studies with better designs and more appropriate study models and laser parameters are anticipated to provide sound evidence for clinical studies and practice.
Extending cluster Lot Quality Assurance Sampling designs for surveillance programs
Hund, Lauren; Pagano, Marcello
2014-01-01
Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and under limited variability in load location it was possible to design an optimal for expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi design computational analyses to decide upon the most optimal plan for a clinical case. PMID:28609471
Finite element analysis of 6 large PMMA skull reconstructions: A multi-criteria evaluation approach.
Ridwan-Pramana, Angela; Marcián, Petr; Borák, Libor; Narra, Nathaniel; Forouzanfar, Tymour; Wolff, Jan
2017-01-01
In this study 6 pre-operative designs for PMMA based reconstructions of cranial defects were evaluated for their mechanical robustness using finite element modeling. Clinical experience and engineering principles were employed to create multiple plan options, which were subsequently computationally analyzed for mechanically relevant parameters under 50N loads: stress, strain and deformation in various components of the assembly. The factors assessed were: defect size, location and shape. The major variable in the cranioplasty assembly design was the arrangement of the fixation plates. An additional study variable introduced was the location of the 50N load within the implant area. It was found that in smaller defects, it was simpler to design a symmetric distribution of plates and under limited variability in load location it was possible to design an optimal for expected loads. However, for very large defects with complex shapes, the variability in the load locations introduces complications to the intuitive design of the optimal assembly. The study shows that it can be beneficial to incorporate multi design computational analyses to decide upon the most optimal plan for a clinical case.
Extending cluster lot quality assurance sampling designs for surveillance programs.
Hund, Lauren; Pagano, Marcello
2014-07-20
Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
Design and construction of miniature artificial ecosystem based on dynamic response optimization
NASA Astrophysics Data System (ADS)
Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu
The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and mi-croalgae to partially regenerate O2 , sanitary water and food, simultaneously dispose CO2 and wastes, therefore it have a fundamental life support function. In order to enhance the safety and reliability of MAES and eliminate the influences of internal variations and external dis-turbances, it was necessary to configure MAES as a closed-loop control system, and it could be considered as a prototype for future bioregenerative life support system. However, MAES is a complex system possessing large numbers of parameters, intricate nonlinearities, time-varying factors as well as uncertainties, hence it is difficult to perfectly design and construct a prototype through merely conducting experiments by trial and error method. Our research presented an effective way to resolve preceding problem by use of dynamic response optimiza-tion. Firstly the mathematical model of MAES with first-order nonlinear ordinary differential equations including parameters was developed based on relevant mechanisms and experimental data, secondly simulation model of MAES was derived on the platform of MatLab/Simulink to perform model validation and further digital simulations, thirdly reference trajectories of de-sired dynamic response of system outputs were specified according to prescribed requirements, and finally optimization for initial values, tuned parameter and independent parameters was carried out using the genetic algorithm, the advanced direct search method along with parallel computing methods through computer simulations. The result showed that all parameters and configurations of MAES were determined after a series of computer experiments, and its tran-sient response performances and steady characteristics closely matched the reference curves. Since the prototype is a physical system that represents the mathematical model with reason-able accuracy, so the process of designing and constructing a prototype of MAES is the reverse of mathematical modeling, and must have prerequisite assists from these results of computer simulation.
Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.
1999-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.
1998-01-01
A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
NASA Astrophysics Data System (ADS)
Harting, Benjamin; Slenzka, Klaus
2012-07-01
To investigate the influence of microgravity environments on photosynthetic organisms we designed a 2 dimensional clinostatexperiment for a suspended cell culture of Chlamydomonas reinhardtii. A novel approach of online measurments concerning relevant parameters important for the clasification of photosynthesis was obtained. To adress the photosynthesis rate we installed and validated an optical mesurement system to monitor the evolution and consumption of dissolved oxygen. Simultaneously a PAM sensor to analyse the flourescence quantum yield of the photochemical reaction was integarted. Thus it was possible to directly classify important parameters of the phototrophic metabolism during clinorotation. The experiment design including well suited light conditions and further biochemical analysis were directly performed for microalgal cell cultures. Changes in the photosynthetic efficiancy of phototrophic cyanobacteria has been observed during parabolic flight campaign but the cause is already not understood. Explenations could be the dependency of gravitaxis by intracellular ionconcentartion or the existance of mechanosensitive ionchannels for example associated in chloroplasts of Chlamydomonas reinhardtii. The purpuse of the microalgal clinostat are studies in a qasi microgravity environment for the process design of future bioregenerative life suport systems in spaceflight missions. First results has indicated the need for special nourishment of the cell culture during microgravity experiments. Further data will be presented during the assembly.
Design principles for nuclease-deficient CRISPR-based transcriptional regulators
Jensen, Michael K
2018-01-01
Abstract The engineering of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-CRISPR-associated proteins continues to expand the toolkit available for genome editing, reprogramming gene regulation, genome visualisation and epigenetic studies of living organisms. In this review, the emerging design principles on the use of nuclease-deficient CRISPR-based reprogramming of gene expression will be presented. The review will focus on the designs implemented in yeast both at the level of CRISPR proteins and guide RNA (gRNA), but will lend due credits to the seminal studies performed in other species where relevant. In addition to design principles, this review also highlights applications benefitting from the use of CRISPR-mediated transcriptional regulation and discusses the future directions to further expand the toolkit for nuclease-deficient reprogramming of genomes. As such, this review should be of general interest for experimentalists to get familiarised with the parameters underlying the power of reprogramming genomic functions by use of nuclease-deficient CRISPR technologies. PMID:29726937
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are di.cult to obtain a priori. Uncertainty in the control design models typically require a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identi.cation is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identi.cation procedure stresses the joint nature of the system identi.cation and control design problem by seeking to obtain a model that minimizes the di.erence between the predicted and actual closed-loop performance.
Payne, J W
1986-01-01
This paper discusses the concept of smugglins, i.e., molecules that are formed by attaching to, or incorporating into, normal cell nutrients varied moieties as a means of transporting otherwise impermeant substances into cells. Examples of antimicrobial smugglins that use this principle in Nature are described. The rationally designed antibiotic smugglins investigated to date are critically reviewed. Criteria for the design of optimal peptide carriers for antimicrobial smugglins are considered. A computer-linked, continuous-flow system for rapid measurement of the kinetic parameters for substrate transport via peptide permeases is described which, together with current molecular, genetic and biochemical techniques, now provides the means to obtain the information on which rational design should be based; examples are given for Escherichia coli and Candida albicans. After an uncertain commercial start, it now seems likely that increasing understanding of the uptake processes and other relevant features will make drug targeting using peptide carriers an achievable goal. Certainly their widespread occurrence in Nature should provide added incentive for the design of synthetic smugglins.
NASA Astrophysics Data System (ADS)
Killoran, N.; Huelga, S. F.; Plenio, M. B.
2015-10-01
Recent evidence suggests that quantum effects may have functional importance in biological light-harvesting systems. Along with delocalized electronic excitations, it is now suspected that quantum coherent interactions with certain near-resonant vibrations may contribute to light-harvesting performance. However, the actual quantum advantage offered by such coherent vibrational interactions has not yet been established. We investigate a quantum design principle, whereby coherent exchange of single energy quanta between electronic and vibrational degrees of freedom can enhance a light-harvesting system's power above what is possible by thermal mechanisms alone. We present a prototype quantum heat engine which cleanly illustrates this quantum design principle and quantifies its quantum advantage using thermodynamic measures of performance. We also demonstrate the principle's relevance in parameter regimes connected to natural light-harvesting structures.
Maximizing fluorescence collection efficiency in multiphoton microscopy
Zinter, Joseph P.; Levene, Michael J.
2011-01-01
Understanding fluorescence propagation through a multiphoton microscope is of critical importance in designing high performance systems capable of deep tissue imaging. Optical models of a scattering tissue sample and the Olympus 20X 0.95NA microscope objective were used to simulate fluorescence propagation as a function of imaging depth for physiologically relevant scattering parameters. The spatio-angular distribution of fluorescence at the objective back aperture derived from these simulations was used to design a simple, maximally efficient post-objective fluorescence collection system. Monte Carlo simulations corroborated by data from experimental tissue phantoms demonstrate collection efficiency improvements of 50% – 90% over conventional, non-optimized fluorescence collection geometries at large imaging depths. Imaging performance was verified by imaging layer V neurons in mouse cortex to a depth of 850 μm. PMID:21934897
Nass, C; Lee, K M
2001-09-01
Would people exhibit similarity-attraction and consistency-attraction toward unambiguously computer-generated speech even when personality is clearly not relevant? In Experiment 1, participants (extrovert or introvert) heard a synthesized voice (extrovert or introvert) on a book-buying Web site. Participants accurately recognized personality cues in text to speech and showed similarity-attraction in their evaluation of the computer voice, the book reviews, and the reviewer. Experiment 2, in a Web auction context, added personality of the text to the previous design. The results replicated Experiment 1 and demonstrated consistency (voice and text personality)-attraction. To maximize liking and trust, designers should set parameters, for example, words per minute or frequency range, that create a personality that is consistent with the user and the content being presented.
Mapping networks of light-dark transition in LOV photoreceptors.
Kaur Grewal, Rajdeep; Mitra, Devrani; Roy, Soumen
2015-11-15
In optogenetics, designing modules of long or short signaling state lifetime is necessary for control over precise cellular events. A critical parameter for designing artificial or synthetic photoreceptors is the signaling state lifetime of photosensor modules. Design and engineering of biologically relevant artificial photoreceptors is based on signaling mechanisms characteristic of naturally occurring photoreceptors. Therefore identifying residues important for light-dark transition is a definite first step towards rational design of synthetic photoreceptors. A thorough grasp of detailed mechanisms of photo induced signaling process would be immensely helpful in understanding the behaviour of organisms. Herein, we introduce the technique of differential networks. We identify key biological interactions, using light-oxygen-voltage domains of all organisms whose dark and light state crystal structures are simultaneously available. Even though structural differences between dark and light states are subtle (other than the covalent bond formation between flavin chromophore and active site Cysteine), our results successfully capture functionally relevant residues and are in complete agreement with experimental findings from literature. Additionally, using sequence-structure alignments, we predict functional significance of interactions found to be important from network perspective yet awaiting experimental validation. Our approach would not only help in minimizing extensive photo-cycle kinetics procedure but is also helpful in providing first-hand information on the fundamentals of photo-adaptation and rational design of synthetic photoreceptors in optogenetics. devrani.dbs@presiuniv.ac.in or soumen@jcbose.ac.in Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Models of dyadic social interaction.
Griffin, Dale; Gonzalez, Richard
2003-01-01
We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analytic' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382
An IPv6 Multihomed Host for Outbound Traffic
NASA Astrophysics Data System (ADS)
Chen, Chin-Ling; Cao, Sheng-Lung
Though the technology of IPv6 network has become mature in recent years, it still takes long to dispose IPv6 in an all-round way in the internet. In this research, we have designed an IPv6 multihomed host architecture to connect both IPv6 network and 6to4 network. This paper describes a load balance mechanism that allows applications on multihomed devices to utilize the individual networks efficiently to transmit streams that could be part of a session. We experiment the relevant parameters in the IPv6 testbed environment to demonstrate its effectiveness.
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Generotti, Silvia; Cirlini, Martina; Malachova, Alexandra; Sulyok, Michael; Berthiller, Franz; Dall’Asta, Chiara; Suman, Michele
2015-01-01
In the scientific field, there is a progressive awareness about the potential implications of food processing on mycotoxins especially concerning thermal treatments. High temperatures may cause, in fact, transformation or degradation of these compounds. This work is aimed to study the fate of mycotoxins during bakery processing, focusing on deoxynivalenol (DON) and deoxynivalenol-3-glucoside (DON3Glc), along the chain of industrial rusk production. Starting from naturally contaminated bran, we studied how concentrations of DON and DON3Glc are influenced by modifying ingredients and operative conditions. The experiments were performed using statistical Design of Experiment (DoE) schemes to synergistically explore the relationship between mycotoxin reduction and the indicated processing transformation parameters. All samples collected during pilot plant experiments were analyzed with an LC-MS/MS multimycotoxin method. The obtained model shows a good fitting, giving back relevant information in terms of optimization of the industrial production process, in particular suggesting that time and temperature in baking and toasting steps are highly relevant for minimizing mycotoxin level in rusks. A reduction up to 30% for DON and DON3Glc content in the finished product was observed within an acceptable technological range. PMID:26213969
Generotti, Silvia; Cirlini, Martina; Malachova, Alexandra; Sulyok, Michael; Berthiller, Franz; Dall'Asta, Chiara; Suman, Michele
2015-07-24
In the scientific field, there is a progressive awareness about the potential implications of food processing on mycotoxins especially concerning thermal treatments. High temperatures may cause, in fact, transformation or degradation of these compounds. This work is aimed to study the fate of mycotoxins during bakery processing, focusing on deoxynivalenol (DON) and deoxynivalenol-3-glucoside (DON3Glc), along the chain of industrial rusk production. Starting from naturally contaminated bran, we studied how concentrations of DON and DON3Glc are influenced by modifying ingredients and operative conditions. The experiments were performed using statistical Design of Experiment (DoE) schemes to synergistically explore the relationship between mycotoxin reduction and the indicated processing transformation parameters. All samples collected during pilot plant experiments were analyzed with an LC-MS/MS multimycotoxin method. The obtained model shows a good fitting, giving back relevant information in terms of optimization of the industrial production process, in particular suggesting that time and temperature in baking and toasting steps are highly relevant for minimizing mycotoxin level in rusks. A reduction up to 30% for DON and DON3Glc content in the finished product was observed within an acceptable technological range.
Antal, Péter; Kiszel, Petra Sz.; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F.; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba
2012-01-01
Genetic studies indicate high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses Bayesian network representation to provide detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to estimate the posteriors whether a variable is directly relevant or its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43(1.2–1.8); p = 3×10−4). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong relevance based methods to include partial relevance, global characterization of relevance and multi-target relevance. PMID:22432035
A graphical approach to radio frequency quadrupole design
NASA Astrophysics Data System (ADS)
Turemen, G.; Unel, G.; Yasatekin, B.
2015-07-01
The design of a radio frequency quadrupole, an important section of all ion accelerators, and the calculation of its beam dynamics properties can be achieved using the existing computational tools. These programs, originally designed in 1980s, show effects of aging in their user interfaces and in their output. The authors believe there is room for improvement in both design techniques using a graphical approach and in the amount of analytical calculations before going into CPU burning finite element analysis techniques. Additionally an emphasis on the graphical method of controlling the evolution of the relevant parameters using the drag-to-change paradigm is bound to be beneficial to the designer. A computer code, named DEMIRCI, has been written in C++ to demonstrate these ideas. This tool has been used in the design of Turkish Atomic Energy Authority (TAEK)'s 1.5 MeV proton beamline at Saraykoy Nuclear Research and Training Center (SANAEM). DEMIRCI starts with a simple analytical model, calculates the RFQ behavior and produces 3D design files that can be fed to a milling machine. The paper discusses the experience gained during design process of SANAEM Project Prometheus (SPP) RFQ and underlines some of DEMIRCI's capabilities.
Henriques, David; Alonso-Del-Real, Javier; Querol, Amparo; Balsa-Canto, Eva
2018-01-01
Wineries face unprecedented challenges due to new market demands and climate change effects on wine quality. New yeast starters including non-conventional Saccharomyces species, such as S. kudriavzevii , may contribute to deal with some of these challenges. The design of new fermentations using non-conventional yeasts requires an improved understanding of the physiology and metabolism of these cells. Dynamic modeling brings the potential of exploring the most relevant mechanisms and designing optimal processes more systematically. In this work we explore mechanisms by means of a model selection, reduction and cross-validation pipeline which enables to dissect the most relevant fermentation features for the species under consideration, Saccharomyces cerevisiae T73 and Saccharomyces kudriavzevii CR85. The pipeline involved the comparison of a collection of models which incorporate several alternative mechanisms with emphasis on the inhibitory effects due to temperature and ethanol. We focused on defining a minimal model with the minimum number of parameters, to maximize the identifiability and the quality of cross-validation. The selected model was then used to highlight differences in behavior between species. The analysis of model parameters would indicate that the specific growth rate and the transport of hexoses at initial times are higher for S. cervisiae T73 while S. kudriavzevii CR85 diverts more flux for glycerol production and cellular maintenance. As a result, the fermentations with S. kudriavzevii CR85 are typically slower; produce less ethanol but higher glycerol. Finally, we also explored optimal initial inoculation and process temperature to find the best compromise between final product characteristics and fermentation duration. Results reveal that the production of glycerol is distinctive in S. kudriavzevii CR85, it was not possible to achieve the same production of glycerol with S. cervisiae T73 in any of the conditions tested. This result brings the idea that the optimal design of mixed cultures may have an enormous potential for the improvement of final wine quality.
Henriques, David; Alonso-del-Real, Javier; Querol, Amparo; Balsa-Canto, Eva
2018-01-01
Wineries face unprecedented challenges due to new market demands and climate change effects on wine quality. New yeast starters including non-conventional Saccharomyces species, such as S. kudriavzevii, may contribute to deal with some of these challenges. The design of new fermentations using non-conventional yeasts requires an improved understanding of the physiology and metabolism of these cells. Dynamic modeling brings the potential of exploring the most relevant mechanisms and designing optimal processes more systematically. In this work we explore mechanisms by means of a model selection, reduction and cross-validation pipeline which enables to dissect the most relevant fermentation features for the species under consideration, Saccharomyces cerevisiae T73 and Saccharomyces kudriavzevii CR85. The pipeline involved the comparison of a collection of models which incorporate several alternative mechanisms with emphasis on the inhibitory effects due to temperature and ethanol. We focused on defining a minimal model with the minimum number of parameters, to maximize the identifiability and the quality of cross-validation. The selected model was then used to highlight differences in behavior between species. The analysis of model parameters would indicate that the specific growth rate and the transport of hexoses at initial times are higher for S. cervisiae T73 while S. kudriavzevii CR85 diverts more flux for glycerol production and cellular maintenance. As a result, the fermentations with S. kudriavzevii CR85 are typically slower; produce less ethanol but higher glycerol. Finally, we also explored optimal initial inoculation and process temperature to find the best compromise between final product characteristics and fermentation duration. Results reveal that the production of glycerol is distinctive in S. kudriavzevii CR85, it was not possible to achieve the same production of glycerol with S. cervisiae T73 in any of the conditions tested. This result brings the idea that the optimal design of mixed cultures may have an enormous potential for the improvement of final wine quality. PMID:29456524
Integrated approach for stress based lifing of aero gas turbine blades
NASA Astrophysics Data System (ADS)
Abu, Abdullahi Obonyegba
In order to analyse the turbine blade life, the damage due to the combined thermal and mechanical loads should be adequately accounted for. This is more challenging when detailed component geometry is limited. Therefore, a compromise between the level of geometric detail and the complexity of the lifing method to be implemented would be necessary. This research focuses on how the life assessment of aero engine turbine blades can be done, considering the balance between available design inputs and adequate level of fidelity. Accordingly, the thesis contributes to developing a generic turbine blade lifing method that is based on the engine thermodynamic cycle; as well as integrating critical design/technological factors and operational parameters that influence the aero engine blade life. To this end, thermo-mechanical fatigue was identified as the critical damage phenomenon driving the life of the turbine blade.. The developed approach integrates software tools and numerical models created using the minimum design information typically available at the early design stages. Using finite element analysis of an idealised blade geometry, the approach captures relevant impacts of thermal gradients and thermal stresses that contribute to the thermo-mechanical fatigue damage on the gas turbine blade. The blade life is evaluated using the Neu/Sehitoglu thermo-mechanical fatigue model that considers damage accumulation due to fatigue, oxidation, and creep. The leading edge is examined as a critical part of the blade to estimate the damage severity for different design factors and operational parameters. The outputs of the research can be used to better understand how the environment and the operating conditions of the aircraft affect the blade life consumption and therefore what is the impact on the maintenance cost and the availability of the propulsion system. This research also finds that the environmental (oxidation) effect drives the blade life and the blade coolant side was the critical location. Furthermore, a parametric and sensitivity study of the Neu/Sehitoglu model parameters suggests that in addition to four previously reported parameters, the sensitivity of the phasing to oxidation damage would be critical to overall blade life..
Assessing cost-effectiveness of specific LID practice designs in response to large storm events
NASA Astrophysics Data System (ADS)
Chui, Ting Fong May; Liu, Xin; Zhan, Wenting
2016-02-01
Low impact development (LID) practices have become more important in urban stormwater management worldwide. However, most research on design optimization focuses on relatively large scale, and there is very limited information or guideline regarding individual LID practice designs (i.e., optimal depth, width and length). The objective of this study is to identify the optimal design by assessing the hydrological performance and the cost-effectiveness of different designs of LID practices at a household or business scale, and to analyze the sensitivity of the hydrological performance and the cost of the optimal design to different model and design parameters. First, EPA SWMM, automatically controlled by MATLAB, is used to obtain the peak runoff of different designs of three specific LID practices (i.e., green roof, bioretention and porous pavement) under different design storms (i.e., 2 yr and 50 yr design storms of Hong Kong, China and Seattle, U.S.). Then, life cycle cost is estimated for the different designs, and the optimal design, defined as the design with the lowest cost and at least 20% peak runoff reduction, is identified. Finally, sensitivity of the optimal design to the different design parameters is examined. The optimal design of green roof tends to be larger in area but thinner, while the optimal designs of bioretention and porous pavement tend to be smaller in area. To handle larger storms, however, it is more effective to increase the green roof depth, and to increase the area of the bioretention and porous pavement. Porous pavement is the most cost-effective for peak flow reduction, followed by bioretention and then green roof. The cost-effectiveness, measured as the peak runoff reduction/thousand Dollars of LID practices in Hong Kong (e.g., 0.02 L/103 US s, 0.15 L/103 US s and 0.93 L/103 US s for green roof, bioretention and porous pavement for 2 yr storm) is lower than that in Seattle (e.g., 0.03 L/103 US s, 0.29 L/103 US s and 1.58 L/103 US s for green roof, bioretention and porous pavement for 2 yr storm). The optimal designs are influenced by the model and design parameters (i.e., initial saturation, hydraulic conductivity and berm height). However, it overall does not affect the main trends and key insights derived, and the results are therefore generic and relevant to the household/business-scale optimal design of LID practices worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anand, G.; Erickson, D.C.
1999-07-01
The distillation column is a key component of ammonia-water absorption units including advanced generator-absorber heat exchange (GAX) cycle heat pumps. The design of the distillation column is critical to unit performance, size, and cost. The distillation column can be designed with random packing, structured packing, or various tray configurations. A sieve-tray distillation column is the least complicated tray design and is less costly than high-efficiency packing. Substantial literature is available on sieve tray design and performance. However, most of the correlations and design recommendations were developed for large industrial hydrocarbon systems and are generally not directly applicable to the compactmore » ammonia-water column discussed here. The correlations were reviewed and modified as appropriate for this application, and a sieve-tray design model was developed. This paper presents the sieve-tray design methodology for highly compact ammonia-water columns. A conceptual design of the distillation column for an 8 ton vapor exchange (VX) GAX heat pump is presented, illustrating relevant design parameters and trends. The design process revealed several issues that have to be investigated experimentally to design the final optimized rectifier. Validation of flooding and weeping limits and tray/point efficiencies are of primary importance.« less
Kendall, W.L.; Nichols, J.D.
2002-01-01
Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.
NASA Astrophysics Data System (ADS)
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters are tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters of the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
Fast prediction and evaluation of eccentric inspirals using reduced-order models
NASA Astrophysics Data System (ADS)
Barta, Dániel; Vasúth, Mátyás
2018-06-01
A large number of theoretically predicted waveforms are required by matched-filtering searches for the gravitational-wave signals produced by compact binary coalescence. In order to substantially alleviate the computational burden in gravitational-wave searches and parameter estimation without degrading the signal detectability, we propose a novel reduced-order-model (ROM) approach with applications to adiabatic 3PN-accurate inspiral waveforms of nonspinning sources that evolve on either highly or slightly eccentric orbits. We provide a singular-value decomposition-based reduced-basis method in the frequency domain to generate reduced-order approximations of any gravitational waves with acceptable accuracy and precision within the parameter range of the model. We construct efficient reduced bases comprised of a relatively small number of the most relevant waveforms over three-dimensional parameter-space covered by the template bank (total mass 2.15 M⊙≤M ≤215 M⊙ , mass ratio 0.01 ≤q ≤1 , and initial orbital eccentricity 0 ≤e0≤0.95 ). The ROM is designed to predict signals in the frequency band from 10 Hz to 2 kHz for aLIGO and aVirgo design sensitivity. Beside moderating the data reduction, finer sampling of fiducial templates improves the accuracy of surrogates. Considerable increase in the speedup from several hundreds to thousands can be achieved by evaluating surrogates for low-mass systems especially when combined with high-eccentricity.
Kim, Tae Young; Badsha, Md. Alamgir; Yoon, Junho; Lee, Seon Young; Jun, Young Chul; Hwangbo, Chang Kwon
2016-01-01
We propose a general, easy-to-implement scheme for broadband coherent perfect absorption (CPA) using epsilon-near-zero (ENZ) multilayer films. Specifically, we employ indium tin oxide (ITO) as a tunable ENZ material, and theoretically investigate CPA in the near-infrared region. We first derive general CPA conditions using the scattering matrix and the admittance matching methods. Then, by combining these two methods, we extract analytic expressions for all relevant parameters for CPA. Based on this theoretical framework, we proceed to study ENZ CPA in a single layer ITO film and apply it to all-optical switching. Finally, using an ITO multilayer of different ENZ wavelengths, we implement broadband ENZ CPA structures and investigate multi-wavelength all-optical switching in the technologically important telecommunication window. In our design, the admittance matching diagram was employed to graphically extract not only the structural parameters (the film thicknesses and incident angles), but also the input beam parameters (the irradiance ratio and phase difference between two input beams). We find that the multi-wavelength all-optical switching in our broadband ENZ CPA system can be fully controlled by the phase difference between two input beams. The simple but general design principles and analyses in this work can be widely used in various thin-film devices. PMID:26965195
Pallagi, Edina; Ambrus, Rita; Szabó-Révész, Piroska; Csóka, Ildikó
2015-08-01
Regulatory science based pharmaceutical development and product manufacturing is highly recommended by the authorities nowadays. The aim of this study was to adapt regulatory science even in the nano-pharmaceutical early development. Authors applied the quality by design (QbD) concept in the early development phase of nano-systems, where the illustration material was meloxicam. The meloxicam nanoparticles produced by co-grinding method for nasal administration were studied according to the QbD policy and the QbD based risk assessment (RA) was performed. The steps were implemented according to the relevant regulatory guidelines (quality target product profile (QTPP) determination, selection of critical quality attributes (CQAs) and critical process parameters (CPPs)) and a special software (Lean QbD Software(®)) was used for the RA, which represents a novelty in this field. The RA was able to predict and identify theoretically the factors (e.g. sample composition, production method parameters, etc.) which have the highest impact on the desired meloxicam-product quality. The results of the practical research justified the theoretical prediction. This method can improve pharmaceutical nano-developments by achieving shorter development time, lower cost, saving human resource efforts and more effective target-orientation. It makes possible focusing the resources on the selected parameters and area during the practical product development. Copyright © 2015 Elsevier B.V. All rights reserved.
Growth-rate dependent global effects on gene expression in bacteria
Klumpp, Stefan; Zhang, Zhongge; Hwa, Terence
2010-01-01
Summary Bacterial gene expression depends not only on specific regulations but also directly on bacterial growth, because important global parameters such as the abundance of RNA polymerases and ribosomes are all growth-rate dependent. Understanding these global effects is necessary for a quantitative understanding of gene regulation and for the robust design of synthetic genetic circuits. The observed growth-rate dependence of constitutive gene expression can be explained by a simple model using the measured growth-rate dependence of the relevant cellular parameters. More complex growth dependences for genetic circuits involving activators, repressors and feedback control were analyzed, and salient features were verified experimentally using synthetic circuits. The results suggest a novel feedback mechanism mediated by general growth-dependent effects and not requiring explicit gene regulation, if the expressed protein affects cell growth. This mechanism can lead to growth bistability and promote the acquisition of important physiological functions such as antibiotic resistance and tolerance (persistence). PMID:20064380
Hellesen, C; Skiba, M; Dzysiuk, N; Weiszflog, M; Hjalmarsson, A; Ericsson, G; Conroy, S; Andersson-Sundén, E; Eriksson, J; Binda, F
2014-11-01
The fuel ion ratio nt/nd is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., nt/nd = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands that a DT spectrometer has to fulfill to be able to determine nt/nd with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.
NASA Astrophysics Data System (ADS)
Wan, Yu; Jin, Kai; Ahmad, Talha J.; Black, Michael J.; Xu, Zhiping
2017-03-01
Fluidic environment is encountered for mechanical components in many circumstances, which not only damps the oscillation but also modulates their dynamical behaviors through hydrodynamic interactions. In this study, we examine energy transfer and motion synchronization between two mechanical micro-oscillators by performing thermal lattice-Boltzmann simulations. The coefficient of inter-oscillator energy transfer is measured to quantify the strength of microhydrodynamic coupling, which depends on their distance and fluid properties such as density and viscosity. Synchronized motion of the oscillators is observed in the simulations for typical parameter sets in relevant applications, with the formation and loss of stable anti-phase synchronization controlled by the oscillating frequency, amplitude, and hydrodynamic coupling strength. The critical ranges of key parameters to assure efficient energy transfer or highly synchronized motion are predicted. These findings could be used to advise mechanical design of passive and active devices that operate in fluid.
NASA Astrophysics Data System (ADS)
Hellesen, C.; Skiba, M.; Dzysiuk, N.; Weiszflog, M.; Hjalmarsson, A.; Ericsson, G.; Conroy, S.; Andersson-Sundén, E.; Eriksson, J.; Binda, F.
2014-11-01
The fuel ion ratio nt/nd is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., nt/nd = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands that a DT spectrometer has to fulfill to be able to determine nt/nd with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.
A compendium of chameleon constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk
2016-11-01
The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces it is not unobservable and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical andmore » laboratory searches making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.« less
Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems
NASA Technical Reports Server (NTRS)
Scheid, R. E., Jr.; Rodriguez, G.
1985-01-01
Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data since no sensor system is perfect. With these considerations in mind the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Those areas of current and future analysis are highlighted which indicate the interplay between error analysis and possible truncations of the state and parameter spaces.
NASA Technical Reports Server (NTRS)
Boeer, K. W.
1975-01-01
Solar cells may be used to convert sunlight directly into electrical energy and into lowgrade heat to be used for large-scale terrestrial solar-energy conversion. Both forms of energy can be utilized if such cells are deployed in close proximity to the consumer (rooftop). Cadmium-sulfide/copper-sulfide (CdS/Cu2S) solar cells are an example of cells which may be produced inexpensively enough to become economically attractive. Cell parameters relevant for combined solar conversion are presented. Critical issues, such as production yield, life expectancy, and stability of performance, are discussed. Systems-design parameters related to operating temperatures are analyzed. First results obtained on Solar One, the experimental house of the University of Delaware, are given. Economic aspects are discussed. Different modes of operation are discussed in respect to the power utility and consumer incentives.
Observable gravitational waves in pre-big bang cosmology: an update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperini, M., E-mail: gasperini@ba.infn.it
In the light of the recent results concerning CMB observations and GW detection we address the question of whether it is possible, in a self-consistent inflationary framework, to simultaneously generate a spectrum of scalar metric perturbations in agreement with Planck data and a stochastic background of primordial gravitational radiation compatible with the design sensitivity of aLIGO/Virgo and/or eLISA. We suggest that this is possible in a string cosmology context, for a wide region of the parameter space of the so-called pre-big bang models. We also discuss the associated values of the tensor-to-scalar ratio relevant to the CMB polarization experiments. Wemore » conclude that future, cross-correlated results from CMB observations and GW detectors will be able to confirm or disprove pre-big bang models and—in any case—will impose new significant constraints on the basic string theory/cosmology parameters.« less
Viger, Roland J.
2008-01-01
This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user?s ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.
Sensor data validation and reconstruction. Phase 1: System architecture study
NASA Technical Reports Server (NTRS)
1991-01-01
The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer aided engineering (CAE) package; and conceptually designed an expert system based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell, and the C programming language.
Building a Smart Portal for Astronomy
NASA Astrophysics Data System (ADS)
Derriere, S.; Boch, T.
2011-07-01
The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring a lot of effort and learning from the user in order to perform searches. This is often a design choice, where the user must explicitly set many constraints, while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally, a simple text field, like for Google search), and the search logic is made much more complex to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.
Doussoulin Sanhueza, Arlette
2006-01-01
This research was designed to describe the psychomotor development, environmental stimulation, and the socioeconomic condition of preschool children attending three educational institutions in the city of Temuco, Chile. The sample included 81 boys and girls whose age ranged from three to four years. The Test de Desarrollo Psicomotor (The Psychomotor Development Test), or TEPSI, was used to assess psychomotor development; the Home Observation Measurement of the Environment (HOME) Scale was used to evaluate environmental stimulation; and the Socioeconomic Standardization Model was used to categorize children's socioeconomic status. The highest statistical correlation was observed between psychomotor development and environmental stimulation when comparing all three parameters across the sample. Environmental stimulation may be the most relevant parameter in the study of psychomotor development of children. Socioeconomic status alone does not seem to be strongly related to children's psychomotor development in the Temuco region of Chile.
Santos, João Rodrigo; Viegas, Olga; Páscoa, Ricardo N M J; Ferreira, Isabel M P L V O; Rangel, António O S S; Lopes, João Almeida
2016-10-01
In this work, a real-time and in-situ analytical tool based on near infrared spectroscopy is proposed to predict two of the most relevant coffee parameters during the roasting process, sucrose and colour. The methodology was developed taking in consideration different coffee varieties (Arabica and Robusta), coffee origins (Brazil, East-Timor, India and Uganda) and roasting process procedures (slow and fast). All near infrared spectroscopy-based calibrations were developed resorting to partial least squares regression. The results proved the suitability of this methodology as demonstrated by range-error-ratio and coefficient of determination higher than 10 and 0.85 respectively, for all modelled parameters. The relationship between sucrose and colour development during the roasting process is further discussed, in light of designing in real-time coffee products with similar visual appearance and distinct organoleptic profile. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David
2018-04-01
Advances in NDE method development are greatly improved through model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize the model simulations and experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduce extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.
[Mechanisms and applications of transcutaneous electrical nerve stimulation in analgesia].
Tang, Zheng-Yu; Wang, Hui-Quan; Xia, Xiao-Lei; Tang, Yi; Peng, Wei-Wei; Hu, Li
2017-06-25
Transcutaneous electrical nerve stimulation (TENS), as a non-pharmacological and non-invasive analgesic therapy with low-cost, has been widely used to relieve pain in various clinical applications, by delivering current pulses to the skin area to activate the peripheral nerve fibers. Nevertheless, analgesia induced by TENS varied in the clinical practice, which could be caused by the fact that TENS with different stimulus parameters has different biological mechanisms in relieving pain. Therefore, to advance our understanding of TENS in various basic and clinical studies, we discussed (1) neurophysiological and biochemical mechanisms of TENS-induced analgesia; (2) relevant factors that may influence analgesic effects of TENS from the perspectives of stimulus parameters, including stimulated position, pulse parameters (current intensity, frequency, and pulse width), stimulus duration and used times in each day; and (3) applications of TENS in relieving clinical pain, including post-operative pain, chronic low back pain and labor pain. Finally, we propose that TENS may involve multiple and complex psychological neurophysiological mechanisms, and suggest that different analgesic effects of TENS with different stimulus parameters should be taken into consideration in clinical applications. In addition, to optimize analgesic effect, we recommend that individual-based TENS stimulation parameters should be designed by considering individual differences among patients, e.g., adaptively adjusting the stimulation parameters based on the dynamic ratings of patients' pain.
Ahmed, Safia K.; Ward, John P.; Liu, Yang
2017-01-01
Magnesium (Mg) is becoming increasingly popular for orthopaedic implant materials. Its mechanical properties are closer to bone than other implant materials, allowing for more natural healing under stresses experienced during recovery. Being biodegradable, it also eliminates the requirement of further surgery to remove the hardware. However, Mg rapidly corrodes in clinically relevant aqueous environments, compromising its use. This problem can be addressed by alloying the Mg, but challenges remain at optimising the properties of the material for clinical use. In this paper, we present a mathematical model to provide a systematic means of quantitatively predicting Mg corrosion in aqueous environments, providing a means of informing standardisation of in vitro investigation of Mg alloy corrosion to determine implant design parameters. The model describes corrosion through reactions with water, to produce magnesium hydroxide Mg(OH)2, and subsequently with carbon dioxide to form magnesium carbonate MgCO3. The corrosion products produce distinct protective layers around the magnesium block that are modelled as porous media. The resulting model of advection–diffusion equations with multiple moving boundaries was solved numerically using asymptotic expansions to deal with singular cases. The model has few free parameters, and it is shown that these can be tuned to predict a full range of corrosion rates, reflecting differences between pure magnesium or magnesium alloys. Data from practicable in vitro experiments can be used to calibrate the model’s free parameters, from which model simulations using in vivo relevant geometries provide a cheap first step in optimising Mg-based implant materials. PMID:29267244
The Role of Transfer in Designing Games and Simulations for Health: Systematic Review
Terlouw, Gijs; Wartena, Bard O; van 't Veer, Job TB; Prins, Jelle T; Pierie, Jean Pierre EN
2017-01-01
Background The usefulness and importance of serious games and simulations in learning and behavior change for health and health-related issues are widely recognized. Studies have addressed games and simulations as interventions, mostly in comparison with their analog counterparts. Numerous complex design choices have to be made with serious games and simulations for health, including choices that directly contribute to the effects of the intervention. One of these decisions is the way an intervention is expected to lead to desirable transfer effects. Most designs adopt a first-class transfer rationale, whereas the second class of transfer types seems a rarity in serious games and simulations for health. Objective This study sought to review the literature specifically on the second class of transfer types in the design of serious games and simulations. Focusing on game-like interventions for health and health care, this study aimed to (1) determine whether the second class of transfer is recognized as a road for transfer in game-like interventions, (2) review the application of the second class of transfer type in designing game-like interventions, and (3) assess studies that include second-class transfer types reporting transfer outcomes. Methods A total of 6 Web-based databases were systematically searched by titles, abstracts, and keywords using the search strategy (video games OR game OR games OR gaming OR computer simulation*) AND (software design OR design) AND (fidelity OR fidelities OR transfer* OR behaviour OR behavior). The databases searched were identified as relevant to health, education, and social science. Results A total of 15 relevant studies were included, covering a range of game-like interventions, all more or less mentioning design parameters aimed at transfer. We found 9 studies where first-class transfer was part of the design of the intervention. In total, 8 studies dealt with transfer concepts and fidelity types in game-like intervention design in general; 3 studies dealt with the concept of second-class transfer types and reported effects, and 2 of those recognized transfer as a design parameter. Conclusions In studies on game-like interventions for health and health care, transfer is regarded as a desirable effect but not as a basic principle for design. None of the studies determined the second class of transfer or instances thereof, although in 3 cases a nonliteral transfer type was present. We also found that studies on game-like interventions for health do not elucidate design choices made and rarely provide design principles for future work. Games and simulations for health abundantly build upon the principles of first-class transfer, but the adoption of second-class transfer types proves scarce. It is likely to be worthwhile to explore the possibilities of second-class transfer types, as they may considerably influence educational objectives in terms of future serious game design for health. PMID:29175812
Effect of Macrogeometry on the Surface Topography of Dental Implants.
Naves, Marina Melo; Menezes, Helder Henrique Machado; Magalhães, Denildo; Ferreira, Jessica Afonso; Ribeiro, Sara Ferreira; de Mello, José Daniel Biasoli; Costa, Henara Lillian
2015-01-01
Because the microtopography of titanium implants influences the biomaterial-tissue interaction, surface microtexturing treatments are frequently used for dental implants. However, surface treatment alone may not determine the final microtopography of a dental implant, which can also be influenced by the implant macrogeometry. This work analyzed the effects on surface roughness parameters of the same treatment applied by the same manufacturer to implants with differing macro-designs. Three groups of titanium implants with different macro-designs were investigated using laser interferometry and scanning electron microscopy. Relevant surface roughness parameters were calculated for different regions of each implant. Two flat disks (treated and untreated) were also investigated for comparison. The tops of the threads and the nonthreaded regions of all implants had very similar roughness parameters, independent of the geometry of the implant, which were also very similar to those of flat disks treated with the same process. In contrast, the flanks and valleys of the threads presented larger irregularities (Sa) with higher slopes (Sdq) and larger developed surface areas (Sdr) on all implants, particularly for implants with threads with smaller heights. The flanks and valleys displayed stronger textures (Str), particularly on the implants with threads with larger internal angles. Parameters associated with the height of the irregularities (Sa), the slope of the asperities (Sdq), the presence of a surface texture (Str), and the developed surface area of the irregularities (Sdr) were significantly affected by the macrogeometry of the implants. Flat disks subjected to the same surface treatment as dental implants reproduced only the surface topography of the flat regions of the implants.
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
Principles of parametric estimation in modeling language competition.
Zhang, Menghan; Gong, Tao
2013-06-11
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
NASA Astrophysics Data System (ADS)
Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Paramagul, Chintana; Helvie, Mark; Nees, Alexis V.
2012-03-01
We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from a reference library based on computer-extracted features that describe texture, width-to-height ratio, and posterior shadowing of a mass. Retrieval is performed with k nearest neighbor (k-NN) method using Euclidean distance similarity measure and Rocchio relevance feedback algorithm (RRF). In this study, we evaluated the similarity between the query and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, similarities of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9=most similar). A leave-one-out method was used in training. For each query mass, 5 most similar masses were retrieved from the reference library using radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used one of the radiologists' ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100 images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86+/-0.03 and 0.87+/-0.03 on the training set and 0.91+/-0.03 and 0.90+/-0.03 on the test set, respectively. This study demonstrated that RRF improved the similarity of the retrieved masses.
NASA Astrophysics Data System (ADS)
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua
2017-03-01
As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. Phantoms contained targets including high-intensity point source targets and dye-filled tubes. This suite of phantoms was used to measure the dependence of performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) on design parameters (e.g., center frequency, bandwidth, element geometry). Phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; Aitharaju, Venkat
Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-produced inconsistency of this morphology spatially, and thus (3) the ability to build reliable, robust, and efficient computational surrogate models to account for this complex nature. Traditional approaches to construct computational surrogate models have been to average over the fluctuations of the material properties at different scale lengths. This fails to account for the fine-scale features and fluctuations in morphology, material properties of the constituents, as well as fine-scale phenomena such as damage and cracks. In addition, it failsmore » to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required to design under the uncertainty of composites. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales and the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.« less
Modulate chopper technique used in pyroelectric uncooled focal plane array thermal imager
NASA Astrophysics Data System (ADS)
He, Yuqing; Jin, Weiqi; Liu, Guangrong; Gao, Zhiyun; Wang, Xia; Wang, Lingxue
2002-09-01
Pyroelectric uncooled focal plane array (FPA) thermal imager has the advantages of low cost, small size, high responsibility and can work under room temperature, so it has great progress in recent years. As a matched technique, the modulate chopper has become one of the key techniques in uncooled FPA thermal imaging system. Now the Archimedes spiral cord chopper technique is mostly used. When it works, the chopper pushing scans the detector's pixel array, thus makes the pixels being exposed continuously. This paper simulates the shape of this kind of chopper, analyses the exposure time of the detector's every pixel, and also analyses the whole detector pixels' exposure sequence. From the analysis we can get the results: the parameter of Archimedes spiral cord, the detector's thermal time constant, the detector's geometrical dimension, the relative position of the detector to the chopper's spiral cord are the system's important parameters, they will affect the chopper's exposure efficiency and uniformity. We should design the chopper's relevant parameter according to the practical request to achieve the chopper's appropriate structure.
NASA Astrophysics Data System (ADS)
González-Correa, David; Osorio-Gómez, Gilberto; Mejía-Gutiérrez, Ricardo
2016-09-01
Concentrating Photo Voltaic (CPV) systems maximize energy harvested from the sun with multi-junction solar cells of less area, reducing related implementation costs and reaching energy production thresholds up to 38,9 %. Nowadays, CPV systems are generally implemented in solar energy farms in a permanent location, however, these systems could be used in other dynamic contexts, such as vehicles or portable devices. In this way, mechanical and geometrical parameters related to manipulation, transportation and installation should be carefully considered at the design stage. Besides, each condition of use presents different variables affecting these parameters. In all, there is not an established architecture for these systems, opening up the possibility of radically changing their use, geometry and components. Therefore, a concept of a methodical process for designing of CPV systems is proposed in order to predict their behavior in terms of implementation and energy production. This might allow the development of robust concepts that can be adapted to different context of use as required, providing an itinerant character and thus extending the field of implementation of these systems beyond a static use. The relevant variables for the use of CPV systems are determined through experimentation considering the implementation of Fresnel lenses as light concentrators. This allows generating a structured design guide composed of different methods of measurement, selection and development. The methodical process is based on a perspective of functional modules considering needs, technical aspects and particular usage conditions of each design and it would provide appropriate guidelines in each circumstance.
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off-filters), respectively. Data were statistically compared by one-way ANOVA and Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Dependent on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the pure statistical approach, the categorizing evaluation resulted in much more similarities between the two methods. This study suggests to reconsider current approaches for the topographical evaluation of implant surfaces and to further seek after proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
[Design of computerised database for clinical and basic management of uveal melanoma].
Bande Rodríguez, M F; Santiago Varela, M; Blanco Teijeiro, M J; Mera Yañez, P; Pardo Perez, M; Capeans Tome, C; Piñeiro Ces, A
2012-09-01
The uveal melanoma is the most common primary intraocular tumour in adults. The objective of this work is to show how a computerised database has been formed with specific applications, for clinical and research use, to an extensive group of patients diagnosed with uveal melanoma. For the design of the database a selection of categories, attributes and values was created based on the classifications and parameters given by various authors of articles which have had great relevance in the field of uveal melanoma in recent years. The database has over 250 patient entries with specific information on their clinical history, diagnosis, treatment and progress. It enables us to search any parameter of the entry and make quick and simple statistical studies of them. The database models have been transformed into a basic tool for clinical practice, as they are an efficient way of storing, compiling and selective searching of information. When creating a database it is very important to define a common strategy and the use of a standard language. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.
Deriving the Generalized Power and Efficiency Equations for Jet Propulsion Systems
NASA Astrophysics Data System (ADS)
Lee, Hsing-Juin; Chang, Chih-Luong
The kinetic power and efficiency equations for general jet propulsion systems are classically given in a much cursory, incomplete, and ununified format. This situation prohibits the propulsion designer from seeing the panorama of interrelated propulsion parameters and effects. And in some cases, it may lead to an energy-inefficient propulsion system design, or induce significant offset in propulsion performance as demonstrated in this study. Thus, herein we attempt to clarify some related concepts and to rigorously derive the associated generalized equations with a complete spectrum of physical parameters to be manipulated in quest of better performance. By a highly efficient interweaved transport scheme, we have derived the following equations for general jet propulsion systems: i.e., generalized total kinetic power, generalized kinetic power delivered to the jet propulsion system, generalized thrust power, generalized available propulsion power, and relevant generalized propulsive, thermal, and overall efficiency equations. Further, the variants of these equations under special conditions are also considered. For taking advantage of the above propulsion theories, we also illustrate some novel propulsion strategies in the final discussion, such as the dive-before-climb launch of rocket from highland mountain on eastbound rail, with perhaps minisatellites as the payloads.
Wang, Dan Dan; Huang, Guo Fu; He, Ming Guang; Wu, Ling Ling; Lin, Shan
2011-03-01
To summarize the design and methodology of a multi-center study. With the existed ethnic differences of glaucoma, this survey will explore the differences with regard to anterior and posterior ocular segment parameters between Caucasians and Chinese. In this study, four cohorts including American Caucasians and American Chinese from San Francisco, southern mainland Chinese from Guangzhou, and northern mainland Chinese from Beijing were prospectively enrolled for a series of eye examinations and tests from May 2008 to December 2010. A total of 120 subjects including 15 of each gender in each age decade from 40s to 70s were recruited for each group. Data of the following tests were collected: a questionnaire eliciting systemic and ocular disease history, blood pressure, presenting and best corrected visual acuity, auto-refraction, Goldmann applanation tonometry, gonioscopy, A-scan, anterior segment optical coherence tomography (ASOCT), ultrasound biomicroscopy (UBM), visual field (VF), Heidelberg retinal tomography (HRT), OCT for optic nerve, and digital fundus photography. this study will provide insights to the etiologies of glaucoma especially PACG through inter-ethnic comparisons of relevant ocular anatomic and functional parameters.
Millimeter-scale MEMS enabled autonomous systems: system feasibility and mobility
NASA Astrophysics Data System (ADS)
Pulskamp, Jeffrey S.
2012-06-01
Millimeter-scale robotic systems based on highly integrated microelectronics and micro-electromechanical systems (MEMS) could offer unique benefits and attributes for small-scale autonomous systems. This extreme scale for robotics will naturally constrain the realizable system capabilities significantly. This paper assesses the feasibility of developing such systems by defining the fundamental design trade spaces between component design variables and system level performance parameters. This permits the development of mobility enabling component technologies within a system relevant context. Feasible ranges of system mass, required aerodynamic power, available battery power, load supported power, flight endurance, and required leg load bearing capability are presented for millimeter-scale platforms. The analysis illustrates the feasibility of developing both flight capable and ground mobile millimeter-scale autonomous systems while highlighting the significant challenges that must be overcome to realize their potential.
Potential of mechanical metamaterials to induce their own global rotational motion
NASA Astrophysics Data System (ADS)
Dudek, K. K.; Wojciechowski, K. W.; Dudek, M. R.; Gatt, R.; Mizzi, L.; Grima, J. N.
2018-05-01
The potential of several classes of mechanical metamaterials to induce their own overall rotational motion through the individual rotation of their subunits is examined. Using a theoretical approach, we confirm that for various rotating rigid unit systems, if by design the sum of angular momentum of subunits rotating in different directions is made to be unequal, then the system will experience an overall rotation, the extent of which may be controlled through careful choice of the geometric parameters defining these systems. This phenomenon of self-induced rotation is also confirmed experimentally. Furthermore, we discuss how these systems can be designed in a special way so as to permit extended rotations which allows them to overcome geometric lockage and the relevance of this concept in applications ranging from satellites to spacecraft and telescopes employed in space.
Determination of power and moment on shaft of special asynchronous electric drives
NASA Astrophysics Data System (ADS)
Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.
2018-03-01
In the article, questions and tasks of determination of power and the moment on a shaft of special asynchronous electric drives are considered. Use of special asynchronous electric drives in mechanical engineering and other industries is relevant. The considered types of electric drives possess the improved mass-dimensional indicators in comparison with singleengine systems. Also these types of electric drives have constructive advantages; the improved characteristics allow one to realize the technological process. But creation and design of new electric drives demands adjustment of existing or development of new methods and approaches of calculation of parameters. Determination of power and the moment on a shaft of special asynchronous electric drives is the main objective during design of electric drives. This task has been solved based on a method of electromechanical transformation of energy.
Non-destructive sampling of a comet
NASA Astrophysics Data System (ADS)
Jessberger, H. L.; Kotthaus, M.
1991-04-01
Various conditions which must be met for the development of a nondestructive sampling and acquisition system are outlined and the development of a new robotic sampling system suited for use on a cometary surface is briefly discussed. The Rosetta mission of ESA will take samples of a comet nucleus and return both core and volatile samples to earth. Various considerations which must be taken into account for such a project are examined including the identification of design parameters for sample quality; the identification of the most probable site conditions; the development of a sample acquisition system with respect to these conditions; the production of model materials and model conditions; and the investigation of the relevant material properties. An adequate sampling system should also be designed and built, including various tools, and the system should be tested under simulated cometary conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killoran, N.; Huelga, S. F.; Plenio, M. B.
Recent evidence suggests that quantum effects may have functional importance in biological light-harvesting systems. Along with delocalized electronic excitations, it is now suspected that quantum coherent interactions with certain near-resonant vibrations may contribute to light-harvesting performance. However, the actual quantum advantage offered by such coherent vibrational interactions has not yet been established. We investigate a quantum design principle, whereby coherent exchange of single energy quanta between electronic and vibrational degrees of freedom can enhance a light-harvesting system’s power above what is possible by thermal mechanisms alone. We present a prototype quantum heat engine which cleanly illustrates this quantum design principlemore » and quantifies its quantum advantage using thermodynamic measures of performance. We also demonstrate the principle’s relevance in parameter regimes connected to natural light-harvesting structures.« less
Developmental immunotoxicity of chemicals in rodents and its possible regulatory impact.
Hessel, Ellen V S; Tonk, Elisa C M; Bos, Peter M J; van Loveren, Henk; Piersma, Aldert H
2015-01-01
Around 25% of the children in developed countries are affected with immune-based diseases. Juvenile onset diseases such as allergic, inflammatory and autoimmune diseases have shown increasing prevalences in the last decades. The role of chemical exposures in these phenomena is unclear. It is thought that the developmental immune system is more susceptible to toxicants than the mature situation. Developmental immunotoxicity (DIT) testing is nowadays not or minimally included in regulatory toxicology requirements. We reviewed whether developmental immune parameters in rodents would provide relatively sensitive endpoints of toxicity, whose inclusion in regulatory toxicity testing might improve hazard identification and risk assessment of chemicals. For each of the nine reviewed toxicants, the developing immune system was found to be at least as sensitive or more sensitive than the general (developmental) toxicity parameters. Functional immune (antigen-challenged) parameters appear more affected than structural (non-challenged) immune parameters. Especially, antibody responses to immune challenges with keyhole limpet hemocyanine or sheep red blood cells and delayed-type hypersensitivity responses appear to provide sensitive parameters of developmental immune toxicity. Comparison with current tolerable daily intakes (TDI) and their underlying overall no observed adverse effect levels showed that for some of the compounds reviewed, the TDI may need reconsideration based on developmental immune parameters. From these data, it can be concluded that the developing immune system is very sensitive to the disruption of toxicants independent of study design. Consideration of including functional DIT parameters in current hazard identification guidelines and wider application of relevant study protocols is warranted.
A protocol for better design, application, and communication of population viability analyses.
Pe'er, Guy; Matsinos, Yiannis G; Johst, Karin; Franz, Kamila W; Turlure, Camille; Radchuk, Viktoriia; Malinowska, Agnieszka H; Curtis, Janelle M R; Naujokaitis-Lewis, Ilona; Wintle, Brendan A; Henle, Klaus
2013-08-01
Population viability analyses (PVAs) contribute to conservation theory, policy, and management. Most PVAs focus on single species within a given landscape and address a specific problem. This specificity often is reflected in the organization of published PVA descriptions. Many lack structure, making them difficult to understand, assess, repeat, or use for drawing generalizations across PVA studies. In an assessment comparing published PVAs and existing guidelines, we found that model selection was rarely justified; important parameters remained neglected or their implementation was described vaguely; limited details were given on parameter ranges, sensitivity analysis, and scenarios; and results were often reported too inconsistently to enable repeatability and comparability. Although many guidelines exist on how to design and implement reliable PVAs and standards exist for documenting and communicating ecological models in general, there is a lack of organized guidelines for designing, applying, and communicating PVAs that account for their diversity of structures and contents. To fill this gap, we integrated published guidelines and recommendations for PVA design and application, protocols for documenting ecological models in general and individual-based models in particular, and our collective experience in developing, applying, and reviewing PVAs. We devised a comprehensive protocol for the design, application, and communication of PVAs (DAC-PVA), which has 3 primary elements. The first defines what a useful PVA is; the second element provides a workflow for the design and application of a useful PVA and highlights important aspects that need to be considered during these processes; and the third element focuses on communication of PVAs to ensure clarity, comprehensiveness, repeatability, and comparability. Thereby, DAC-PVA should strengthen the credibility and relevance of PVAs for policy and management, and improve the capacity to generalize PVA findings across studies. © 2013 Society for Conservation Biology.
Review of simulation techniques for Aquifer Thermal Energy Storage (ATES)
NASA Astrophysics Data System (ADS)
Mercer, J. W.; Faust, C. R.; Miller, W. J.; Pearson, F. J., Jr.
1981-03-01
The analysis of aquifer thermal energy storage (ATES) systems rely on the results from mathematical and geochemical models. Therefore, the state-of-the-art models relevant to ATES were reviewed and evaluated. These models describe important processes active in ATES including ground-water flow, heat transport (heat flow), solute transport (movement of contaminants), and geochemical reactions. In general, available models of the saturated ground-water environment are adequate to address most concerns associated with ATES; that is, design, operation, and environmental assessment. In those cases where models are not adequate, development should be preceded by efforts to identify significant physical phenomena and relate model parameters to measurable quantities.
Optimization techniques applied to spectrum management for communications satellites
NASA Astrophysics Data System (ADS)
Ottey, H. R.; Sullivan, T. M.; Zusman, F. S.
This paper describes user requirements, algorithms and software design features for the application of optimization techniques to the management of the geostationary orbit/spectrum resource. Relevant problems include parameter sensitivity analyses, frequency and orbit position assignment coordination, and orbit position allotment planning. It is shown how integer and nonlinear programming as well as heuristic search techniques can be used to solve these problems. Formalized mathematical objective functions that define the problems are presented. Constraint functions that impart the necessary solution bounds are described. A versatile program structure is outlined, which would allow problems to be solved in stages while varying the problem space, solution resolution, objective function and constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmilehto, J.; Deppe, F.; Di Ventra, M.
Memristors are resistive elements retaining information of their past dynamics. They have garnered substantial interest due to their potential for representing a paradigm change in electronics, information processing and unconventional computing. Given the advent of quantum technologies, a design for a quantum memristor with superconducting circuits may be envisaged. Along these lines, we introduce such a quantum device whose memristive behavior arises from quasiparticle-induced tunneling when supercurrents are cancelled. Here in this paper, for realistic parameters, we find that the relevant hysteretic behavior may be observed using current state-of-the-art measurements of the phase-driven tunneling current. Finally, we develop suitable methodsmore » to quantify memory retention in the system.« less
Quantum Memristors with Superconducting Circuits
Salmilehto, J.; Deppe, F.; Di Ventra, M.; Sanz, M.; Solano, E.
2017-01-01
Memristors are resistive elements retaining information of their past dynamics. They have garnered substantial interest due to their potential for representing a paradigm change in electronics, information processing and unconventional computing. Given the advent of quantum technologies, a design for a quantum memristor with superconducting circuits may be envisaged. Along these lines, we introduce such a quantum device whose memristive behavior arises from quasiparticle-induced tunneling when supercurrents are cancelled. For realistic parameters, we find that the relevant hysteretic behavior may be observed using current state-of-the-art measurements of the phase-driven tunneling current. Finally, we develop suitable methods to quantify memory retention in the system. PMID:28195193
Enzyme reactor design under thermal inactivation.
Illanes, Andrés; Wilson, Lorena
2003-01-01
Temperature is a very relevant variable for any bioprocess. Temperature optimization of bioreactor operation is a key aspect for process economics. This is especially true for enzyme-catalyzed processes, because enzymes are complex, unstable catalysts whose technological potential relies on their operational stability. Enzyme reactor design is presented with a special emphasis on the effect of thermal inactivation. Enzyme thermal inactivation is a very complex process from a mechanistic point of view. However, for the purpose of enzyme reactor design, it has been oversimplified frequently, considering one-stage first-order kinetics of inactivation and data gathered under nonreactive conditions that poorly represent the actual conditions within the reactor. More complex mechanisms are frequent, especially in the case of immobilized enzymes, and most important is the effect of catalytic modulators (substrates and products) on enzyme stability under operation conditions. This review focuses primarily on reactor design and operation under modulated thermal inactivation. It also presents a scheme for bioreactor temperature optimization, based on validated temperature-explicit functions for all the kinetic and inactivation parameters involved. More conventional enzyme reactor design is presented merely as a background for the purpose of highlighting the need for a deeper insight into enzyme inactivation for proper bioreactor design.
The effects of leading edge and downstream film cooling on turbine vane heat transfer
NASA Astrophysics Data System (ADS)
Hylton, L. D.; Nirmalan, V.; Sultanian, B. K.; Kaufman, R. M.
1988-11-01
The progress under contract NAS3-24619 toward the goal of establishing a relevant data base for use in improving the predictive design capabilities for external heat transfer to turbine vanes, including the effect of downstream film cooling with and without leading edge showerhead film cooling. Experimental measurements were made in a two-dimensional cascade previously used to obtain vane surface heat transfer distributions on nonfilm cooled airfoils under contract NAS3-22761 and leading edge showerhead film cooled airfoils under contract NAS3-23695. The principal independent parameters (Mach number, Reynolds number, turbulence, wall-to-gas temperature ratio, coolant-to-gas temperature ratio, and coolant-to-gas pressure ratio) were maintained over ranges consistent with actual engine conditions and the test matrix was structured to provide an assessment of the independent influence of parameters of interest, namely, exit Mach number, exit Reynolds number, coolant-to-gas temperature ratio, and coolant-to-gas pressure ratio. Data provide a data base for downstream film cooled turbine vanes and extends the data bases generated in the two previous studies. The vane external heat transfer obtained indicate that considerable cooling benefits can be achieved by utilizing downstream film cooling. The data obtained and presented illustrate the interaction of the variables and should provide the airfoil designer and computational analyst the information required to improve heat transfer design capabilities for film cooled turbine airfoils.
The effects of leading edge and downstream film cooling on turbine vane heat transfer
NASA Technical Reports Server (NTRS)
Hylton, L. D.; Nirmalan, V.; Sultanian, B. K.; Kaufman, R. M.
1988-01-01
The progress under contract NAS3-24619 toward the goal of establishing a relevant data base for use in improving the predictive design capabilities for external heat transfer to turbine vanes, including the effect of downstream film cooling with and without leading edge showerhead film cooling. Experimental measurements were made in a two-dimensional cascade previously used to obtain vane surface heat transfer distributions on nonfilm cooled airfoils under contract NAS3-22761 and leading edge showerhead film cooled airfoils under contract NAS3-23695. The principal independent parameters (Mach number, Reynolds number, turbulence, wall-to-gas temperature ratio, coolant-to-gas temperature ratio, and coolant-to-gas pressure ratio) were maintained over ranges consistent with actual engine conditions and the test matrix was structured to provide an assessment of the independent influence of parameters of interest, namely, exit Mach number, exit Reynolds number, coolant-to-gas temperature ratio, and coolant-to-gas pressure ratio. Data provide a data base for downstream film cooled turbine vanes and extends the data bases generated in the two previous studies. The vane external heat transfer obtained indicate that considerable cooling benefits can be achieved by utilizing downstream film cooling. The data obtained and presented illustrate the interaction of the variables and should provide the airfoil designer and computational analyst the information required to improve heat transfer design capabilities for film cooled turbine airfoils.
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land or water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomena makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
Micoulaud-Franchi, J-A; McGonigal, A; Lopez, R; Daudet, C; Kotwas, I; Bartolomei, F
2015-12-01
The technique of electroencephalographic neurofeedback (EEG NF) emerged in the 1970s and is a technique that measures a subject's EEG signal, processes it in real time, extracts a parameter of interest and presents this information in visual or auditory form. The goal is to effectuate a behavioural modification by modulating brain activity. The EEG NF opens new therapeutic possibilities in the fields of psychiatry and neurology. However, the development of EEG NF in clinical practice requires (i) a good level of evidence of therapeutic efficacy of this technique, (ii) a good practice guide for this technique. Firstly, this article investigates selected trials with the following criteria: study design with controlled, randomized, and open or blind protocol, primary endpoint related to the mental and brain disorders treated and assessed with standardized measurement tools, identifiable EEG neurophysiological targets, underpinned by pathophysiological relevance. Trials were found for: epilepsies, migraine, stroke, chronic insomnia, attentional-deficit/hyperactivity disorder (ADHD), autism spectrum disorder, major depressive disorder, anxiety disorders, addictive disorders, psychotic disorders. Secondly, this article investigates the principles of neurofeedback therapy in line with learning theory. Different underlying therapeutic models are presented didactically between two continua: a continuum between implicit and explicit learning and a continuum between the biomedical model (centred on "the disease") and integrative biopsychosocial model of health (centred on "the illness"). The main relevant learning model is to link neurofeedback therapy with the field of cognitive remediation techniques. The methodological specificity of neurofeedback is to be guided by biologically relevant neurophysiological parameters. Guidelines for good clinical practice of EEG NF concerning technical issues of electrophysiology and of learning are suggested. These require validation by institutional structures for the clinical practice of EEG NF. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Young, Jared W.; Markou, Athina
2015-01-01
Amotivation and reward-processing deficits have long been described in patients with schizophrenia and considered large contributors to patients’ inability to integrate well in society. No effective treatments exist for these symptoms, partly because the neuromechanisms mediating such symptoms are poorly understood. Here, we propose a translational neuroscientific approach that can be used to assess reward/motivational deficits related to the negative symptoms of schizophrenia using behavioral paradigms that can also be conducted in experimental animals. By designing and using objective laboratory behavioral tools that are parallel in their parameters in rodents and humans, the neuromechanisms underlying behaviors with relevance to these symptoms of schizophrenia can be investigated. We describe tasks that measure the motivation of rodents to expend physical and cognitive effort to gain rewards, as well as probabilistic learning tasks that assess both reward learning and feedback-based decision making. The latter tasks are relevant because of demonstrated links of performance deficits correlating with negative symptoms in patients with schizophrenia. These tasks utilize operant techniques in order to investigate neural circuits targeting a specific domain across species. These tasks therefore enable the development of insights into altered mechanisms leading to negative symptom-relevant behaviors in patients with schizophrenia. Such findings will then enable the development of targeted treatments for these altered neuromechanisms and behaviors seen in schizophrenia. PMID:26194891
Comprehensive analysis of line-edge and line-width roughness for EUV lithography
NASA Astrophysics Data System (ADS)
Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai
2017-03-01
Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in semiconductor industry are primarily driven by optical imaging on photosensitive polymeric material (resists). Quality of pattern transfer can be assessed by quantifying multiple parameters such as, feature size uniformity (CD), placement, roughness, sidewall angles etc. Roughness in features primarily corresponds to variation of line edge or line width and has gained considerable significance, particularly due to shrinking feature sizes and variations of features in the same order. This has caused downstream processes (Etch (RIE), Chemical Mechanical Polish (CMP) etc.) to reconsider respective tolerance levels. A very important aspect of this work is relevance of roughness metrology from pattern formation at resist to subsequent processes, particularly electrical validity. A major drawback of current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes which effects material selection at various unit processes. In this work we present a comprehensive assessment of Line Edge and Line Width Roughness at multiple lithographic transfer processes. To simulate effect of roughness a pattern was designed with periodic jogs on the edges of lines with varying amplitudes and frequencies. There are numerous methodologies proposed to analyze roughness and in this work we apply them to programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a relevant methodology to quantify roughness with relevance across downstream processes.
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random sampling [corrected] start point outside the structure of interest, and sampling relevant objects at [corrected] sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
The Sensitivity of Precooled Air-Breathing Engine Performance to Heat Exchanger Design Parameters
NASA Astrophysics Data System (ADS)
Webber, H.; Bond, A.; Hempsell, M.
The issues relevant to propulsion design for Single Stage To Orbit (SSTO) vehicles are considered. In particular two air- breathing engine concepts involving precooling are compared; SABRE (Synergetic Air-Breathing and Rocket Engine) as designed for the Skylon SSTO launch vehicle, and a LACE (Liquid Air Cycle Engine) considered in the 1960's by the Americans for an early generation spaceplane. It is shown that through entropy minimisation the SABRE has made substantial gains in performance over the traditional LACE precooled engine concept, and has shown itself as the basis of a viable means of realising a SSTO vehicle. Further, it is demonstrated that the precooler is a major source of thermodynamic irreversibility within the engine cycle and that further reduction in entropy can be realised by increasing the heat transfer coefficient on the air side of the precooler. If this were to be achieved, it would improve the payload mass delivered to orbit by the Skylon launch vehicle by between 5 and 10%.
Simulations of Foils Irradiated by Finite Laser Spots
NASA Astrophysics Data System (ADS)
Phillips, Lee
2006-10-01
Recent proposed designs (Obenchain et al., Phys. Plasmas 13 056320 (2006)) for direct-drive ICF targets for energy applications involve high implosion velocities with lower laser energies combined with higher irradiances. The use of high irradiances increases the likelihood of deleterious laser plasma instabilities (LPI) that may lead, for example, to the generation of fast electrons. The proposed use of a 248 nm KrF laser is expected to minimize LPI, and this is being studied by experiments on NRL's NIKE laser. Here we report on simulations aimed at designing and interpreting these experiments. The 2d simulations employ a modification of the FAST code to ablate plasma from CH and DT foils using laser pulses with arbitrary spatial and temporal profiles. These include the customary hypergaussian NIKE profile, gaussian profiles, and combinations of these. The simulations model the structure of the ablating plasma and the absorption of the laser light, providing parameters for design of the experiment and indicating where the relevant LPI (two-plasmon, Raman) may be observed.
Computational Analysis of the G-III Laminar Flow Glove
NASA Technical Reports Server (NTRS)
Malik, Mujeeb R.; Liao, Wei; Lee-Rausch, Elizabeth M.; Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan
2011-01-01
Under NASA's Environmentally Responsible Aviation Project, flight experiments are planned with the primary objective of demonstrating the Discrete Roughness Elements (DRE) technology for passive laminar flow control at chord Reynolds numbers relevant to transport aircraft. In this paper, we present a preliminary computational assessment of the Gulfstream-III (G-III) aircraft wing-glove designed to attain natural laminar flow for the leading-edge sweep angle of 34.6deg. Analysis for a flight Mach number of 0.75 shows that it should be possible to achieve natural laminar flow for twice the transition Reynolds number ever achieved at this sweep angle. However, the wing-glove needs to be redesigned to effectively demonstrate passive laminar flow control using DREs. As a by-product of the computational assessment, effect of surface curvature on stationary crossflow disturbances is found to be strongly stabilizing for the current design, and it is suggested that convex surface curvature could be used as a control parameter for natural laminar flow design, provided transition occurs via stationary crossflow disturbances.
[Clinical relevance of periodic limb movements during sleep in obstructive sleep apnea patients].
Iriarte, J; Alegre, M; Irimia, P; Urriza, J; Artieda, J
The periodic limb movements disorder (PLMD) is frequently associated with the obstructive sleep apnea syndrome (OSAS), but the prevalence and clinical relevance of this association have not been studied in detail. The objectives were to make a prospective study on the prevalence of PLMD in patients with OSAS, and correlate this association with clinical and respiratory parameters. Forty-two patients diagnosed with OSAS, without clinical suspicion of PLMD, underwent a polysomnographic study. Clinical symptoms and signs were evaluated with an structured questionnaire, and respiratory parameters were obtained from the nocturnal study. Periodic limb movements were found in 10 patients (24%). There were no differences in clinical parameters between both groups (with and without periodical limb movements). However, respiratory parameters were significantly worse in patients without PLMD. PLMD is very frequent in patients with OSAS, and can contribute to worsen clinical signs and symptoms in these patients independently from respiratory parameters.
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
NASA Astrophysics Data System (ADS)
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
Simulating Astrophysical Jets with Inertial Confinement Fusion Machines
NASA Astrophysics Data System (ADS)
Blue, Brent
2005-10-01
Large-scale directional outflows of supersonic plasma, also known as `jets', are ubiquitous phenomena in astrophysics. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Modeling Supernova Shocks with Intense Lasers.
NASA Astrophysics Data System (ADS)
Blue, Brent
2006-04-01
Large-scale directional outflows of supersonic plasma are ubiquitous phenomena in astrophysics, with specific application to supernovae. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Characterization of Piezoelectric Actuators for Flow Control over a Wing
NASA Technical Reports Server (NTRS)
Mossi, Karla M.; Bryant, Robert G.
2004-01-01
During the past decade, piezoelectric actuators as the active element in synthetic jets demonstrated that they could significantly enhance the overall lift on an airfoil. However, durability, system weight, size, and power have limited their use outside a laboratory. These problems are not trivial, since piezoelectric actuators are physically brittle and display limited displacement. The objective of this study is to characterize the relevant properties for the design of a synthetic jet utilizing three types of piezoelectric actuators as mechanical diaphragms, Radial Field Diaphragms, Thunders, and Bimorphs so that the shape cavity volume does not exceed 147.5 cubic centimeters on a 7centimeter x 7centimeter aerial coverage. These piezoelectric elements were selected because of their geometry, and overall free-displacement. Each actuator was affixed about its perimeter in a cavity, and relevant parameters such as clamped displacement variations with voltage and frequency, air velocities produced through an aperture, and sound pressure levels produced by the piezoelectric diaphragms were measured.
Charge transport in organic semiconductors.
Bässler, Heinz; Köhler, Anna
2012-01-01
Modern optoelectronic devices, such as light-emitting diodes, field-effect transistors and organic solar cells require well controlled motion of charges for their efficient operation. The understanding of the processes that determine charge transport is therefore of paramount importance for designing materials with improved structure-property relationships. Before discussing different regimes of charge transport in organic semiconductors, we present a brief introduction into the conceptual framework in which we interpret the relevant photophysical processes. That is, we compare a molecular picture of electronic excitations against the Su-Schrieffer-Heeger semiconductor band model. After a brief description of experimental techniques needed to measure charge mobilities, we then elaborate on the parameters controlling charge transport in technologically relevant materials. Thus, we consider the influences of electronic coupling between molecular units, disorder, polaronic effects and space charge. A particular focus is given to the recent progress made in understanding charge transport on short time scales and short length scales. The mechanism for charge injection is briefly addressed towards the end of this chapter.
The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors
NASA Astrophysics Data System (ADS)
Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.
2017-08-01
The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio frequency negative-ions source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011, up to now—when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurements of several MITICA components are at a well advanced stage.
Progress of LMJ-relevant implosions experiments on OMEGA
NASA Astrophysics Data System (ADS)
Casner, A.; Philippe, F.; Tassin, V.; Seytor, P.; Monteil, M.-C.; Gauthier, P.; Park, H. S.; Robey, H.; Ross, J.; Amendt, P.; Girard, F.; Villette, B.; Reverdin, C.; Loiseau, P.; Caillaud, T.; Landoas, O.; Li, C. K.; Petrasso, R.; Seguin, F.; Rosenberg, M.; Renaudin, P.
2013-11-01
In preparation of the first ignition attempts on the Laser Mégajoule (LMJ), an experimental program is being pursued on OMEGA to investigate LMJ-relevant hohlraums. First, radiation temperature levels close to 300 eV were recently achieved in reduced-scale hohlraums with modest backscatter losses. Regarding the baseline target design for fusion experiments on LMJ, an extensive experimental database has also been collected for scaled implosions experiments in both empty and gas-filled rugby-shaped hohlraums. We acquired a full picture of hohlraum energetics and implosion dynamics. Not only did the rugby hohlraums show significantly higher x-ray drive energy over the cylindrical hohlraums, but symmetry control by power balance was demonstrated, as well as high-performance D2 implosions enabling the use of a complete suite of neutrons diagnostics. Charged particle diagnostics provide complementary insights into the physics of these x-ray driven implosions. An overview of these results demonstrates our ability to control the key parameters driving the implosion, lending more confidence in extrapolations to ignition-scale targets.
Weak Lensing from Space I: Instrumentation and Survey Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhodes, Jason; Refregier, Alexandre; Massey, Richard
A wide field space-based imaging telescope is necessary to fully exploit the technique of observing dark matter via weak gravitational lensing. This first paper in a three part series outlines the survey strategies and relevant instrumental parameters for such a mission. As a concrete example of hardware design, we consider the proposed Supernova/Acceleration Probe (SNAP). Using SNAP engineering models, we quantify the major contributions to this telescope's Point Spread Function (PSF). These PSF contributions are relevant to any similar wide field space telescope. We further show that the PSF of SNAP or a similar telescope will be smaller than currentmore » ground-based PSFs, and more isotropic and stable over time than the PSF of the Hubble Space Telescope. We outline survey strategies for two different regimes - a ''wide'' 300 square degree survey and a ''deep'' 15 square degree survey that will accomplish various weak lensing goals including statistical studies and dark matter mapping.« less
NASA Astrophysics Data System (ADS)
Bau, Sébastien; Witschger, Olivier; Gensdarmes, François; Thomas, Dominique
2009-05-01
An increasing number of experimental and theoretical studies focus on airborne nanoparticles (NP) in relation with many aspects of risk assessment to move forward our understanding of the hazards, the actual exposures in the workplace, and the limits of engineering controls and personal protective equipment with regard to NP. As a consequence, generating airborne NP with controlled properties constitutes an important challenge. In parallel, toxicological studies have been carried out, and most of them support the concept that surface-area could be a relevant metric for characterizing exposure to airborne NP [1]. To provide NP surface-area concentration measurements, some direct-reading instruments have been designed, based on attachment rate of unipolar ions to NP by diffusion. However, very few information is available concerning the performances of these instruments and the parameters that could affect their responses. In this context, our work aims at characterizing the actual available instruments providing airborne NP surface-area concentration. The instruments (a- LQ1-DC, Matter Engineering; b-AeroTrak™ 9000, TSI; c- NSAM, TSI model 3550;) are thought to be relevant for further workplace exposure characterization and monitoring. To achieve our work, an experimental facility (named CAIMAN) was specially designed, built and characterized.
Battery electric vehicles - implications for the driver interface.
Neumann, Isabel; Krems, Josef F
2016-03-01
The current study examines the human-machine interface of a battery electric vehicle (BEV) from a user-perspective, focussing on the evaluation of BEV-specific displays, the relevance of provided information and challenges for drivers due to the concept of electricity in a road vehicle. A sample of 40 users drove a BEV for 6 months. Data were gathered at three points of data collection. Participants perceived the BEV-specific displays as only moderately reliable and helpful for estimating the displayed parameters. This was even less the case after driving the BEV for 3 months. A taxonomy of user requirements was compiled revealing the need for improved and additional information, especially regarding energy consumption and efficiency. Drivers had difficulty understanding electrical units and the energy consumption of the BEV. On the background of general principles for display design, results provide implications how to display relevant information and how to facilitate drivers' understanding of energy consumption in BEVs. Practitioner Summary: Battery electric vehicle (BEV) displays need to incorporate new information. A taxonomy of user requirements was compiled revealing the need for improved and additional information in the BEV interface. Furthermore, drivers had trouble understanding electrical units and energy consumption; therefore, appropriate assistance is required. Design principles which are specifically important in the BEV context are discussed.
Progress in Fast Ignition Studies with Electrons and Protons
NASA Astrophysics Data System (ADS)
MacKinnon, A. J.; Akli, K. U.; Bartal, T.; Beg, F. N.; Chawla, S.; Chen, C. D.; Chen, H.; Chen, S.; Chowdhury, E.; Fedosejevs, R.; Freeman, R. R.; Hey, D.; Higginson, D.; Key, M. H.; King, J. A.; Link, A.; Ma, T.; MacPhee, A. G.; Offermann, D.; Ovchinnikov, V.; Pasley, J.; Patel, P. K.; Ping, Y.; Schumacher, D. W.; Stephens, R. B.; Tsui, Y. Y.; Wei, M. S.; Van Woerkom, L. D.
2009-09-01
Isochoric heating of inertially confined fusion plasmas by laser driven MeV electrons or protons is an area of great topical interest in the inertial confinement fusion community, particularly with respect to the fast ignition (FI) concept for initiating burn in a fusion capsule. In order to investigate critical aspects needed for a FI point design, experiments were performed to study 1) laser-to-electrons or protons conversion issues and 2) laser-cone interactions including prepulse effects. A large suite of diagnostics was utilized to study these important parameters. Using cone—wire surrogate targets it is found that pre-pulse levels on medium scale lasers such as Titan at Lawrence Livermore National Laboratory produce long scale length plasmas that strongly effect coupling of the laser to FI relevant electrons inside cones. The cone wall thickness also affects coupling to the wire. Conversion efficiency to protons has also been measured and modeled as a function of target thickness, material. Conclusions from the proton and electron source experiments will be presented. Recent advances in modeling electron transport and innovative target designs for reducing igniter energy and increasing gain curves will also be discussed. In conclusion, a program of study will be presented based on understanding the fundamental physics of the electron or proton source relevant to FI.
NASA Astrophysics Data System (ADS)
Weisenburger, A.; Schroer, C.; Jianu, A.; Heinzel, A.; Konys, J.; Steiner, H.; Müller, G.; Fazio, C.; Gessi, A.; Babayan, S.; Kobzova, A.; Martinelli, L.; Ginestar, K.; Balbaud-Célerier, F.; Martín-Muñoz, F. J.; Soler Crespo, L.
2011-08-01
Considering the status of knowledge on corrosion and corrosion protection and especially the need for long term compatibility data of structural materials in HLM a set of experiments to generate reliable long term data was defined and performed. The long term corrosion behaviour of the two structural materials foreseen in ADS, 316L and T91, was investigated in the design relevant temperature field, i.e. from 300 to 550 °C. The operational window of the two steels in this temperature range was identified and all oxidation data were used to develop and validate the models of oxide scale growth in PbBi. A mechanistic model capable to predict the oxidation rate applying some experimentally fitted parameters has been developed. This model assumes parabolic oxidation and might be used for design and safety relevant investigations in future. Studies on corrosion barrier development allowed to define the required Al content for the formation of thin alumina scales in LBE. These results as well as future steps and required improvements are discussed. Variation of experimental conditions clearly showed that specific care has to be taken with respect to local flow conditions and oxygen concentrations.
Feng, Wenhuan; Wang, Hongdong; Zhang, Pengzi; Gao, Caixia; Tao, Junxian; Ge, Zhijuan; Zhu, Dalong; Bi, Yan
2017-07-01
Structural disruption of gut microbiota contributes to the development of non-alcoholic fatty liver disease (NAFLD) and modulating the gut microbiota represents a novel strategy for NAFLD prevention. Although previous studies have demonstrated that curcumin alleviates hepatic steatosis, its effect on the gut microbiota modulation has not been investigated. Next generation sequencing and multivariate analysis were utilized to evaluate the structural changes of gut microbiota in a NAFLD rat model induced by high fat-diet (HFD) feeding. We found that curcumin attenuated hepatic ectopic fat deposition, improved intestinal barrier integrity, and alleviated metabolic endotoxemia in HFD-fed rats. More importantly, curcumin dramatically shifted the overall structure of the HFD-disrupted gut microbiota toward that of lean rats fed a normal diet and altered the gut microbial composition. The abundances of 110 operational taxonomic units (OTUs) were altered by curcumin. Seventy-six altered OTUs were significantly correlated with one or more hepatic steatosis associated parameters and designated 'functionally relevant phylotypes'. Thirty-six of the 47 functionally relevant OTUs that were positively correlated with hepatic steatosis associated parameters were reduced by curcumin. These results indicate that curcumin alleviates hepatic steatosis in part through stain-specific impacts on hepatic steatosis associated phylotypes of gut microbiota in rats. Compounds with antimicrobial activities should be further investigated as novel adjunctive therapies for NAFLD. Copyright © 2017 Elsevier B.V. All rights reserved.
Minois, Nathan; Lauwers-Cances, Valérie; Savy, Stéphanie; Attal, Michel; Andrieu, Sandrine; Anisimov, Vladimir; Savy, Nicolas
2017-10-15
At the design of clinical trial operation, a question of a paramount interest is how long it takes to recruit a given number of patients. Modelling the recruitment dynamics is the necessary step to answer this question. Poisson-gamma model provides very convenient, flexible and realistic approach. This model allows predicting the trial duration using data collected at an interim time with very good accuracy. A natural question arises: how to evaluate the parameters of recruitment model before the trial begins? The question is harder to handle as there are no recruitment data available for this trial. However, if there exist similar completed trials, it is appealing to use data from these trials to investigate feasibility of the recruitment process. In this paper, the authors explore the recruitment data of two similar clinical trials (Intergroupe Francais du Myélome 2005 and 2009). It is shown that the natural idea of plugging the historical rates estimated from the completed trial in the same centres of the new trial for predicting recruitment is not a relevant strategy. In contrast, using the parameters of a gamma distribution of the rates estimated from the completed trial in the recruitment dynamic model of the new trial provides reasonable predictive properties with relevant confidence intervals. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Laboratory Modelling of Volcano Plumbing Systems: a review
NASA Astrophysics Data System (ADS)
Galland, Olivier; Holohan, Eoghan P.; van Wyk de Vries, Benjamin; Burchardt, Steffi
2015-04-01
Earth scientists have, since the XIX century, tried to replicate or model geological processes in controlled laboratory experiments. In particular, laboratory modelling has been used study the development of volcanic plumbing systems, which sets the stage for volcanic eruptions. Volcanic plumbing systems involve complex processes that act at length scales of microns to thousands of kilometres and at time scales from milliseconds to billions of years, and laboratory models appear very suitable to address them. This contribution reviews laboratory models dedicated to study the dynamics of volcano plumbing systems (Galland et al., Accepted). The foundation of laboratory models is the choice of relevant model materials, both for rock and magma. We outline a broad range of suitable model materials used in the literature. These materials exhibit very diverse rheological behaviours, so their careful choice is a crucial first step for the proper experiment design. The second step is model scaling, which successively calls upon: (1) the principle of dimensional analysis, and (2) the principle of similarity. The dimensional analysis aims to identify the dimensionless physical parameters that govern the underlying processes. The principle of similarity states that "a laboratory model is equivalent to his geological analogue if the dimensionless parameters identified in the dimensional analysis are identical, even if the values of the governing dimensional parameters differ greatly" (Barenblatt, 2003). The application of these two steps ensures a solid understanding and geological relevance of the laboratory models. In addition, this procedure shows that laboratory models are not designed to exactly mimic a given geological system, but to understand underlying generic processes, either individually or in combination, and to identify or demonstrate physical laws that govern these processes. From this perspective, we review the numerous applications of laboratory models to understand the distinct key features of volcanic plumbing systems: dykes, cone sheets, sills, laccoliths, caldera-related structures, ground deformation, magma/fault interactions, and explosive vents. Barenblatt, G.I., 2003. Scaling. Cambridge University Press, Cambridge. Galland, O., Holohan, E.P., van Wyk de Vries, B., Burchardt, S., Accepted. Laboratory modelling of volcanic plumbing systems: A review, in: Breitkreuz, C., Rocchi, S. (Eds.), Laccoliths, sills and dykes: Physical geology of shallow level magmatic systems. Springer.
A mobile phone based alarm system for supervising vital parameters in free moving rats.
Kellermann, Kristine; Kreuzer, Matthias; Omerovich, Adem; Hoetzinger, Franziska; Kochs, Eberhard F; Jungwirth, Bettina
2012-02-23
Study protocols involving experimental animals often require the monitoring of different parameters not only in anesthetized, but also in free moving animals. Most animal research involves small rodents, in which continuously monitoring parameters such as temperature and heart rate is very stressful for the awake animals or simply not possible. Aim of the underlying study was to monitor heart rate, temperature and activity and to assess inflammation in the heart, lungs, liver and kidney in the early postoperative phase after experimental cardiopulmonary bypass involving 45 min of deep hypothermic circulatory arrest in rats. Besides continuous monitoring of heart rate, temperature and behavioural activity, the main focus was on avoiding uncontrolled death of an animal in the early postoperative phase in order to harvest relevant organs before autolysis would render them unsuitable for the assessment of inflammation. We therefore set up a telemetry-based system (Data Science International, DSI™) that continuously monitored the rat's temperature, heart rate and activity in their cages. The data collection using telemetry was combined with an analysis software (Microsoft excel™), a webmail application (GMX) and a text message-service. Whenever an animal's heart rate dropped below the pre-defined threshold of 150 beats per minute (bpm), a notification in the form of a text message was automatically sent to the experimenter's mobile phone. With a positive predictive value of 93.1% and a negative predictive value of 90.5%, the designed surveillance and alarm system proved a reliable and inexpensive tool to avoid uncontrolled death in order to minimize suffering and harvest relevant organs before autolysis would set in. This combination of a telemetry-based system and software tools provided us with a reliable notification system of imminent death. The system's high positive predictive value helped to avoid uncontrolled death and facilitated timely organ harvesting. Additionally we were able to markedly reduce the drop out rate of experimental animals, and therefore the total number of animals used in our study. This system can be easily adapted to different study designs and prove a helpful tool to relieve stress and more importantly help to reduce animal numbers.
Qualitative criteria and thresholds for low noise asphalt mixture design
NASA Astrophysics Data System (ADS)
Vaitkus, A.; Andriejauskas, T.; Gražulytė, J.; Šernas, O.; Vorobjovas, V.; Kleizienė, R.
2018-05-01
Low noise asphalt pavements are cost efficient and cost effective alternative for road traffic noise mitigation comparing with noise barriers, façade insulation and other known noise mitigation measures. However, design of low noise asphalt mixtures strongly depends on climate and traffic peculiarities of different regions. Severe climate regions face problems related with short durability of low noise asphalt mixtures in terms of considerable negative impact of harsh climate conditions (frost-thaw, large temperature fluctuations, hydrological behaviour, etc.) and traffic (traffic loads, traffic volumes, studded tyres, etc.). Thus there is a need to find balance between mechanical and acoustical durability as well as to ensure adequate pavement skid resistance for road safety purposes. Paper presents analysis of the qualitative criteria and design parameters thresholds of low noise asphalt mixtures. Different asphalt mixture composition materials (grading, aggregate, binder, additives, etc.) and relevant asphalt layer properties (air void content, texture, evenness, degree of compaction, etc.) were investigated and assessed according their suitability for durable and effective low noise pavements. Paper concluded with the overview of requirements, qualitative criteria and thresholds for low noise asphalt mixture design for severe climate regions.
Application of additive laser technologies in the gas turbine blades design process
NASA Astrophysics Data System (ADS)
Shevchenko, I. V.; Rogalev, A. N.; Osipov, S. K.; Bychkov, N. M.; Komarov, I. I.
2017-11-01
An emergence of modern innovative technologies requires delivering new and modernization existing design and production processes. It is especially relevant for designing the high-temperature turbines of gas turbine engines, development of which is characterized by a transition to higher parameters of working medium in order to improve their efficient performance. A design technique for gas turbine blades based on predictive verification of thermal and hydraulic models of their cooling systems by testing of a blade prototype fabricated using the selective laser melting technology was presented in this article. Technique was proven at the time of development of the first stage blade cooling system for the high-pressure turbine. An experimental procedure for verification of a thermal model of the blades with convective cooling systems based on the comparison of heat-flux density obtained from the numerical simulation data and results of tests in a liquid-metal thermostat was developed. The techniques makes it possible to obtain an experimentally tested blade version and to exclude its experimental adjustment after the start of mass production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
F. Cui; F.J. Presuel-Moreno; R.G. Kelly
2005-10-13
The ability of a SS316L surface wetted with a thin electrolyte layer to serve as an effective cathode for an active localized corrosion site was studied computationally. The dependence of the total net cathodic current, I{sub net}, supplied at the repassivation potential E{sub rp} (of the anodic crevice) on relevant physical parameters including water layer thickness (WL), chloride concentration ([Cl{sup -}]) and length of cathode (Lc) were investigated using a three-level, full factorial design. The effects of kinetic parameters including the exchange current density (i{sub o,c}) and Tafel slope ({beta}{sub c}) of oxygen reduction, the anodic passive current density (i{submore » p}) (on the cathodic surface), and E{sub rp} were studied as well using three-level full factorial designs of [Cl{sup -}] and Lc with a fixed WL of 25 {micro}m. The study found that all the three parameters WL, [Cl{sup -}] and Lc as well as the interactions of Lc x WL and Lc x [Cl{sup -}] had significant impact on I{sub net}. A five-factor regression equation was obtained which fits the computation results reasonably well, but demonstrated that interactions are more complicated than can be explained with a simple linear model. Significant effects on I{sub net} were found upon varying either i{sub o,c}, {beta}{sub c}, or E{sub rp}, whereas i{sub p} in the studied range was found to have little impact. It was observed that I{sub net} asymptotically approached maximum values (I{sub max}) when Lc increased to critical minimum values. I{sub max} can be used to determine the stability of coupled localized corrosion and the critical Lc provides important information for experimental design and corrosion protection.« less
Filter design for the detection of compact sources based on the Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.
2005-05-01
This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and it takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). The value of these two parameters can be determined, given the a priori probability density function of the amplitudes of the sources, such that the filter optimizes the performance of the detector in the sense that it gives the maximum number of real detections once it has fixed the number density of spurious sources. The new filter includes as particular cases the standard MF and the SAF. As a result of its design, the BSAF outperforms these filters. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) improves significantly the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. Finally, an estimation of the amplitude of the source (most probable value) is introduced and it is proven that such an estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be more responsive to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle, and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
NASA Astrophysics Data System (ADS)
Bachmann-Machnik, Anna; Meyer, Daniel; Waldhoff, Axel; Fuchs, Stephan; Dittmer, Ulrich
2018-04-01
Retention Soil Filters (RSFs), a form of vertical flow constructed wetlands specifically designed for combined sewer overflow (CSO) treatment, have proven to be an effective tool to mitigate negative impacts of CSOs on receiving water bodies. Long-term hydrologic simulations are used to predict the emissions from urban drainage systems during planning of stormwater management measures. So far no universally accepted model for RSF simulation exists. When simulating hydraulics and water quality in RSFs, an appropriate level of detail must be chosen for reasonable balancing between model complexity and model handling, considering the model input's level of uncertainty. The most crucial parameters determining the resultant uncertainties of the integrated sewer system and filter bed model were identified by evaluating a virtual drainage system with a Retention Soil Filter for CSO treatment. To determine reasonable parameter ranges for RSF simulations, data of 207 events from six full-scale RSF plants in Germany were analyzed. Data evaluation shows that even though different plants with varying loading and operation modes were examined, a simple model is sufficient to assess relevant suspended solids (SS), chemical oxygen demand (COD) and NH4 emissions from RSFs. Two conceptual RSF models with different degrees of complexity were assessed. These models were developed based on evaluation of data from full scale RSF plants and column experiments. Incorporated model processes are ammonium adsorption in the filter layer and degradation during subsequent dry weather period, filtration of SS and particulate COD (XCOD) to a constant background concentration and removal of solute COD (SCOD) by a constant removal rate during filter passage as well as sedimentation of SS and XCOD in the filter overflow. XCOD, SS and ammonium loads as well as ammonium concentration peaks are discharged primarily via RSF overflow not passing through the filter bed. Uncertainties of the integrated simulation of the sewer system and RSF model mainly originate from the model parameters of the hydrologic sewer system model.
Blue enhanced light sources: opportunities and risks
NASA Astrophysics Data System (ADS)
Lang, Dieter
2012-03-01
Natural daylight is characterized by high proportions of blue light. By proof of a third type of photoreceptor in the human eye which is only sensitive in this spectral region and by subsequent studies it has become obvious that these blue proportions are essential for human health and well being. In various studies beneficial effects of indoor lighting with higher blue spectral proportions have been proven. On the other hand with increasing use of light sources having enhanced blue light for indoor illumination questions are arising about potential health risks attributed to blue light. Especially LED are showing distinct emission characteristics in the blue. Recently the French agency for food, environmental and occupational health & safety ANSES have raised the question on health issues related to LED light sources and have claimed to avoid use of LED for lighting in schools. In this paper parameters which are relevant for potential health risks will be shown and their contribution to risk factors will quantitatively be discussed. It will be shown how to differentiate between photometric parameters for assessment of beneficial as well as hazardous effects. Guidelines will be discussed how blue enhanced light sources can be used in applications to optimally support human health and well being and simultaneously avoid any risks attributed to blue light by a proper design of lighting parameters. In the conclusion it will be shown that no inherent health risks are related to LED lighting with a proper lighting design.
A simulation study of spectral Čerenkov luminescence imaging for tumour margin estimation
NASA Astrophysics Data System (ADS)
Calvert, Nick; Helo, Yusef; Mertzanidou, Thomy; Tuch, David S.; Arridge, Simon R.; Stoyanov, Danail
2017-03-01
Breast cancer is the most common cancer in women in the world. Breast-conserving surgery (BCS) is a standard surgical treatment for breast cancer with the key objective of removing breast tissue, maintaining a negative surgical margin and providing a good cosmetic outcome. A positive surgical margin, meaning the presence of cancerous tissues on the surface of the breast specimen after surgery, is associated with local recurrence after therapy. In this study, we investigate a new imaging modality based on Cerenkov luminescence imaging (CLI) for the purpose of detecting positive surgical margins during BCS. We develop Monte Carlo (MC) simulations using the Geant4 nuclear physics simulation toolbox to study the spectrum of photons emitted given 18F-FDG and breast tissue properties. The resulting simulation spectra show that the CLI signal contains information that may be used to estimate whether the cancerous cells are at a depth of less than 1 mm or greater than 1 mm given appropriate imaging system design and sensitivity. The simulation spectra also show that when the source is located within 1 mm of the surface, the tissue parameters are not relevant to the model as the spectra do not vary significantly. At larger depths, however, the spectral information varies significantly with breast optical parameters, having implications for further studies and system design. While promising, further studies are needed to quantify the CLI response to more accurately incorporate tissue specific parameters and patient specific anatomical details.
(Bio)Sensing Using Nanoparticle Arrays: On the Effect of Analyte Transport on Sensitivity.
Lynn, N Scott; Homola, Jiří
2016-12-20
There has recently been an extensive amount of work regarding the development of optical, electrical, and mechanical (bio)sensors employing planar arrays of surface-bound nanoparticles. The sensor output for these systems is dependent on the rate at which analyte is transported to, and interacts with, each nanoparticle in the array. There has so far been little discussion on the relationship between the design parameters of an array and the interplay of convection, diffusion, and reaction. Moreover, current methods providing such information require extensive computational simulation. Here we demonstrate that the rate of analyte transport to a nanoparticle array can be quantified analytically. We show that such rates are bound by both the rate to a single NP and that to a planar surface (having equivalent size as the array), with the specific rate determined by the fill fraction: the ratio between the total surface area used for biomolecular capture with respect to the entire sensing area. We characterize analyte transport to arrays with respect to changes in numerous parameters relevant to experiment, including variation of the nanoparticle shape and size, packing density, flow conditions, and analyte diffusivity. We also explore how analyte capture is dependent on the kinetic parameters related to an affinity-based biosensor, and furthermore, we classify the conditions under which the array might be diffusion- or reaction-limited. The results obtained herein are applicable toward the design and optimization of all (bio)sensors based on nanoparticle arrays.
Thommes, Markus; Kleinebudde, Peter
2007-11-09
The aim of this study was to systematically evaluate the pelletization process parameters of kappa-carrageenan-containing formulations. The study dealt with the effect of 4 process parameters--screw speed, number of die holes, friction plate speed, and spheronizer temperature--on the pellet properties of shape, size, size distribution, tensile strength, and drug release. These parameters were varied systematically in a 2(4) full factorial design. In addition, 4 drugs--phenacetin, chloramphenicol, dimenhydrinate, and lidocaine hydrochloride--were investigated under constant process conditions. The most spherical pellets were achieved in a high yield by using a large number of die holes and a high spheronizer speed. There was no relevant influence of the investigated process parameters on the size distribution, mechanical stability, and drug release. The poorly soluble drugs, phenacetin and chloramphenicol, resulted in pellets with adequate shape, size, and tensile strength and a fast drug release. The salts of dimenhydrinate and lidocaine affected pellet shape, mechanical stability, and the drug release properties using an aqueous solution of pH 3 as a granulation liquid. In the case of dimenhydrinate, this was attributed to the ionic interactions with kappa-carrageenan, resulting in a stable matrix during dissolution that did not disintegrate. The effect of lidocaine is comparable to the effect of sodium ions, which suppress the gelling of carrageenan, resulting in pellets with fast disintegration and drug release characteristics. The pellet properties are affected by the process parameters and the active pharmaceutical ingredient used.
Sakwanichol, Jarunee; Puttipipatkhachorn, Satit; Ingenerf, Gernot; Kleinebudde, Peter
2012-01-01
Different experimental factorial designs were employed to evaluate granule properties obtained from oscillating granulator and roll mill. Four oscillating-granulator parameters were varied, i.e. rotor speed, oscillating angle, aperture of mesh screen and rotor type. Six roll-mill parameters that were throughput, speed ratio in both first and second stages, gap between roll pair in both stages and roll-surface texture were also investigated. Afterwards, the granule properties obtained from two milling types with similar median particle size were compared. All milling parameters in both milling types affected significantly the median particle size, size distribution and amount of fine particles (P < 0.05), except the rotor types of oscillating granulator on fines. Only three milling parameters influenced significantly the flowability (P < 0.05). These were the throughput and the gap size in the first stage of roll mill and the sieve size of oscillating granulator. In comparison between milling types, the differences of granule properties were not practically relevant. However, the roll mill had much higher capacity than the oscillating granulator about seven times, resulting in improving energy savings per unit of product. Consequently, the roll mill can be applied instead of oscillating granulator for roll compaction/dry granulation technique.
NASA Technical Reports Server (NTRS)
Sen, A. K.; Gupta, A. K. D.; Karmakar, P. K.; Barman, S. D.; Bhattacharya, A. B.; Purkait, N.; Gupta, M. K. D.; Sehra, J. S.
1985-01-01
The advent of satellite communication for global coverage has apparently indicated a renewed interest in the studies of radio wave propagation through the atmosphere, in the VHF, UHF and microwave bands. The extensive measurements of atmosphere constituents, dynamics and radio meterological parameters during the Middle Atmosphere Program (MAP) have opened up further the possibilities of studying tropospheric radio wave propagation parameters, relevant to Earth/space link design. The three basic parameters of significance to radio propagation are thermal emission, absorption and group delay of the atmosphere, all of which are controlled largely by the water vapor content in the atmosphere, particular at microwave bands. As good emitters are also good absorbers, the atmospheric emission as well as the absorption attains a maximum at the frequency of 22.235 GHz, which is the peak of the water vapor line. The group delay is practically independent of frequency in the VHF, UHF and microwave bands. However, all three parameters exhibit a similar seasonal dependence originating presumably from the seasonal dependence of the water vapor content. Some of the interesting results obtained from analyses of radiosonde data over the Indian subcontinent collected by the India Meteorological Department is presented.
Image quality phantom and parameters for high spatial resolution small-animal SPECT
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Harteveld, Anita A.; Meeuwis, Antoi P. W.; Disselhorst, Jonathan A.; Beekman, Freek J.; Oyen, Wim J. G.; Boerman, Otto C.
2011-10-01
At present, generally accepted standards to characterize small-animal single photon emission tomographs (SPECT) do not exist. Whereas for small-animal positron emission tomography (PET), the NEMA NU 4-2008 guidelines are available, such standards are still lacking for small-animal SPECT. More specifically, a dedicated image quality (IQ) phantom and corresponding IQ parameters are absent. The structures of the existing PET IQ phantom are too large to fully characterize the sub-millimeter spatial resolution of modern multi-pinhole SPECT scanners, and its diameter will not fit into all scanners when operating in high spatial resolution mode. We therefore designed and constructed an adapted IQ phantom with smaller internal structures and external diameter, and a facility to guarantee complete filling of the smallest rods. The associated IQ parameters were adapted from NEMA NU 4. An additional parameter, effective whole-body sensitivity, was defined since this was considered relevant in view of the variable size of the field of view and the use of multiple bed positions as encountered in modern small-animal SPECT scanners. The usefulness of the phantom was demonstrated for 99mTc in a USPECT-II scanner operated in whole-body scanning mode using a multi-pinhole mouse collimator with 0.6 mm pinhole diameter.
Jiang, Fangming; Peng, Peng
2016-01-01
Underutilization due to performance limitations imposed by species and charge transports is one of the key issues that persist with various lithium-ion batteries. To elucidate the relevant mechanisms, two groups of characteristic parameters were proposed. The first group contains three characteristic time parameters, namely: (1) te, which characterizes the Li-ion transport rate in the electrolyte phase, (2) ts, characterizing the lithium diffusion rate in the solid active materials, and (3) tc, describing the local Li-ion depletion rate in electrolyte phase at the electrolyte/electrode interface due to electrochemical reactions. The second group contains two electric resistance parameters: Re and Rs, which represent respectively, the equivalent ionic transport resistance and the effective electronic transport resistance in the electrode. Electrochemical modeling and simulations to the discharge process of LiCoO2 cells reveal that: (1) if te, ts and tc are on the same order of magnitude, the species transports may not cause any performance limitations to the battery; (2) the underlying mechanisms of performance limitations due to thick electrode, high-rate operation, and large-sized active material particles as well as effects of charge transports are revealed. The findings may be used as quantitative guidelines in the development and design of more advanced Li-ion batteries. PMID:27599870
Seiniger, Patrick; Bartels, Oliver; Pastor, Claus; Wisch, Marcus
2013-01-01
It is commonly agreed that active safety will have a significant impact on reducing accident figures for pedestrians and probably also bicyclists. However, chances and limitations for active safety systems have only been derived based on accident data and the current state of the art, based on proprietary simulation models. The objective of this article is to investigate these chances and limitations by developing an open simulation model. This article introduces a simulation model, incorporating accident kinematics, driving dynamics, driver reaction times, pedestrian dynamics, performance parameters of different autonomous emergency braking (AEB) generations, as well as legal and logical limitations. The level of detail for available pedestrian accident data is limited. Relevant variables, especially timing of the pedestrian appearance and the pedestrian's moving speed, are estimated using assumptions. The model in this article uses the fact that a pedestrian and a vehicle in an accident must have been in the same spot at the same time and defines the impact position as a relevant accident parameter, which is usually available from accident data. The calculations done within the model identify the possible timing available for braking by an AEB system as well as the possible speed reduction for different accident scenarios as well as for different system configurations. The simulation model identifies the lateral impact position of the pedestrian as a significant parameter for system performance, and the system layout is designed to brake when the accident becomes unavoidable by the vehicle driver. Scenarios with a pedestrian running from behind an obstruction are the most demanding scenarios and will very likely never be avoidable for all vehicle speeds due to physical limits. Scenarios with an unobstructed person walking will very likely be treatable for a wide speed range for next generation AEB systems.
The typological approach to submarine groundwater discharge (SGD)
Bokuniewicz, H.; Buddemeier, R.; Maxwell, B.; Smith, C.
2003-01-01
Coastal zone managers need to factor submarine groundwater discharge (SGD) in their integration. SGD provides a pathway for the transfer of freshwater, and its dissolved chemical burden, from the land to the coastal ocean. SGD reduces salinities and provides nutrients to specialized coastal habitats. It also can be a pollutant source, often undetected, causing eutrophication and triggering nuisance algal blooms. Despite its importance, SGD remains somewhat of a mystery in most places because it is usually unseen and difficult to measure. SGD has been directly measured at only about a hundred sites worldwide. A typology generated by the Land-Ocean Interaction in the Coastal Zone (LOICZ) Project is one of the few tools globally available to coastal resource managers for identifying areas in their jurisdiction where SGD may be a confounding process. (LOICZ is a core project of the International Geosphere/Biosphere Programme.) Of the hundreds of globally distributed parameters in the LOICZ typology, a SGD subset of potentially relevant parameters may be culled. A quantitative combination of the relevant hydrological parameters can serve as a proxy for the SGD conditions not directly measured. Web-LOICZ View, geospatial software then provides an automated approach to clustering these data into groups of locations that have similar characteristics. It permits selection of variables, of the number of clusters desired, and of the clustering criteria, and provides means of testing predictive results against independent variables. Information on the occurrence of a variety of SGD indicators can then be incorporated into regional clustering analysis. With such tools, coastal managers can focus attention on the most likely sites of SGD in their jurisdiction and design the necessary measurement and modeling programs needed for integrated management.
Extraction of Volatiles from Regolith or Soil on Mars, the Moon, and Asteroids
NASA Technical Reports Server (NTRS)
Linne, Diane; Kleinhenz, Julie; Trunek, Andrew; Hoffman, Stephen; Collins, Jacob
2017-01-01
NASA's Advanced Exploration Systems ISRU Technology Project is evaluating concepts to extract water from all resource types Near-term objectives: Produce high-fidelity mass, power, and volume estimates for mining and processing systems Identify critical challenges for development focus Begin demonstration of component and subsystem technologies in relevant environment Several processor types: Closed processors either partially or completely sealed during processing Open air processors operates at Mars ambient conditions In-situ processors Extract product directly without excavation of raw resource Design features Elimination of sweep gas reduces dust particles in water condensate Pressure maintained by height of soil in hopper Model developed to evaluate key design parameters Geometry: conveyor diameter, screw diameter, shaft diameter, flight spacing and pitch Operational: screw speed vs. screw length (residence time) Thermal: Heat flux, heat transfer to soil Testing to demonstrate feasibility and performance Agglomeration, clogging Pressure rise forced flow to condenser.
Strategy for determination of LOD and LOQ values--some basic aspects.
Uhrovčík, Jozef
2014-02-01
The paper is devoted to the evaluation of limit of detection (LOD) and limit of quantification (LOQ) values in concentration domain by using 4 different approaches; namely 3σ and 10σ approaches, ULA2 approach, PBA approach and MDL approach. Brief theoretical analyses of all above mentioned approaches are given together with directions for their practical use. Calculations and correct calibration design are exemplified by using of electrothermal atomic absorption spectrometry for determination of lead in drinking water sample. These validation parameters reached 1.6 μg L(-1) (LOD) and 5.4 μg L(-1) (LOQ) by using 3σ and 10σ approaches. For obtaining relevant values of analyte concentration the influence of calibration design and measurement methodology were examined. The most preferred technique has proven to be a method of preconcentration of the analyte on the surface of the graphite cuvette (boost cycle). © 2013 Elsevier B.V. All rights reserved.
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a more optimal approach to a DTCO study in terms of accuracy and speed compared to a Cahn Hilliard Equation solver. DO's shorter run time (10X to 100X faster) and linear scaling makes it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The calibration pattern's design is extremely important when temporal data is absent to obtain robust model parameters. A model calibrated to a Hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agree qualitatively well with theoretical predictions from a strong segregation theory.
Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model
NASA Technical Reports Server (NTRS)
Kazeminezhad, F.; Anghaie, S.
2008-01-01
Analysis is provided in this report of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design from the onset of shock generations to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collisions.
The Comet Halley dust and gas environment
NASA Technical Reports Server (NTRS)
Divine, N.; Hanner, M. S.; Newburn, R. L., Jr.; Sekanina, Z.; Yeomans, D. K.
1986-01-01
Quantitative descriptions of environments near the nucleus of comet P/Halley have been developed to support spacecraft and mission design for the flyby encounters in March, 1986. To summarize these models as they exist just before the encounters, the relevant data from prior Halley apparitions and from recent cometary research are reviewed. Orbital elements, visual magnitudes, and parameter values and analysis for the nucleus, gas and dust are combined to predict Halley's position, production rates, gas and dust distributions, and electromagnetic radiation field for the current perihelion passage. The predicted numerical results have been useful for estimating likely spacecraft effects, such as impact damage and attitude perturbations. Sample applications are cited, including design of a dust shield for spacecraft structure, and threshold and dynamic range selection for flight experiments. It is expected that the comet's activity may be more irregular than these smoothly varying models predict, and that comparison with the flyby data will be instructive.
Design, clinical translation and immunological response of biomaterials in regenerative medicine
NASA Astrophysics Data System (ADS)
Sadtler, Kaitlyn; Singh, Anirudha; Wolf, Matthew T.; Wang, Xiaokun; Pardoll, Drew M.; Elisseeff, Jennifer H.
2016-07-01
The field of regenerative medicine aims to replace tissues lost as a consequence of disease, trauma or congenital abnormalities. Biomaterials serve as scaffolds for regenerative medicine to deliver cells, provide biological signals and physical support, and mobilize endogenous cells to repair tissues. Sophisticated chemistries are used to synthesize materials that mimic and modulate native tissue microenvironments, to replace form and to elucidate structure-function relationships of cell-material interactions. The therapeutic relevance of these biomaterial properties can only be studied after clinical translation, whereby key parameters for efficacy can be defined and then used for future design. In this Review, we present the development and translation of biomaterials for two tissue engineering targets, cartilage and cornea, both of which lack the ability to self-repair. Finally, looking to the future, we discuss the role of the immune system in regeneration and the potential for biomaterial scaffolds to modulate immune signalling to create a pro-regenerative environment.
Cosmic reionization on computers. I. Design and calibration of simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov
Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from the first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h {sup –1} Mpc and 40 h {sup –1} Mpc boxes with spatial resolutionmore » reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.« less
Comparative studies of perceived vibration strength for commercial mobile phones.
Lee, Heow Pueh; Lim, Siak Piang
2014-05-01
A mobile phone, also known as cell phone or hand phone, is among the most popular electrical devices used by people all over the world. The present study examines the vibration perception of mobile phones by co-relating the relevant design parameters such as excitation frequency, and size and mass of mobile phones to the vibration perception survey by volunteers. Five popular commercially available mobile phone models were tested. The main findings for the perception surveys were that higher vibration frequency and amplitude of the peak acceleration would result in stronger vibration perception of the mobile phones. A larger contact surface area with the palms and figures, higher peak acceleration and the associated larger peak inertia force may be the main factors for the relatively higher vibration perception. The future design for the vibration alert of the mobile phones is likely to follow this trend. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Ionisation induced collapse of minihaloes
NASA Astrophysics Data System (ADS)
Back, Trevor
2013-08-01
In order to analyse the turbine blade life, the damage due to the combined thermal and mechanical loads should be adequately accounted for. This is more challenging when detailed component geometry is limited. Therefore, a compromise between the level of geometric detail and the complexity of the lifing method to be implemented would be necessary. This research focuses on how the life assessment of aero engine turbine blades can be done, considering the balance between available design inputs and adequate level of fidelity. Accordingly, the thesis contributes to developing a generic turbine blade lifing method that is based on the engine thermodynamic cycle; as well as integrating critical design/technological factors and operational parameters that influence the aero engine blade life. To this end, thermo-mechanical fatigue was identified as the critical damage phenomenon driving the life of the turbine blade.. The developed approach integrates software tools and numerical models created using the minimum design information typically available at the early design stages. Using finite element analysis of an idealised blade geometry, the approach captures relevant impacts of thermal gradients and thermal stresses that contribute to the thermo-mechanical fatigue damage on the gas turbine blade. The blade life is evaluated using the Neu/Sehitoglu thermo-mechanical fatigue model that considers damage accumulation due to fatigue, oxidation, and creep. The leading edge is examined as a critical part of the blade to estimate the damage severity for different design factors and operational parameters. The outputs of the research can be used to better understand how the environment and the operating conditions of the aircraft affect the blade life consumption and therefore what is the impact on the maintenance cost and the availability of the propulsion system. This research also finds that the environmental (oxidation) effect drives the blade life and the blade coolant side was the critical location. Furthermore, a parametric and sensitivity study of the Neu/Sehitoglu model parameters suggests that in addition to four previously reported parameters, the sensitivity of the phasing to oxidation damage would be critical to overall blade life..
Team X Spacecraft Instrument Database Consolidation
NASA Technical Reports Server (NTRS)
Wallenstein, Kelly A.
2005-01-01
In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past 2 months, efforts have been made in an attempt to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. By making this collection of information more organized, the files within them have become more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
Conceptual design studies of the Electron Cyclotron launcher for DEMO reactor
NASA Astrophysics Data System (ADS)
Moro, Alessandro; Bruschi, Alex; Franke, Thomas; Garavaglia, Saul; Granucci, Gustavo; Grossetti, Giovanni; Hizanidis, Kyriakos; Tigelis, Ioannis; Tran, Minh-Quang; Tsironis, Christos
2017-10-01
A demonstration fusion power plant (DEMO) producing electricity for the grid at the level of a few hundred megawatts is included in the European Roadmap [1]. The engineering design and R&D for the electron cyclotron (EC), ion cyclotron and neutral beam systems for the DEMO reactor is being performed by Work Package Heating and Current Drive (WPHCD) in the framework of EUROfusion Consortium activities. The EC target power to the plasma is about 50 MW, in which the required power for NTM control and burn control is included. EC launcher conceptual design studies are here presented, showing how the main design drivers of the system have been taken into account (physics requirements, reactor relevant operations, issues related to its integration as in-vessel components). Different options for the antenna are studied in a parameters space including a selection of frequencies, injection angles and launch points to get the best performances for the antenna configuration, using beam tracing calculations to evaluate plasma accessibility and deposited power. This conceptual design studies comes up with the identification of possible limits, constraints and critical issues, essential in the selection process of launcher setup solution.
New conducted electrical weapons: Electrical safety relative to relevant standards.
Panescu, Dorin; Nerheim, Max; Kroll, Mark W; Brave, Michael A
2017-07-01
We have previously published about TASER ® conducted electrical weapons (CEW) compliance with international standards. CEWs deliver electrical pulses that can inhibit a person's neuromuscular control or temporarily incapacitate. An eXperimental Rotating-Field (XRF) waveform CEW and the X2 CEW are new 2-shot electrical weapon models designed to target a precise amount of delivered charge per pulse. They both can deploy 1 or 2 dart pairs, delivered by 2 separate cartridges. Additionally, the XRF controls delivery of incapacitating pulses over 4 field vectors, in a rotating sequence. As in our previous study, we were motivated by the need to understand the cardiac safety profile of these new CEWs. The goal of this paper is to analyze the nominal electrical outputs of TASER XRF and X2 CEWs in reference to provisions of all relevant international standards that specify safety requirements for electrical medical devices and electrical fences. Although these standards do not specifically mention CEWs, they are the closest electrical safety standards and hence give very relevant guidance. The outputs of several TASER XRF and X2 CEWs were measured under normal operating conditions. The measurements were compared against manufacturer specifications. CEWs electrical output parameters were reviewed against relevant safety requirements of UL 69, IEC 60335-2-76 Ed 2.1, IEC 60479-1, IEC 60479-2, AS/NZS 60479.1, AS/NZS 60479.2, IEC 60601-1 and BS EN 60601-1. Our study confirmed that the nominal electrical outputs of TASER XRF and X2 CEWs lie within safety bounds specified by relevant standards.
Quantitative interpretations of Visible-NIR reflectance spectra of blood.
Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H
2008-10-27
This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. This new model is based on the photon diffusion theory and Mie scattering theory that have been formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of the thorough solution of the scattering and absorption problem in order to accurately resolve for optically relevant parameters of blood culture components. With advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of blood culture components are needed. Second, multi-wavelength measurements or at least the measurements of characteristic wavelengths equal to the degrees of freedom, i.e. number of optically relevant parameters, of blood culture system are required. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples characterized by an extensive range of the relevant parameters.
Band excitation method applicable to scanning probe microscopy
Jesse, Stephen [Knoxville, TN; Kalinin, Sergei V [Knoxville, TN
2010-08-17
Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
Band excitation method applicable to scanning probe microscopy
Jesse, Stephen; Kalinin, Sergei V
2013-05-28
Methods and apparatus are described for scanning probe microscopy. A method includes generating a band excitation (BE) signal having finite and predefined amplitude and phase spectrum in at least a first predefined frequency band; exciting a probe using the band excitation signal; obtaining data by measuring a response of the probe in at least a second predefined frequency band; and extracting at least one relevant dynamic parameter of the response of the probe in a predefined range including analyzing the obtained data. The BE signal can be synthesized prior to imaging (static band excitation), or adjusted at each pixel or spectroscopy step to accommodate changes in sample properties (adaptive band excitation). An apparatus includes a band excitation signal generator; a probe coupled to the band excitation signal generator; a detector coupled to the probe; and a relevant dynamic parameter extractor component coupled to the detector, the relevant dynamic parameter extractor including a processor that performs a mathematical transform selected from the group consisting of an integral transform and a discrete transform.
The Role of Transfer in Designing Games and Simulations for Health: Systematic Review.
Kuipers, Derek A; Terlouw, Gijs; Wartena, Bard O; van 't Veer, Job Tb; Prins, Jelle T; Pierie, Jean Pierre En
2017-11-24
The usefulness and importance of serious games and simulations in learning and behavior change for health and health-related issues are widely recognized. Studies have addressed games and simulations as interventions, mostly in comparison with their analog counterparts. Numerous complex design choices have to be made with serious games and simulations for health, including choices that directly contribute to the effects of the intervention. One of these decisions is the way an intervention is expected to lead to desirable transfer effects. Most designs adopt a first-class transfer rationale, whereas the second class of transfer types seems a rarity in serious games and simulations for health. This study sought to review the literature specifically on the second class of transfer types in the design of serious games and simulations. Focusing on game-like interventions for health and health care, this study aimed to (1) determine whether the second class of transfer is recognized as a road for transfer in game-like interventions, (2) review the application of the second class of transfer type in designing game-like interventions, and (3) assess studies that include second-class transfer types reporting transfer outcomes. A total of 6 Web-based databases were systematically searched by titles, abstracts, and keywords using the search strategy (video games OR game OR games OR gaming OR computer simulation*) AND (software design OR design) AND (fidelity OR fidelities OR transfer* OR behaviour OR behavior). The databases searched were identified as relevant to health, education, and social science. A total of 15 relevant studies were included, covering a range of game-like interventions, all more or less mentioning design parameters aimed at transfer. We found 9 studies where first-class transfer was part of the design of the intervention. In total, 8 studies dealt with transfer concepts and fidelity types in game-like intervention design in general; 3 studies dealt with the concept of second-class transfer types and reported effects, and 2 of those recognized transfer as a design parameter. In studies on game-like interventions for health and health care, transfer is regarded as a desirable effect but not as a basic principle for design. None of the studies determined the second class of transfer or instances thereof, although in 3 cases a nonliteral transfer type was present. We also found that studies on game-like interventions for health do not elucidate design choices made and rarely provide design principles for future work. Games and simulations for health abundantly build upon the principles of first-class transfer, but the adoption of second-class transfer types proves scarce. It is likely to be worthwhile to explore the possibilities of second-class transfer types, as they may considerably influence educational objectives in terms of future serious game design for health. ©Derek A Kuipers, Gijs Terlouw, Bard O Wartena, Job TB van 't Veer, Jelle T Prins, Jean Pierre EN Pierie. Originally published in JMIR Serious Games (http://games.jmir.org), 24.11.2017.
Initiation of a Database of CEUS Ground Motions for NGA East
NASA Astrophysics Data System (ADS)
Cramer, C. H.
2007-12-01
The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
Aguirrebeitia, Josu; Abasolo, Mikel; Müftü, Sinan; Vallejo, Javier
2017-04-01
A previous study investigated the effects of the preload and taper-angle mismatch in tapered implant systems on the removal force characteristics of the self-locking mechanism. The present study builds upon the previous one and introduces the effects of the time elapsed between insertion and removal and the presence of saliva in the implant-abutment interface as 2 new additional parameters. The purpose of this in vitro study was to elucidate the influences of design and clinical parameters on the removal force for implant systems that use tapered interference fit (TIF) type connections by measuring the force needed to remove an abutment from an implant. Ninety-six implants with tapered abutment-implant interfaces specifically built for an unreplicated factorial design were tested on a custom-built workbench for removal force. Four levels were chosen for the preload, F P , and the taper mismatch Δθ; 3 levels for the wait time t; and 2 levels for the saliva presence s at the interface. A regression model was used based on physical reasoning and a theoretical understanding of the interface. A 4-way ANOVA was used to evaluate the influence of the main effects and interactions (α=.05). The experiments strongly indicated that preload, taper mismatch, and saliva presence are relevant variables in removal force. The wait time becomes important when its effect is evaluated along with the preload. The results of this study can be used for decision making in the design and use of TIF type systems. The study supports the use of artificial saliva in any implant design experiment because of its significance in the removal force of the abutment. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Nelson, Stacy; English, Shawn; Briggs, Timothy
2016-05-06
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be definedmore » through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.« less
NASA Astrophysics Data System (ADS)
Bertoldi, Giacomo; Cordano, Emanuele; Brenner, Johannes; Senoner, Samuel; Della Chiesa, Stefano; Niedrist, Georg
2017-04-01
In mountain regions, the plot- and catchment-scale water and energy budgets are controlled by a complex interplay of different abiotic (i.e. topography, geology, climate) and biotic (i.e. vegetation, land management) controlling factors. When integrated, physically-based eco-hydrological models are used in mountain areas, there are a large number of parameters, topographic and boundary conditions that need to be chosen. However, data on soil and land-cover properties are relatively scarce and do not reflect the strong variability at the local scale. For this reason, tools for uncertainty quantification and optimal parameters identification are essential not only to improve model performances, but also to identify most relevant parameters to be measured in the field and to evaluate the impact of different assumptions for topographic and boundary conditions (surface, lateral and subsurface water and energy fluxes), which are usually unknown. In this contribution, we present the results of a sensitivity analysis exercise for a set of 20 experimental stations located in the Italian Alps, representative of different conditions in terms of topography (elevation, slope, aspect), land use (pastures, meadows, and apple orchards), soil type and groundwater influence. Besides micrometeorological parameters, each station provides soil water content at different depths, and in three stations (one for each land cover) eddy covariance fluxes. The aims of this work are: (I) To present an approach for improving calibration of plot-scale soil moisture and evapotranspiration (ET). (II) To identify the most sensitive parameters and relevant factors controlling temporal and spatial differences among sites. (III) Identify possible model structural deficiencies or uncertainties in boundary conditions. Simulations have been performed with the GEOtop 2.0 model, which is a physically-based, fully distributed integrated eco-hydrological model that has been specifically designed for mountain regions, since it considers the effect of topography on radiation and water fluxes and integrates a snow module. A new automatic sensitivity and optimization tool based on the Particle Swarm Optimization theory has been developed, available as R package on https://github.com/EURAC-Ecohydro/geotopOptim2. The model, once calibrated for soil and vegetation parameters, predicts the plot-scale temporal SMC dynamics of SMC and ET with a RMSE of about 0.05 m3/m3 and 40 W/m2, respectively. However, the model tends to underestimate ET during summer months over apple orchards. Results show how most sensitive parameters are both soil and canopy structural properties. However, ranking is affected by the choice of the target function and local topographic conditions. In particular, local slope/aspect influences results in stations located over hillslopes, but with marked seasonal differences. Results for locations in the valley floor are strongly controlled by the choice of the bottom water flux boundary condition. The poorer model performances in simulating ET over apple orchards could be explained by a model structural deficiency in representing the stomatal control on vapor pressure deficit for this particular type of vegetation. The results of this sensitivity could be extended to other physically distributed models, and also provide valuable insights for optimizing new experimental designs.
Eddison, Nicola; Chockalingam, Nachiappan
2013-04-01
There are a wide variety of ankle foot orthoses used in clinical practice which are characterised by their design, the material used and the stiffness of that material. Changing any of these three components will alter the effect of the ankle foot orthosis on gait. The purpose of this article is to provide an overview on the available research on ankle foot orthosis-footwear combination tuning on the gait characteristics of children with cerebral palsy through a structured review. Literature review. A thorough search of previous studies published in English was conducted within all major databases using relevant phrases without any limits for the dates. These searches were then supplemented by tracking all key references from the appropriate articles identified including hand searching of published books where relevant. To date, there are 947 papers in the literature pertaining to the study of ankle foot orthosis. Of these, 153 investigated the use of ankle foot orthosis for children with cerebral palsy. All the studies included in this review were of a within-subjects design and the evidence levels were generally low. The overall results suggested that ankle foot orthosis-footwear combination tuning has the potential to improve the kinematics and kinetics of gait in children with cerebral palsy. However, the review highlights a lack of well-designed and adequately powered studies. Clinical relevance While the research described in this article indicates an improvement in the gait of children with cerebral palsy following tuning of their ankle foot orthosis-footwear combination, there is still a paucity of research with quantitative data on the effects of kinematics and kinetics of ankle foot orthosis-footwear combination tuning, comparing untuned ankle foot orthosis-footwear combinations with tuned ankle foot orthosis-footwear combination. Furthermore, current research does not identify the effect of tuning on energy efficiency.
Research on Product Conceptual Design Based on Integrated of TRIZ and HOQ
NASA Astrophysics Data System (ADS)
Xie, Jianmin; Tang, Xiaowo; Shao, Yunfei
The conceptual design determines the success of the final product quality and competition of market. The determination of design parameters and the effective method to resolve parameters contradiction are the key to success. In this paper, the concept of HOQ products designed to determine the parameters, then using the TRIZ contradiction matrix and inventive principles of design parameters to solve the problem of contradictions. Facts have proved that the effective method is to obtain the product concept design parameters and to resolve contradictions line parameters.
Melvin, Steven D; Petit, Marie A; Duvignacq, Marion C; Sumpter, John P
2017-08-01
The quality and reproducibility of science has recently come under scrutiny, with criticisms spanning disciplines. In aquatic toxicology, behavioural tests are currently an area of controversy since inconsistent findings have been highlighted and attributed to poor quality science. The problem likely relates to limitations to our understanding of basic behavioural patterns, which can influence our ability to design statistically robust experiments yielding ecologically relevant data. The present study takes a first step towards understanding baseline behaviours in fish, including how basic choices in experimental design might influence behavioural outcomes and interpretations in aquatic toxicology. Specifically, we explored how fish acclimate to behavioural arenas and how different lengths of observation time impact estimates of basic swimming parameters (i.e., average, maximum and angular velocity). We performed a semi-quantitative literature review to place our findings in the context of the published literature describing behavioural tests with fish. Our results demonstrate that fish fundamentally change their swimming behaviour over time, and that acclimation and observational timeframes may therefore have implications for influencing both the ecological relevance and statistical robustness of behavioural toxicity tests. Our review identified 165 studies describing behavioural responses in fish exposed to various stressors, and revealed that the majority of publications documenting fish behavioural responses report extremely brief acclimation times and observational durations, which helps explain inconsistencies identified across studies. We recommend that researchers applying behavioural tests with fish, and other species, apply a similar framework to better understand baseline behaviours and the implications of design choices for influencing study outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
MULTI-OBJECTIVE ONLINE OPTIMIZATION OF BEAM LIFETIME AT APS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yipeng
In this paper, online optimization of beam lifetime at the APS (Advanced Photon Source) storage ring is presented. A general genetic algorithm (GA) is developed and employed for some online optimizations in the APS storage ring. Sextupole magnets in 40 sectors of the APS storage ring are employed as variables for the online nonlinear beam dynamics optimization. The algorithm employs several optimization objectives and is designed to run with topup mode or beam current decay mode. Up to 50\\% improvement of beam lifetime is demonstrated, without affecting the transverse beam sizes and other relevant parameters. In some cases, the top-upmore » injection efficiency is also improved.« less
Dielectric elastomer actuators used for pneumatic valve technology
NASA Astrophysics Data System (ADS)
Giousouf, Metin; Kovacs, Gabor
2013-10-01
Dielectric elastomer actuators have been investigated for applications in the field of pneumatic automation technology. We have developed different valve designs with stacked dielectric elastomer actuators and with integrated high voltage converters. The actuators were made using VHB-4910 material and a stacker machine for automated fabrication of the cylindrical actuators. Typical characteristics of pneumatic valves such as flow rate, power consumption and dynamic behaviour are presented. For valve construction the force and stroke parameters of the dielectric elastomer actuator have been measured. Further, benefits for valve applications using dielectric elastomers are shown as well as their potential operational area. Finally, challenges are discussed that are relevant for the use of elastomer actuators in valves for industrial applications.
Information filtering based on transferring similarity.
Sun, Duo; Zhou, Tao; Liu, Jian-Guo; Liu, Run-Ran; Jia, Chun-Xiao; Wang, Bing-Hong
2009-07-01
In this Brief Report, we propose an index of user similarity, namely, the transferring similarity, which involves all high-order similarities between users. Accordingly, we design a modified collaborative filtering algorithm, which provides remarkably higher accurate predictions than the standard collaborative filtering. More interestingly, we find that the algorithmic performance will approach its optimal value when the parameter, contained in the definition of transferring similarity, gets close to its critical value, before which the series expansion of transferring similarity is convergent and after which it is divergent. Our study is complementary to the one reported in [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)], and is relevant to the missing link prediction problem.
NASA Astrophysics Data System (ADS)
Walbaum, T.; Fallnich, C.
2012-07-01
We present the tuning of multimode interference bandpass filters made of standard fibers by mechanical bending. Our setup allows continuous adjustment of the bending radius from infinity down to about 5 cm. The impact of bending on the transmission spectrum and on polarization is investigated experimentally, and a filter with a continuous tuning range of 13.6 nm and 86 % peak transmission was realized. By use of numerical simulations employing a semi-analytical mode expansion approach, we obtain quantitative understanding of the underlying physics. Further breakdown of the governing equations enables us to identify the fiber parameters that are relevant for the design of customized filters.
Flight Envelope Information-Augmented Display for Enhanced Pilot Situation Awareness
NASA Technical Reports Server (NTRS)
Ackerman, Kasey A.; Seefeldt, Benjamin D.; Xargay, Enric; Talleur, Donald A.; Carbonari, Ronald S.; Kirlik, Alex; Hovakimyan, Naira; Trujillo, Anna C.; Belcastro, Christine M.; Gregory, Irene M.
2015-01-01
This paper presents an interface system display which is conceived to improve pilot situation awareness with respect to a flight envelope protection system developed for a mid-sized transport aircraft. The new display is designed to complement existing cockpit displays, and to augment them with information that relates to both aircraft state and the control automation itself. In particular, the proposed display provides cues about the state of automation directly in terms of pilot control actions, in addition to flight parameters. The paper also describes a forthcoming evaluation test plan that is intended to validate the developed interface by assessing the relevance of the displayed information, as well as the adequacy of the display layout.
Honzík, Petr; Podkovskiy, Alexey; Durand, Stéphane; Joly, Nicolas; Bruneau, Michel
2013-11-01
The main purpose of the paper is to contribute at presenting an analytical and a numerical modeling which would be relevant for interpreting the couplings between a circular membrane, a peripheral cavity having the same external radius as the membrane, and a thin air gap (with a geometrical discontinuity between them), and then to characterize small scale electrostatic receivers and to propose procedures that could be suitable for fitting adjustable parameters to achieve optimal behavior in terms of sensitivity and bandwidth expected. Therefore, comparison between these theoretical methods and characterization of several shapes is dealt with, which show that the models would be appropriate to address the design of such transducers.
A new spectrometer for total reflection X-ray fluorescence analysis of light elements
NASA Astrophysics Data System (ADS)
Streli, Christina; Wobrauschek, Peter; Unfried, Ernst; Aiginger, Hannes
1993-10-01
A new spectrometer for total reflection X-ray fluorescence analysis (TXRF) of light elements as C, N, O, F, Na,… has been designed, constructed and realized. This was done under the aspect of optimizing all relevant parameters for excitation and detection under the conditions of Total Reflection in a vacuum chamber. A commercially available Ge(HP) detector with a diamond window offering a high transparency for low energy radiation was used. As excitation sources a special self-made windowless X-ray tube with Cu-target as well as a standard fine-focus Cr-tube were applied. Detection limits achieved are in the ng range for Carbon and Oxygen.
Quantum Memristors with Superconducting Circuits
Salmilehto, J.; Deppe, F.; Di Ventra, M.; ...
2017-02-14
Memristors are resistive elements retaining information of their past dynamics. They have garnered substantial interest due to their potential for representing a paradigm change in electronics, information processing and unconventional computing. Given the advent of quantum technologies, a design for a quantum memristor with superconducting circuits may be envisaged. Along these lines, we introduce such a quantum device whose memristive behavior arises from quasiparticle-induced tunneling when supercurrents are cancelled. Here in this paper, for realistic parameters, we find that the relevant hysteretic behavior may be observed using current state-of-the-art measurements of the phase-driven tunneling current. Finally, we develop suitable methodsmore » to quantify memory retention in the system.« less
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
Physics Criteria for a Subscale Plasma Liner Experiment
Hsu, Scott C.; Thio, Yong C. Francis
2018-02-02
Spherically imploding plasma liners, formed by merging hypersonic plasma jets, are a proposed standoff driver to compress magnetized target plasmas to fusion conditions (Hsu et al. in IEEE Trans Plasma Sci 40:1287, 2012). Here, in this paper, the parameter space and physics criteria are identified for a subscale, plasma-liner-formation experiment to provide data, e.g., on liner ram-pressure scaling and uniformity, that are relevant for addressing scientific issues of full-scale plasma liners required to achieve fusion conditions. Lastly, based on these criteria, we quantitatively estimate the minimum liner kinetic energy and mass needed, which informed the design of a subscale plasmamore » liner experiment now under development.« less
A fuel cycle assessment guide for utility and state energy planners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-07-01
This guide, one in a series of documents designed to help assess fuel cycles, is a framework for setting parameters, collecting data, and analyzing fuel cycles for supply-side and demand-side management. It provides an automated tool for entering comparative fuel cycle data that are meaningful to state and utility integrated resource planning, collaborative, and regional energy planning activities. It outlines an extensive range of energy technology characteristics and environmental, social, and economic considerations within each stage of a fuel cycle. The guide permits users to focus on specific stages or effects that are relevant to the technology being evaluated andmore » that meet the user`s planning requirements.« less
Observer enhanced control for spin-stabilized tethered formation in earth orbit
NASA Astrophysics Data System (ADS)
Guang, Zhai; Yuyang, Li; Liang, Bin
2018-04-01
This paper addresses the issues relevant to control of spin-stabilized tethered formation in circular orbit. Due to the dynamic complexities and nonlinear perturbations, it is challenging to promote the control precision for the formation deployment and maintenance. In this work, the formation dynamics are derived with considering the spinning rate of the central body, then major attention is dedicated to develop the nonlinear disturbance observer. To achieve better control performance, the observer-enhanced controller is designed by incorporating the disturbance observer into the control loop, benefits from the disturbance compensation are demonstrated, and also, the dependences of the disturbance observer performance on some important parameters are theoretically and numerically analyzed.
Physics Criteria for a Subscale Plasma Liner Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Scott C.; Thio, Yong C. Francis
Spherically imploding plasma liners, formed by merging hypersonic plasma jets, are a proposed standoff driver to compress magnetized target plasmas to fusion conditions (Hsu et al. in IEEE Trans Plasma Sci 40:1287, 2012). Here, in this paper, the parameter space and physics criteria are identified for a subscale, plasma-liner-formation experiment to provide data, e.g., on liner ram-pressure scaling and uniformity, that are relevant for addressing scientific issues of full-scale plasma liners required to achieve fusion conditions. Lastly, based on these criteria, we quantitatively estimate the minimum liner kinetic energy and mass needed, which informed the design of a subscale plasmamore » liner experiment now under development.« less
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrial countries the number of such contaminated sites is tremendously high to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model set ups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physical meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, L.; Yang, K.; Chen, Z.
1999-07-01
The distribution of solar radiant energy inside the specific air-conditioned automobile chamber is studied on the basis of the unique wavelength spectrum. Some important optical parameters of the internal materials are mostly determined by experiments with monochromator, electron-multiplier phototube, etc. Some optical parameters of the thin transparent object are analyzed theoretically. Based on random model, Monte Carlo method is adopted to get the detailed distribution of solar radiant energy. The procedures of absorptivity, reflection and transmission of each ray are simulated and traced during the calculation. The universal software calculates two cases with different kind of glass. The relevant resultsmore » show the importance of solar radiant energy on the thermal environment inside the air-conditioned automobile chamber. Furthermore, the necessity of shield quality of the automobile glass is also obvious. This study is also the basis of the following researches on fluid and temperature fields. The results are also useful for further thermal comfort design.« less
Exclusive queueing model including the choice of service windows
NASA Astrophysics Data System (ADS)
Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro
2018-01-01
In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
Active stability augmentation of large space structures: A stochastic control problem
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1987-01-01
A problem in SCOLE is that of slewing an offset antenna on a long flexible beam-like truss attached to the space shuttle, with rather stringent pointing accuracy requirements. The relevant methodology aspects in robust feedback-control design for stability augmentation of the beam using on-board sensors is examined. It is framed as a stochastic control problem, boundary control of a distributed parameter system described by partial differential equations. While the framework is mathematical, the emphasis is still on an engineering solution. An abstract mathematical formulation is developed as a nonlinear wave equation in a Hilbert space. That the system is controllable is shown and a feedback control law that is robust in the sense that it does not require quantitative knowledge of system parameters is developed. The stochastic control problem that arises in instrumenting this law using appropriate sensors is treated. Using an engineering first approximation which is valid for small damping, formulas for optimal choice of the control gain are developed.
Barreira, João C M; Casal, Susana; Ferreira, Isabel C F R; Peres, António M; Pereira, José Alberto; Oliveira, M Beatriz P P
2012-09-26
Almonds harvested in three years in Trás-os-Montes (Portugal) were characterized to find differences among Protected Designation of Origin (PDO) Amêndoa Douro and commercial non-PDO cultivars. Nutritional parameters, fiber (neutral and acid detergent fibers, acid detergent lignin, and cellulose), fatty acids, triacylglycerols (TAG), and tocopherols were evaluated. Fat was the major component, followed by carbohydrates, protein, and moisture. Fatty acids were mostly detected as monounsaturated and polyunsaturated forms, with relevance of oleic and linoleic acids. Accordingly, 1,2,3-trioleoylglycerol and 1,2-dioleoyl-3-linoleoylglycerol were the major TAG. α-Tocopherol was the leading tocopherol. To verify statistical differences among PDO and non-PDO cultivars independent of the harvest year, data were analyzed through an analysis of variance, a principal component analysis, and a linear discriminant analysis (LDA). These differences identified classification parameters, providing an important tool for authenticity purposes. The best results were achieved with TAG analysis coupled with LDA, which proved its effectiveness to discriminate almond cultivars.
Improvement of water treatment pilot plant with Moringa oleifera extract as flocculant agent.
Beltrán-Heredia, J; Sánchez-Martín, J
2009-05-01
Moringa oleifera extract is a high-capacity flocculant agent for turbidity removal in surface water treatment. A complete study of a pilot-plant installation has been carried out. Because of flocculent sedimentability of treated water, a residual turbidity occured in the pilot plant (around 30 NTU), which could not be reduced just by a coagulation-flocculation-sedimentation process. Because of this limitation, the pilot plant (excluded filtration) achieved a turbidity removal up to 70%. A slow sand filter was put in as a complement to installation. A clogging process was characterized, according to Carman-Kozeny's hydraulic hypothesis. Kozeny's k parameter was found to be 4.18. Through fouling stages, this k parameter was found to be up to 6.36. The obtained data are relevant for the design of a real filter in a continuous-feeding pilot plant. Slow sand filtration is highly recommended owing to its low cost, easy-handling and low maintenance, so it is a very good complement to Moringa water treatment in developing countries.
Synchrony suppression in ensembles of coupled oscillators via adaptive vanishing feedback.
Montaseri, Ghazal; Yazdanpanah, Mohammad Javad; Pikovsky, Arkady; Rosenblum, Michael
2013-09-01
Synchronization and emergence of a collective mode is a general phenomenon, frequently observed in ensembles of coupled self-sustained oscillators of various natures. In several circumstances, in particular in cases of neurological pathologies, this state of the active medium is undesirable. Destruction of this state by a specially designed stimulation is a challenge of high clinical relevance. Typically, the precise effect of an external action on the ensemble is unknown, since the microscopic description of the oscillators and their interactions are not available. We show that, desynchronization in case of a large degree of uncertainty about important features of the system is nevertheless possible; it can be achieved by virtue of a feedback loop with an additional adaptation of parameters. The adaptation also ensures desynchronization of ensembles with non-stationary, time-varying parameters. We perform the stability analysis of the feedback-controlled system and demonstrate efficient destruction of synchrony for several models, including those of spiking and bursting neurons.
Synchrony suppression in ensembles of coupled oscillators via adaptive vanishing feedback
NASA Astrophysics Data System (ADS)
Montaseri, Ghazal; Javad Yazdanpanah, Mohammad; Pikovsky, Arkady; Rosenblum, Michael
2013-09-01
Synchronization and emergence of a collective mode is a general phenomenon, frequently observed in ensembles of coupled self-sustained oscillators of various natures. In several circumstances, in particular in cases of neurological pathologies, this state of the active medium is undesirable. Destruction of this state by a specially designed stimulation is a challenge of high clinical relevance. Typically, the precise effect of an external action on the ensemble is unknown, since the microscopic description of the oscillators and their interactions are not available. We show that, desynchronization in case of a large degree of uncertainty about important features of the system is nevertheless possible; it can be achieved by virtue of a feedback loop with an additional adaptation of parameters. The adaptation also ensures desynchronization of ensembles with non-stationary, time-varying parameters. We perform the stability analysis of the feedback-controlled system and demonstrate efficient destruction of synchrony for several models, including those of spiking and bursting neurons.
Numerical study on injection parameters optimization of thin wall and biodegradable polymers parts
NASA Astrophysics Data System (ADS)
Santos, C.; Mendes, A.; Carreira, P.; Mateus, A.; Malça, C.
2017-07-01
Nowadays, the molds industry searches new markets, with diversified and added value products. The concept associated to the production of thin walled and biodegradable parts mostly manufactured by injection process has assumed a relevant importance due to environmental and economic factors. The growth of a global consciousness about the harmful effects of the conventional polymers in our life quality associated with the legislation imposed, become key factors for the choice of a particular product by the consumer. The target of this work is to provide an integrated solution for the injection of parts with thin walls and manufactured using biodegradable materials. This integrated solution includes the design and manufacture processes of the mold as well as to find the optimum values for the injection parameters in order to become the process effective and competitive. For this, the Moldflow software was used. It was demonstrated that this computational tool provides an effective responsiveness and it can constitute an important tool in supporting the injection molding of thin-walled and biodegradable parts.
Scaling and Systems Considerations in Pulsed Inductive Thrusters
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.
2007-01-01
Performance scaling in pulsed inductive thrusters is discussed in the context of previous experimental studies and modeling results. Two processes, propellant ionization and acceleration, are interconnected where overall thruster performance and operation are concerned, but they are separated here to gain physical insight into each process and arrive at quantitative criteria that should be met to address or mitigate inherent inductive thruster difficulties. The effects of preionization in lowering the discharge energy requirements relative to a case where no preionization is employed, and in influencing the location of the initial current sheet, are described. The relevant performance scaling parameters for the acceleration stage are reviewed, emphasizing their physical importance and the numerical values required for efficient acceleration. The scaling parameters are then related to the design of the pulsed power train providing current to the acceleration stage. The impact of various choices in pulsed power train and circuit topology selection are reviewed, paying special attention to how these choices mitigate or exacerbate switching, lifetime, and power consumption issues.
Pettersson, Martin; Hou, Xinjun; Kuhn, Max; Wager, Travis T; Kauffman, Gregory W; Verhoest, Patrick R
2016-06-09
Strategic replacement of one or more hydrogen atoms with fluorine atom(s) is a common tactic to improve potency at a given target and/or to modulate parameters such as metabolic stability and pKa. Molecular weight (MW) is a key parameter in design, and incorporation of fluorine is associated with a disproportionate increase in MW considering the van der Waals radius of fluorine versus hydrogen. Herein we examine a large compound data set to understand the effect of introducing fluorine on the risk of encountering P-glycoprotein mediated efflux (as measured by MDR efflux ratio), passive permeability, lipophilicity, and metabolic stability. Statistical modeling of the MDR ER data demonstrated that an increase in MW as a result of introducing fluorine atoms does not lead to higher risk of P-gp mediated efflux. Fluorine-corrected molecular weight (MWFC), where the molecular weight of fluorine has been subtracted, was found to be a more relevant descriptor.
Bandgaps and directional propagation of elastic waves in 2D square zigzag lattice structures
NASA Astrophysics Data System (ADS)
Wang, Yan-Feng; Wang, Yue-Sheng; Zhang, Chuanzeng
2014-12-01
In this paper we propose various types of two-dimensional (2D) square zigzag lattice structures, and we study their bandgaps and directional propagation of elastic waves. The band structures and the transmission spectra of the systems are calculated by using the finite element method. The effects of the geometry parameters of the 2D-zigzag lattices on the bandgaps are investigated and discussed. The mechanism of the bandgap generation is analyzed by studying the vibration modes at the bandgap edges. Multiple wide complete bandgaps are found in a wide porosity range owing to the separation of the degeneracy by introducing bending arms. The bandgaps are sensitive to the geometry parameters of the systems. The deformed displacement fields of the transient response of finite structures subjected to time-harmonic loads are presented to show the directional wave propagation. The research in this paper is relevant to the practical design of cellular structures with enhanced vibro-acoustics performance.
Support for fast comprehension of ICU data: visualization using metaphor graphics.
Horn, W; Popow, C; Unterasinger, L
2001-01-01
The time-oriented analysis of electronic patient records on (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repetition of the graphic object in 24 frames the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system which uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor graphic object. Each object visualizes important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, continuous improvement or worsening condition are easy to analyze, drastic changes in the patient's situation get the viewers attention immediately.
Retrieving relevant time-course experiments: a study on Arabidopsis microarrays.
Şener, Duygu Dede; Oğul, Hasan
2016-06-01
Understanding time-course regulation of genes in response to a stimulus is a major concern in current systems biology. The problem is usually approached by computational methods to model the gene behaviour or its networked interactions with the others by a set of latent parameters. The model parameters can be estimated through a meta-analysis of available data obtained from other relevant experiments. The key question here is how to find the relevant experiments which are potentially useful in analysing current data. In this study, the authors address this problem in the context of time-course gene expression experiments from an information retrieval perspective. To this end, they introduce a computational framework that takes a time-course experiment as a query and reports a list of relevant experiments retrieved from a given repository. These retrieved experiments can then be used to associate the environmental factors of query experiment with the findings previously reported. The model is tested using a set of time-course Arabidopsis microarrays. The experimental results show that relevant experiments can be successfully retrieved based on content similarity.
Boily, Michaël; Dussault, Catherine; Massicotte, Julie; Guibord, Pascal; Lefebvre, Marc
2015-01-23
To demonstrate bioequivalence (BE) between two prolonged-release (PR) drug formulations, single dose studies under fasting and fed state as well as at least one steady-state study are currently required by the European Medicines Agency (EMA). Recently, however, there have been debates regarding the relevance of steady-state studies. New requirements in single-dose investigations have also been suggested by the EMA to address the absence of a parameter that can adequately assess the equivalence of the shape of the curves. In the draft guideline issued in 2013, new partial area under the curve (pAUC) pharmacokinetic (PK) parameters were introduced to that effect. In light of these potential changes, there is a need of supportive clinical evidence to evaluate the impact of pAUCs on the evaluation of BE between PR formulations. In this retrospective analysis, it was investigated whether the newly defined parameters were associated with an increase in discriminatory ability or a change in variability compared to the conventional PK parameters. Among the single dose studies that met the requirements already in place, 20% were found unable to meet the EMA's new requirements in regards to the pAUC PK parameters. When pairing fasting and fed studies for a same formulation, the failure rate increased to 40%. In some cases, due to the high variability of these parameters, an increase of the sample size would be required to prove BE. In other cases however, the pAUC parameters demonstrated a robust ability to detect differences between the shapes of the curves of PR formulations. The present analysis should help to better understand the impact of the upcoming changes in European regulations on PR formulations and in the design of future BE studies. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rai, P.; Gautam, N.; Chandra, H.
2018-06-01
This work deals with the analysis and modification of operational parameters for meeting the emission standards, set by Central Pollution Control Board (CPCB)/State Pollution Control Board (SPCB) from time to time of electrostatic precipitator (ESP). The analysis is carried out by using standard chemical analysis supplemented by the relevant data collected from Korba East Phase (Ph)-III thermal power plant, under Chhattisgarh State Electricity Board (CSEB) operating at Korba, Chhattisgarh. Chemical analysis is used to predict the emission level for different parameters of ESP. The results reveal that for a constant outlet PM concentration and fly ash percentage, the total collection area decreases with the increase in migration velocity. For constant migration velocity and outlet PM concentration, the total collection area increases with the increase in the fly ash percent. For constant migration velocity and outlet e PM concentration, the total collection area increases with the ash content in the coal. i.e. from minimum ash to maximum ash. As far as the efficiency is concerned, it increases with the fly ash percent, ash content and the inlet dust concentration but decreases with the outlet PM concentration at constant migration velocity, fly ash and ash content.
Hanke, Alexander T; Tsintavi, Eleni; Ramirez Vazquez, Maria Del Pilar; van der Wielen, Luuk A M; Verhaert, Peter D E M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2016-09-01
Knowledge-based development of chromatographic separation processes requires efficient techniques to determine the physicochemical properties of the product and the impurities to be removed. These characterization techniques are usually divided into approaches that determine molecular properties, such as charge, hydrophobicity and size, or molecular interactions with auxiliary materials, commonly in the form of adsorption isotherms. In this study we demonstrate the application of a three-dimensional liquid chromatography approach to a clarified cell homogenate containing a therapeutic enzyme. Each separation dimension determines a molecular property relevant to the chromatographic behavior of each component. Matching of the peaks across the different separation dimensions and against a high-resolution reference chromatogram allows to assign the determined parameters to pseudo-components, allowing to determine the most promising technique for the removal of each impurity. More detailed process design using mechanistic models requires isotherm parameters. For this purpose, the second dimension consists of multiple linear gradient separations on columns in a high-throughput screening compatible format, that allow regression of isotherm parameters with an average standard error of 8%. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1283-1291, 2016. © 2016 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Tempeler, J.; Danylyuk, S.; Brose, S.; Loosen, P.; Juschkin, L.
2018-07-01
In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 1010 cm‑2.
Facial Video-Based Photoplethysmography to Detect HRV at Rest.
Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L
2015-06-01
Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG, based on facial video recording on 20 individuals. Data analysis and editing were performed with individually designated software for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect size from ANOVA and Bland and Altman plots. For supine position, differences between video and Polar systems showed a small effect size in most HRV parameters. For sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contained more heart beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Rai, P.; Gautam, N.; Chandra, H.
2018-02-01
This work deals with the analysis and modification of operational parameters for meeting the emission standards, set by Central Pollution Control Board (CPCB)/State Pollution Control Board (SPCB) from time to time of electrostatic precipitator (ESP). The analysis is carried out by using standard chemical analysis supplemented by the relevant data collected from Korba East Phase (Ph)-III thermal power plant, under Chhattisgarh State Electricity Board (CSEB) operating at Korba, Chhattisgarh. Chemical analysis is used to predict the emission level for different parameters of ESP. The results reveal that for a constant outlet PM concentration and fly ash percentage, the total collection area decreases with the increase in migration velocity. For constant migration velocity and outlet PM concentration, the total collection area increases with the increase in the fly ash percent. For constant migration velocity and outlet e PM concentration, the total collection area increases with the ash content in the coal. i.e. from minimum ash to maximum ash. As far as the efficiency is concerned, it increases with the fly ash percent, ash content and the inlet dust concentration but decreases with the outlet PM concentration at constant migration velocity, fly ash and ash content.
Tempeler, J; Danylyuk, S; Brose, S; Loosen, P; Juschkin, L
2018-07-06
In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 10 10 cm -2 .
NASA Astrophysics Data System (ADS)
Peretyagin, Vladimir S.; Korolev, Timofey K.; Chertov, Aleksandr N.
2017-02-01
The problems of dressability the solid minerals are attracted attention of specialists, where the extraction of mineral raw materials is a significant sector of the economy. There are a significant amount of mineral ore dressability methods. At the moment the radiometric dressability methods are considered the most promising. One of radiometric methods is method photoluminescence. This method is based on the spectral analysis, amplitude and kinetic parameters luminescence of minerals (under UV radiation), as well as color parameters of radiation. The absence of developed scientific and methodological approaches of analysis irradiation area to UV radiation as well as absence the relevant radiation sources are the factors which hinder development and use of photoluminescence method. The present work is devoted to the development of multi-element UV radiation source designed for the solution problem of analysis and sorting minerals by their selective luminescence. This article is presented a method of theoretical modeling of the radiation devices based on UV LEDs. The models consider such factors as spectral component, the spatial and energy parameters of the LEDs. Also, this article is presented the results of experimental studies of the some samples minerals.
Banerjee, Rupak K; Peelukhana, Srikara V; Goswami, Ishan
2014-02-07
The decision to perform intervention on a patient with coronary stenosis is often based on functional diagnostic parameters obtained from pressure and flow measurements using sensor-tipped guidewire at maximal vasodilation (hyperemia). Recently, a rapid exchange Monorail Pressure Sensor catheter of 0.022″ diameter (MPS22), with pressure sensor at distal end has been developed for improved assessment of stenosis severity. The hollow shaft of the MPS22 is designed to slide over any standard 0.014″ guidewire (G14). Hence, influence of MPS22 diameter on coronary diagnostic parameters needs investigation. An in vitro experiment was conducted to replicate physiologic flows in three representative area stenosis (AS): mild (64% AS), intermediate (80% AS), and severe (90% AS), for two arterial diameters, 3mm (N2; more common) and 2.5mm (N1). Influence of MPS22 on diagnostic parameters: fractional flow reserve (FFR) and pressure drop coefficient (CDP) was evaluated both at hyperemic and basal conditions, while comparing it with G14. The FFR values decreased for the MPS22 in comparison to G14, (Mild: 0.87 vs 0.88, Intermediate: 0.68 vs 0.73, Severe: 0.48 vs 0.56) and CDP values increased (Mild: 16 vs 14, Intermediate: 75 vs 56, Severe: 370 vs 182) for N2. Similar trend was observed in the case of N1. The FFR values were found to be well above (mild) and below (intermediate and severe) the diagnostic cut-off of 0.75. Therefore, MPS22 catheter can be used as a possible alternative to G14. Further, irrespective of the MPS22 or G14, basal FFR (FFRb) had overlapping ranges in close proximity for clinically relevant mild and intermediate stenoses that will lead to diagnostic uncertainty under both N1 and N2. However, CDPb had distinct ranges for different stenosis severities and could be a potential diagnostic parameter under basal conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Relevance similarity: an alternative means to monitor information retrieval systems
Dong, Peng; Loh, Marie; Mondry, Adrian
2005-01-01
Background Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for the measurement of the variation of relevance assessment. In a situation where individual assessment can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document same as the gold standard over the total number of assessors in the group. Methods The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups of people according to their domain knowledge. They assessed the relevance of retrieved topics obtained by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated as the ratio of positive concordance with the gold standard for each topic. Results The similarity comparison among groups showed that a higher degree of agreements exists among evaluators with more subject knowledge. The performance of the retrieval system was not significantly different as a result of the variations in relevance assessment in this particular query set. Conclusion In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data. PMID:16029513
Sampling in ecology and evolution - bridging the gap between theory and practice
Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.
2010-01-01
Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. We conclude by a call for development of methods to unbiasedly estimate nonlinear ecologically relevant parameters, in order to make inferences while fulfilling requirements of both sampling theory and field work logistics. ?? 2010 The Authors.
Design and Printing Strategies in 3D Bioprinting of Cell-Hydrogels: A Review.
Lee, Jia Min; Yeong, Wai Yee
2016-11-01
Bioprinting is an emerging technology that allows the assembling of both living and non-living biological materials into an ideal complex layout for further tissue maturation. Bioprinting aims to produce engineered tissue or organ in a mechanized, organized, and optimized manner. Various biomaterials and techniques have been utilized to bioprint biological constructs in different shapes, sizes and resolutions. There is a need to systematically discuss and analyze the reported strategies employed to fabricate these constructs. We identified and discussed important design factors in bioprinting, namely shape and resolution, material heterogeneity, and cellular-material remodeling dynamism. Each design factors are represented by the corresponding process capabilities and printing parameters. The process-design map will inspire future biomaterials research in these aspects. Design considerations such as data processing, bio-ink formulation and process selection are discussed. Various printing and crosslinking strategies, with relevant applications, are also systematically reviewed. We categorized them into 5 general bioprinting strategies, including direct bioprinting, in-process crosslinking, post-process crosslinking, indirect bioprinting and hybrid bioprinting. The opportunities and outlook in 3D bioprinting are highlighted. This review article will serve as a framework to advance computer-aided design in bioprinting technologies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
NASA Astrophysics Data System (ADS)
Federici, Gianfranco; Raffray, A. René
1997-04-01
The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the variuos ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.
An Analytical Solution for Transient Thermal Response of an Insulated Structure
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated aerospace vehicle structure subjected to a simplified heat pulse. This simplified problem approximates the thermal response of a thermal protection system of an atmospheric entry vehicle. The exact analytical solution is solely a function of two non-dimensional parameters. A simpler function of these two parameters was developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Using these techniques, the maximum structural temperature rise was calculated using the analytical solutions and shown to typically agree with finite element simulations within 10 to 20 percent over the relevant range of parameters studied.
Physiological Parameters for Oral Delivery and In vitro Testing
Mudie, Deanna M.; Amidon, Gordon L.; Amidon, Gregory E.
2010-01-01
Pharmaceutical solid oral dosage forms must undergo dissolution in the intestinal fluids of the gastrointestinal tract before they can be absorbed and reach the systemic circulation. Therefore, dissolution is a critical part of the drug-delivery process. The rate and extent of drug dissolution and absorption depend on the characteristics of the active ingredient as well as properties of the dosage form. Just as importantly, characteristics of the physiological environment such as buffer species, pH, bile salts, gastric emptying rate, intestinal motility, and hydrodynamics can significantly impact dissolution and absorption. While significant progress has been made since 1970 when the first compendial dissolution test was introduced (USP Apparatus 1), current dissolution testing does not take full advantage of the extensive physiologic information that is available. For quality control purposes, where the question is one of lot-to-lot consistency in performance, using nonphysiologic test conditions that match drug and dosage form properties with practical dissolution media and apparatus may be appropriate. However, where in vitro – in vivo correlations are desired, it is logical to consider and utilize knowledge of the in vivo condition. This publication critically reviews the literature that is relevant to oral human drug delivery. Physiologically relevant information must serve as a basis for the design of dissolution test methods and systems that are more representative of the human condition. As in vitro methods advance in their physiological relevance, better in vitro - in vivo correlations will be possible. This will, in turn, lead to in vitro systems that can be utilized to more effectively design dosage forms that have improved and more consistent oral bioperformance. PMID:20822152
Trends in Culturally Relevant Interface Design Features for Latino Web Site Users
ERIC Educational Resources Information Center
Sachau, Lori L.; Hutchinson, Susan R.
2012-01-01
There is a lack of published research on designing Web-based instruction for the adult U.S. Latino population. Instructional designers need guidance on how to design culturally relevant learning environments for this audience, particularly for Latino people from Mexican heritage. The authors used content analysis to investigate the extent to which…
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and a multi-level relevance feedback in real time on PubMed. RefMed supports a multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS
2010-01-01
Background Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and a multi-level relevance feedback in real time on PubMed. Results RefMed supports a multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user’s feedback and efficiently processes the function to return relevant articles in real time. PMID:20406504
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environment impact through replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, impact of uncertainties to the functional performance measures needs to be considered in design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values of the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Physical Simulation for Low-Energy Astrobiology Environmental Scenarios
NASA Astrophysics Data System (ADS)
Gormly, Sherwin; Adams, V. D.; Marchand, Eric
2003-12-01
Speculations about the extent of life of independent origin and the potential for sustaining Earth-based life in subsurface environments on both Europa and Mars are of current and relevant interest. Theoretical modeling based on chemical energetics has demonstrated potential options for viable biochemical metabolism (metabolic pathways) in these types of environments. Also, similar environments on Earth show microbial activity. However, actual physical simulation testing of specific environments is required to confidently determine the interplay of various physical and chemical parameters on the viability of relevant metabolic pathways. This testing is required to determine the potential to sustain life in these environments on a specific scenario by scenario basis. This study examines the justification, design, and fabrication of, as well as the culture selection and screening for, a psychrophilic/halophilic/anaerobic digester. This digester is specifically designed to conform to physical testing needs of research relating to potential extent physical environments on Europa and other planetary bodies in the Solar System. The study is a long-term effort and is currently in an early phase, with only screening-level data at this time. Full study results will likely take an additional 2 years. However, researchers in electromagnetic biosignature and in situ instrument development should be aware of the study at this time, as they are invited to participate in planning for future applications of the digester facility.
NASA Astrophysics Data System (ADS)
Kuenzig, Thomas; Dehé, Alfons; Krumbein, Ulrich; Schrag, Gabriele
2018-05-01
Maxing out the technological limits in order to satisfy the customers’ demands and obtain the best performance of micro-devices and-systems is a challenge of today’s manufacturers. Dedicated system simulation is key to investigate the potential of device and system concepts in order to identify the best design w.r.t. the given requirements. We present a tailored, physics-based system-level modeling approach combining lumped with distributed models that provides detailed insight into the device and system operation at low computational expense. The resulting transparent, scalable (i.e. reusable) and modularly composed models explicitly contain the physical dependency on all relevant parameters, thus being well suited for dedicated investigation and optimization of MEMS devices and systems. This is demonstrated for an industrial capacitive silicon microphone. The performance of such microphones is determined by distributed effects like viscous damping and inhomogeneous capacitance variation across the membrane as well as by system-level phenomena like package-induced acoustic effects and the impact of the electronic circuitry for biasing and read-out. The here presented model covers all relevant figures of merit and, thus, enables to evaluate the optimization potential of silicon microphones towards high fidelity applications. This work was carried out at the Technical University of Munich, Chair for Physics of Electrotechnology. Thomas Kuenzig is now with Infineon Technologies AG, Neubiberg.
Díaz Lantada, Andrés; Alarcón Iniesta, Hernán; García-Ruíz, Josefa Predestinación
2016-02-01
Articular repair is a relevant and challenging area for the emerging fields of tissue engineering and biofabrication. The need of significant gradients of properties, for the promotion of osteochondral repair, has led to the development of several families of composite biomaterials and scaffolds, using different effective approaches, although a perfect solution has not yet been found. In this study we present the design, modeling, rapid manufacturing and in vitro testing of a composite scaffold aimed at osteochondral repair. The presented composite scaffold stands out for having a functional gradient of density and stiffness in the bony phase, obtained in titanium by means of computer-aided design combined with additive manufacture using selective laser sintering. The chondral phase is obtained by sugar leaching, using a PDMS matrix and sugar as porogen, and is joined to the bony phase during the polymerization of PDMS, therefore avoiding the use of supporting adhesives or additional intermediate layers. The mechanical performance of the construct is biomimetic and the stiffness values of the bony and chondral phases can be tuned to the desired applications, by means of controlled modifications of different parameters. A human mesenchymal stem cell (h-MSC) conditioned medium (CM) is used for improving scaffold response. Cell culture results provide relevant information regarding the viability of the composite scaffolds used. Copyright © 2015 Elsevier B.V. All rights reserved.
Magnesium degradation observed in situ under flow by synchrotron radiation based microtomography
NASA Astrophysics Data System (ADS)
Feyerabend, Frank; Dose, Thomas; Xu, Yuling; Beckmann, Felix; Stekker, Michael; Willumeit-Römer, Regine; Schreyer, Andreas; Wilde, Fabian; Hammel, Jörg U.
2016-10-01
The use of degradable magnesium based implants is becoming clinically relevant, e.g. for the use as bone screws. Still there is a lack of analyzing techniques to characterize the in vitro degradation behavior of implant prototypes. The aim of this study was to design an in situ environment to continuously monitor the degradation processes under physiological conditions by time-lapse SRμCT. The use of physiological conditions was chosen to get a better approach to the in vivo situation, as it could be shown by many studies, that these conditions change on the one hand the degradation rate and on the other hand also the formed degradation products. The resulting in situ environment contains a closed bioreactor system to control and monitor the relevant parameters (37°C, 5 % O2, 20 % CO2) and to grant sterility of the setup. A flow cell was designed and manufactured from polyether etherketone (PEEK), which was chosen because of the good mechanical properties, high thermal and chemical resistance and radiographic translucency. Sterilization of the system including the sample was reached by a transient flush with 70 % ethanol and subsequent replacement by physiological medium (Modified Eagle Medium alpha). As proof of principle it could be shown that the system remained sterile during a beamtime of several days and that the continuous SRμCT imaging was feasible.
NASA Astrophysics Data System (ADS)
Russo, Maria; Bevilacqua, Paolo; Netti, Paolo Antonio; Torino, Enza
2016-11-01
Recent advancements in imaging diagnostics have focused on the use of nanostructures that entrap Magnetic Resonance Imaging (MRI) Contrast Agents (CAs), without the need to chemically modify the clinically approved compounds. Nevertheless, the exploitation of microfluidic platforms for their controlled and continuous production is still missing. Here, a microfluidic platform is used to synthesize crosslinked Hyaluronic Acid NanoParticles (cHANPs) in which a clinically relevant MRI-CAs, gadolinium diethylenetriamine penta-acetic acid (Gd-DTPA), is entrapped. This microfluidic process facilitates a high degree of control over particle synthesis, enabling the production of monodisperse particles as small as 35 nm. Furthermore, the interference of Gd-DTPA during polymer precipitation is overcome by finely tuning process parameters and leveraging the use of hydrophilic-lipophilic balance (HLB) of surfactants and pH conditions. For both production strategies proposed to design Gd-loaded cHANPs, a boosting of the relaxation rate T1 is observed since a T1 of 1562 is achieved with a 10 μM of Gd-loaded cHANPs while a similar value is reached with 100 μM of the relevant clinical Gd-DTPA in solution. The advanced microfluidic platform to synthesize intravascularly-injectable and completely biocompatible hydrogel nanoparticles entrapping clinically approved CAs enables the implementation of straightforward and scalable strategies in diagnostics and therapy applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy; English, Shawn; Briggs, Timothy
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be definedmore » through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process as the described flexural characterization was used for model validation.« less
Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo
2015-03-01
The main scope of this paper is to study the optimal control practices of malaria, by discussing the implementation of a catalog of optimal control strategies in presence of parameter uncertainties, which is typical of infectious diseases data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of Long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control the disease can be brought to a stable disease free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate the robustness of the optimal control: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the designing of cost-effective strategies for disease controls with multiple interventions, even under considerable uncertainty of model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
Testing of mechanical ventilators and infant incubators in healthcare institutions.
Badnjevic, Almir; Gurbeta, Lejla; Jimenez, Elvira Ruiz; Iadanza, Ernesto
2017-01-01
The medical device industry has grown rapidly and incessantly over the past century. The sophistication and complexity of the designed instrumentation is nowadays rising and, with it, has also increased the need to develop some better, more effective and efficient maintenance processes, as part of the safety and performance requirements. This paper presents the results of performance tests conducted on 50 mechanical ventilators and 50 infant incubators used in various public healthcare institutions. Testing was conducted in accordance to safety and performance requirements stated in relevant international standards, directives and legal metrology policies. Testing of output parameters for mechanical ventilators was performed in 4 measuring points while testing of output parameters for infant incubators was performed in 7 measuring points for each infant incubator. As performance criteria, relative error of output parameters for mechanical ventilators and absolute error of output parameters for infant incubators was calculated. The ranges of permissible error, for both groups of devices, are regulated by the Rules on Metrological and Technical Requirements published in the Official Gazette of Bosnia and Herzegovina No. 75/14, which are defined based on international recommendations, standards and guidelines. All ventilators and incubators were tested by etalons calibrated in an ISO 17025 accredited laboratory, which provides compliance to international standards for all measured parameters.The results show that 30% of the tested medical devices are not operating properly and should be serviced, recalibrated and/or removed from daily application.
Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto
2014-04-01
Lectins (Ls) play an important role in many diseases such as different types of cancer, parasitic infections and other diseases. Interestingly, the Protein Data Bank (PDB) contains +3000 protein 3D structures with unknown function. Thus, we can in principle, discover new Ls mining non-annotated structures from PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We have performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model, which is able to discriminate 3D structure of Ls from other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity=96.7 % (for Ls), Specificity=87.6 % (non-active proteins), and Accuracy=92.5 % (for all proteins), considering altogether both the training and external prediction series. In mode 2, users can carry out an automatic retrieval of protein structures from PDB. We illustrated the use of this server, in operation mode 1, performing a data mining of PDB. We predicted Ls scores for +2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Langlands Parameters of Quivers in the Sato Grassmannian
NASA Astrophysics Data System (ADS)
Luu, Martin T.; Penciak, Matej
2018-01-01
Motivated by quantum field theoretic partition functions that can be expressed as products of tau functions of the KP hierarchy we attach several types of local geometric Langlands parameters to quivers in the Sato Grassmannian. We study related questions of Virasoro constraints, of moduli spaces of relevant quivers, and of classical limits of the Langlands parameters.
The Use of Logistics n the Quality Parameters Control System of Material Flow
ERIC Educational Resources Information Center
Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.
2016-01-01
The relevance of the research problem is conditioned on the need to justify the use of the logistics methodologies in the quality parameters control process of material flows. The goal of the article is to develop theoretical principles and practical recommendations for logistical system control in material flows quality parameters. A leading…
NASA Astrophysics Data System (ADS)
Xiao, Shou-Ne; Wang, Ming-Meng; Hu, Guang-Zhong; Yang, Guang-Wu
2017-09-01
In view of the problem that it's difficult to accurately grasp the influence range and transmission path of the vehicle top design requirements on the underlying design parameters. Applying directed-weighted complex network to product parameter model is an important method that can clarify the relationships between product parameters and establish the top-down design of a product. The relationships of the product parameters of each node are calculated via a simple path searching algorithm, and the main design parameters are extracted by analysis and comparison. A uniform definition of the index formula for out-in degree can be provided based on the analysis of out-in-degree width and depth and control strength of train carriage body parameters. Vehicle gauge, axle load, crosswind and other parameters with higher values of the out-degree index are the most important boundary conditions; the most considerable performance indices are the parameters that have higher values of the out-in-degree index including torsional stiffness, maximum testing speed, service life of the vehicle, and so on; the main design parameters contain train carriage body weight, train weight per extended metre, train height and other parameters with higher values of the in-degree index. The network not only provides theoretical guidance for exploring the relationship of design parameters, but also further enriches the application of forward design method to high-speed trains.
Pfenniger, Alois; Obrist, Dominik; Stahel, Andreas; Koch, Volker M; Vogel, Rolf
2013-07-01
As the complexity of active medical implants increases, the task of embedding a life-long power supply at the time of implantation becomes more challenging. A periodic renewal of the energy source is often required. Human energy harvesting is, therefore, seen as a possible remedy. In this paper, we present a novel idea to harvest energy from the pressure-driven deformation of an artery by the principle of magneto-hydrodynamics. The generator relies on a highly electrically conductive fluid accelerated perpendicularly to a magnetic field by means of an efficient lever arm mechanism. An artery with 10 mm inner diameter is chosen as a potential implantation site and its ability to drive the generator is established. Three analytical models are proposed to investigate the relevant design parameters and to determine the existence of an optimal configuration. The predicted output power reaches 65 μW according to the first two models and 135 μW according to the third model. It is found that the generator, designed as a circular structure encompassing the artery, should not exceed a total volume of 3 cm³.
Microstructure design for fast oxygen conduction
Aidhy, Dilpuneet S.; Weber, William J.
2015-11-11
Research from the last decade has shown that in designing fast oxygen conducting materials for electrochemical applications has largely shifted to microstructural features, in contrast to material-bulk. In particular, understanding oxygen energetics in heterointerface materials is currently at the forefront, where interfacial tensile strain is being considered as the key parameter in lowering oxygen migration barriers. Nanocrystalline materials with high densities of grain boundaries have also gathered interest that could possibly allow leverage over excess volume at grain boundaries, providing fast oxygen diffusion channels similar to those previously observed in metals. In addition, near-interface phase transformations and misfit dislocations aremore » other microstructural phenomenon/features that are being explored to provide faster diffusion. In this review, the current understanding on oxygen energetics, i.e., thermodynamics and kinetics, originating from these microstructural features is discussed. Moreover, our experimental observations, theoretical predictions and novel atomistic mechanisms relevant to oxygen transport are highlighted. In addition, the interaction of dopants with oxygen vacancies in the presence of these new microstructural features, and their future role in the design of future fast-ion conductors, is outlined.« less
NASA Astrophysics Data System (ADS)
Viswanathan, V. K.
1982-02-01
This paper describes the need for non-raytracing schemes in the optical design and analysis of large carbon-dioxide lasers like the Gigawatt,1 Gemini, 2 and Helios3 lasers currently operational at Los Alamos, and the Antares 4 laser fusion system under construction. The scheme currently used at Los Alamos involves characterizing the various optical components with a Zernike polynomial sets obtained by the digitization6 of experimentally produced interferograms of the components. A Fast Fourier Transform code then propagates the complex amplitude and phase of the beam through the whole system and computes the optical parameters of interest. The analysis scheme is illustrated through examples of the Gigawatt, Gemini, and Helios systems. A possible way of using the Zernike polynomials in optical design problems of this type is discussed. Comparisons between the computed values and experimentally obtained results are made and it is concluded that this appears to be a valid approach. As this is a review article, some previously published results are also used where relevant.
Conceptual Design of Tail-Research EXperiment (T-REX) on Space Plasma Environment Research Facility
NASA Astrophysics Data System (ADS)
Xiao, Qingmei; Wang, Xiaogang; E, Peng; Shen, Chao; Wang, Zhibin; Mao, Aohua; Xiao, Chijie; Ding, Weixing; Ji, Hantao; Ren, Yang
2016-10-01
Space Environment Simulation Research Infrastructure (SESRI), a scientific project for a major national facility of fundamental researches, has recently been launched at Harbin Institute of Technology (HIT). The Space Plasma Environment Research Facility (SPERF) for simulation of space plasma environment is one of the components of SESRI. It is designed to investigate fundamental issues in space plasma environment, such as energetic particles transportation and the interaction with waves in magnetosphere, magnetic reconnection at magnetopause and magnetotail, etc. Tail-Research Experiment (T-REX) is part of the SPERF for laboratory studies of space physics relevant to tail reconnection and dipolarization process. T-REX is designed to carry out two kinds of experiments: the tail plasmamoid for magnetic reconnection and magnetohydrodynamic waves excited by high speed plasma jet. In this presentation, the scientific goals and experimental plans for T-REX together with the means applied to generate the plasma with desired parameters are reviewed. Two typical scenarios of T-REX with operations of plasma sources and various magnetic configurations to study specific physical processes in space plasmas will also be presented.
Business model design for a wearable biofeedback system.
Hidefjäll, Patrik; Titkova, Dina
2015-01-01
Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehab care and occupational health stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs, according to the Business Model Canvas framework. Needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as: "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".
Gil-Rostra, Jorge; Ferrer, Francisco J; Espinós, Juan Pedro; González-Elipe, Agustín R; Yubero, Francisco
2017-05-17
A multilayer luminescent design concept is presented to develop energy-sensitive radiation-beam monitors on the basis of colorimetric analysis. Each luminescent layer within the stack consists of rare-earth-doped transparent oxides of optical quality and a characteristic luminescent emission under excitation with electron or ion beams. For a given type of particle beam (electron, protons, α particles, etc.), its penetration depth and therefore its energy loss at a particular buried layer within the multilayer stack depend on the energy of the initial beam. The intensity of the luminescent response of each layer is proportional to the energy deposited by the radiation beam within the layer, so characteristic color emission will be achieved if different phosphors are considered in the layers of the luminescent stack. Phosphor doping, emission efficiency, layer thickness, and multilayer structure design are key parameters relevant to achieving a broad colorimetric response. Two case examples are designed and fabricated to illustrate the capabilities of these new types of detector to evaluate the kinetic energy of either electron beams of a few kilo-electron volts or α particles of a few mega-electron volts.
NASA Astrophysics Data System (ADS)
Meng, Fei; Tao, Gang; Zhang, Tao; Hu, Yihuai; Geng, Peng
2015-08-01
Shifting quality is a crucial factor in all parts of the automobile industry. To ensure an optimal gear shifting strategy with best fuel economy for a stepped automatic transmission, the controller should be designed to meet the challenge of lacking of a feedback sensor to measure the relevant variables. This paper focuses on a new kind of automatic transmission using proportional solenoid valve to control the clutch pressure, a speed difference of the clutch based control strategy is designed for the shift control during the inertia phase. First, the mechanical system is shown and the system dynamic model is built. Second, the control strategy is designed based on the characterization analysis of models which are derived from dynamics of the drive line and electro-hydraulic actuator. Then, the controller uses conventional Proportional-Integral-Derivative control theory, and a robust two-degree-of-freedom controller is also carried out to determine the optimal control parameters to further improve the system performance. Finally, the designed control strategy with different controller is implemented on a simulation model. The compared results show that the speed difference of clutch can track the desired trajectory well and improve the shift quality effectively.
NASA Astrophysics Data System (ADS)
Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin
2017-04-01
Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. Availability of high resolution remote sensing data and high-performance computing now make it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts on the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees,... as seen from the incoming rock) relatively to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock / ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects. Digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with resolution smaller than the block size might have little relevance while increasing computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) where the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more complex hillside is simulated by combining three components: a) a global trend (planar surface), b) local systematic components (sine waves), c) random roughness (Gaussian, zero-mean noise). The parameters for simulating these components are estimated for three typical scenarios of rockfall terrains: soft soil, fine scree and coarse scree, based on expert knowledge and available airborne and terrestrial laser scanning data. For each scenario, the reference terrain is created and used to compute input data for RockyFor3D simulations at different scales, i.e. DTMs with resolutions from 0.5 m to 20 m and associated roughness parameters. Subsequent analysis mainly focuses on the sensitivity of simulations both in terms of run-out envelope and kinetic energy distribution. Guidelines drawn from the results are expected to help experts handle the scale issue while integrating remote sensing data and field measurements of roughness in rockfall simulations.
Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl
2012-01-01
The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. Besides, the influence of the geographical location on the Net Present Value calculated for a 20 years lifespan (NPV20) of each technology and its robustness towards typical process fluctuations and operational upsets were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When, in an economical evaluation, the robustness is as relevant as the overall costs (NPV20), the hybrid technology would move up next to BTF as the most preferred technologies. Copyright © 2012 Elsevier Inc. All rights reserved.
Role of κ→λ light-chain constant-domain switch in the structure and functionality of A17 reactibody
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponomarenko, Natalia; Chatziefthimiou, Spyros D.; Kurkova, Inna
2014-03-01
Catalytic antibody variants with κ and λ light-chain constant domains show differences in their crystal structures which lead to subtle changes in catalytic efficiency and thermodynamic parameters as well as in their affinity for peptide substrates. The engineering of catalytic function in antibodies requires precise information on their structure. Here, results are presented that show how the antibody domain structure affects its functionality. The previously designed organophosphate-metabolizing reactibody A17 has been re-engineered by replacing its constant κ light chain by the λ chain (A17λ), and the X-ray structure of A17λ has been determined at 1.95 Å resolution. It was foundmore » that compared with A17κ the active centre of A17λ is displaced, stabilized and made more rigid owing to interdomain interactions involving the CDR loops from the V{sub L} and V{sub H} domains. These V{sub L}/V{sub H} domains also have lower mobility, as deduced from the atomic displacement parameters of the crystal structure. The antibody elbow angle is decreased to 126° compared with 138° in A17κ. These structural differences account for the subtle changes in catalytic efficiency and thermodynamic parameters determined with two organophosphate ligands, as well as in the affinity for peptide substrates selected from a combinatorial cyclic peptide library, between the A17κ and A17λ variants. The data presented will be of interest and relevance to researchers dealing with the design of antibodies with tailor-made functions.« less
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software--hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software--hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family ofmore » molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.« less
Luterbacher, Jeremy S; Moran-Mirabal, Jose M; Burkholder, Eric W; Walker, Larry P
2015-01-01
Enzymatic hydrolysis is one of the critical steps in depolymerizing lignocellulosic biomass into fermentable sugars for further upgrading into fuels and/or chemicals. However, many studies still rely on empirical trends to optimize enzymatic reactions. An improved understanding of enzymatic hydrolysis could allow research efforts to follow a rational design guided by an appropriate theoretical framework. In this study, we present a method to image cellulosic substrates with complex three-dimensional structure, such as filter paper, undergoing hydrolysis under conditions relevant to industrial saccharification processes (i.e., temperature of 50°C, using commercial cellulolytic cocktails). Fluorescence intensities resulting from confocal images were used to estimate parameters for a diffusion and reaction model. Furthermore, the observation of a relatively constant bound enzyme fluorescence signal throughout hydrolysis supported our modeling assumption regarding the structure of biomass during hydrolysis. The observed behavior suggests that pore evolution can be modeled as widening of infinitely long slits. The resulting model accurately predicts the concentrations of soluble carbohydrates obtained from independent saccharification experiments conducted in bulk, demonstrating its relevance to biomass conversion work. © 2014 Wiley Periodicals, Inc.
Synchronisation, acquisition and tracking for telemetry and data reception
NASA Astrophysics Data System (ADS)
Vandoninck, A.
1992-06-01
The important parameters of synchronization, acquisition, and tracking are addressed, and each function is highlighted separately. The following sequence is such as the functions occur in the system in time and for the type of data to be received, with distinction between telemetry and data reception, between direct carrier modulation or the use of a subcarrier, and between deep space and normal reception. For the telemetry reception the acquisition is described taking into account the difference in performances as geostationary or polar orbits, and the dependencies on the different Doppler offsets and rates are distinguished. The related functions and parameters are covered and the specifications of an average receiver are summarized. The synchronization of the valid data is described with a distinction for data directly modulated or via a subcarrier, the type of modulation and bitrate. The relevant functions and parameters of the average receiver/demodulator are summarized. The tracking of the signal in the course of the operational phase is described and relevant parameters of an actual system are presented. The reception of real data is handled and a sequence of acquisition, synchronization, and tracking is applied. Here higher bitrates and direct modulation schemes play an important role. The market equipment with the relevant parameters are discussed. The three functions in cases where deep reception is needed are covered. The high performance receiver/demodulator functions and how the acquisition, synchronization, and tracking is handled in such application, are explained.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate models studies are not only challenged by the compute intensive nature of these models but also by the high-dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty necessary to future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machine) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space in regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
Production tolerance of additive manufactured polymeric objects for clinical applications.
Braian, Michael; Jimbo, Ryo; Wennerberg, Ann
2016-07-01
To determine the production tolerance of four commercially available additive manufacturing systems. By reverse engineering annex A and B from the ISO_12836;2012, two geometrical figures relevant to dentistry was obtained. Object A specifies the measurement of an inlay-shaped object and B a multi-unit specimen to simulate a four-unit bridge model. The objects were divided into x, y and z measurements, object A was divided into a total of 16 parameters and object B was tested for 12 parameters. The objects were designed digitally and manufactured by professionals in four different additive manufacturing systems; each system produced 10 samples of each objects. For object A, three manufacturers presented an accuracy of <100μm and one system showed an accuracy of <20μm. For object B, all systems presented an accuracy of <100μm, and most parameters were <40μm. The standard deviation for most parameters were <40μm. The growing interest and use of intra-oral digitizing systems stresses the use of computer aided manufacturing of working models. The additive manufacturing techniques has the potential to help us in the digital workflow. Thus, it is important to have knowledge about production accuracy and tolerances. This study presents a method to test additive manufacturing units for accuracy and repeatability. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Palma, J. L.; Belo-Pereira, M.; Leo, L. S.; Fernando, J.; Wildmann, N.; Gerz, T.; Rodrigues, C. V.; Lopes, A. S.; Lopes, J. C.
2017-12-01
Perdigão is the largest of a series of wind-mapping studies embedded in the on-going NEWA (New European Wind Atlas) Project. The intensive observational period of the Perdigão field experiment resulted in an unprecedented volume of data, covering several wind conditions through 46 consecutive days between May and June 2017. For researchers looking into specific events, it is time consuming to scrutinise the datasets looking for appropriate conditions. Such task becomes harder if the parameters of interest were not measured directly, instead requiring their computation from the raw datasets. This work will present the e-Science platform developed by University of Porto for the Perdigao dataset. The platform will assist scientists of Perdigao and the larger scientific community in extrapolating the datasets associated to specific flow regimes of interest as well as automatically performing post-processing/filtering operations internally in the platform. We will illustrate the flow regime categories identified in Perdigao based on several parameters such as weather type classification, cloud characteristics, as well as stability regime indicators (Brunt-Väisälä frequency, Scorer parameter, potential temperature inversion heights, dimensionless Richardson and Froude numbers) and wind regime indicators. Examples of some of the post-processing techniques available in the e-Science platform, such as the Savitzky-Golay low-pass filtering technique, will be also presented.
Stress-based animal models of depression: Do we actually know what we are doing?
Yin, Xin; Guven, Nuri; Dietis, Nikolas
2016-12-01
Depression is one of the leading causes of disability and a significant health-concern worldwide. Much of our current understanding on the pathogenesis of depression and the pharmacology of antidepressant drugs is based on pre-clinical models. Three of the most popular stress-based rodent models are the forced swimming test, the chronic mild stress paradigm and the learned helplessness model. Despite their recognizable advantages and limitations, they are associated with an immense variability due to the high number of design parameters that define them. Only few studies have reported how minor modifications of these parameters affect the model phenotype. Thus, the existing variability in how these models are used has been a strong barrier for drug development as well as benchmark and evaluation of these pre-clinical models of depression. It also has been the source of confusing variability in the experimental outcomes between research groups using the same models. In this review, we summarize the known variability in the experimental protocols, identify the main and relevant parameters for each model and describe the variable values using characteristic examples. Our view of depression and our efforts to discover novel and effective antidepressants is largely based on our detailed knowledge of these testing paradigms, and requires a sound understanding around the importance of individual parameters to optimize and improve these pre-clinical models. Copyright © 2016 Elsevier B.V. All rights reserved.
LIFESPAN: A tool for the computer-aided design of longitudinal studies
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Hertzog, Christopher; Lindenberger, Ulman
2015-01-01
Researchers planning a longitudinal study typically search, more or less informally, a multivariate space of possible study designs that include dimensions such as the hypothesized true variance in change, indicator reliability, the number and spacing of measurement occasions, total study time, and sample size. The main search goal is to select a research design that best addresses the guiding questions and hypotheses of the planned study while heeding applicable external conditions and constraints, including time, money, feasibility, and ethical considerations. Because longitudinal study selection ultimately requires optimization under constraints, it is amenable to the general operating principles of optimization in computer-aided design. Based on power equivalence theory (MacCallum et al., 2010; von Oertzen, 2010), we propose a computational framework to promote more systematic searches within the study design space. Starting with an initial design, the proposed framework generates a set of alternative models with equal statistical power to detect hypothesized effects, and delineates trade-off relations among relevant parameters, such as total study time and the number of measurement occasions. We present LIFESPAN (Longitudinal Interactive Front End Study Planner), which implements this framework. LIFESPAN boosts the efficiency, breadth, and precision of the search for optimal longitudinal designs. Its initial version, which is freely available at http://www.brandmaier.de/lifespan, is geared toward the power to detect variance in change as specified in a linear latent growth curve model. PMID:25852596
Mixing with applications to inertial-confinement-fusion implosions
NASA Astrophysics Data System (ADS)
Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.
2017-01-01
Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Brocco, Monica
2013-10-01
When designing a complete system of daily-telerehabilitation it should be borne in mind that properly designed methodologies should be furnished for patients to execute specific motion tasks and for care givers to assess the relevant parameters. Whether in hospital or at home, the system should feature two basic elements: (a) instrumented and walking aids or supports, (b) equipment for the assessment of parameters. Being gait the focus, the idea was to design, construct and validate - as an alternative to the complex and expensive instruments currently used - a simple, portable kit that may be easily interfaced/integrated with the most common mechanical tools used in motion rehabilitation (instrumented walkways, aids, supports), with feedback to both patient for self-monitoring and trainer/therapist (present or remote) for clinical reporting. The proposed system consists of: one step-counter, three couples of photo-emitter detectors, one central unit for collecting and processing the telemetrically transmitted data; a software interface on a dedicated PC and a network adapter. The system has been successfully validated in a clinical application on two groups of 16 subjects at the 1st and 2nd level of the Tinetti test. The degree of acceptance by subjects and care-givers was high. The system was also successfully compared with an Inertial Measurement Unit, a de facto standard. The portable kit can be used with different rehabilitation tools and different ground rugosity. The advantages are: (a) very low costs when compared with optoelectronic solutions and other portable solutions; (b) very high accuracy, also for subjects with imbalance problems; (c) good compatibility with any rehabilitative tool. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Designing for Feel: Contrasts between Human and Automated Parametric Capture of Knob Physics.
Swindells, C; MacLean, K E; Booth, K S
2009-01-01
We examine a crucial aspect of a tool intended to support designing for feel: the ability of an objective physical-model identification method to capture perceptually relevant parameters, relative to human identification performance. The feel of manual controls, such as knobs, sliders, and buttons, becomes critical when these controls are used in certain settings. Appropriate feel enables designers to create consistent control behaviors that lead to improved usability and safety. For example, a heavy knob with stiff detents for a power plant boiler setting may afford better feedback and safer operations, whereas subtle detents in an automobile radio volume knob may afford improved ergonomics and driver attention to the road. To assess the quality of our identification method, we compared previously reported automated model captures for five real mechanical reference knobs with captures by novice and expert human participants who were asked to adjust four parameters of a rendered knob model to match the feel of each reference knob. Participants indicated their satisfaction with the matches their renderings produced. We observed similar relative inertia, friction, detent strength, and detent spacing parameterizations by human experts and our automatic estimation methods. Qualitative results provided insight on users' strategies and confidence. While experts (but not novices) were better able to ascertain an underlying model in the presence of unmodeled dynamics, the objective algorithm outperformed all humans when an appropriate physical model was used. Our studies demonstrate that automated model identification can capture knob dynamics as perceived by a human, and they also establish limits to that ability; they comprise a step towards pragmatic design guidelines for embedded physical interfaces in which methodological expedience is informed by human perceptual requirements.
Mixing with applications to inertial-confinement-fusion implosions.
Rana, V; Lim, H; Melvin, J; Glimm, J; Cheng, B; Sharp, D H
2017-01-01
Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
Integrating Design and Manufacturing for a High Speed Civil Transport Wing
NASA Technical Reports Server (NTRS)
Marx, William J.; Mavris, Dimitri N.; Schrage, Daniel P.
1994-01-01
The aerospace industry is currently addressing the problem of integrating design and manufacturing. Because of the difficulties associated with using conventional, procedural techniques and algorithms, it is the authors' belief that the only feasible way to integrate the two concepts is with the development of an appropriate Knowledge-Based System (KBS). The authors propose a methodology for an aircraft producibility assessment, including a KBS, that addresses both procedural and heuristic aspects of integrating design and manufacturing of a High Speed Civil Transport (HSCT) wing. The HSCT was chosen as the focus of this investigation since it is a current NASA/aerospace industry initiative full of technological challenges involving many disciplines. The paper gives a brief background of selected previous supersonic transport studies followed by descriptions of key relevant design and manufacturing methodologies. Georgia Tech's Concurrent Engineering/Integrated Product and Process Development methodology is discussed with reference to this proposed conceptual producibility assessment. Evaluation criteria are presented that relate pertinent product and process parameters to overall product producibility. In addition, the authors' integration methodology and reasons for selecting a KBS to integrate design and manufacturing are presented in this paper. Finally, a proposed KBS is given, as well as statements of future work and overall investigation objectives.
García-Betances, Rebeca I.; Cabrera-Umpiérrez, María Fernanda; Ottaviano, Manuel; Pastorino, Matteo; Arredondo, María T.
2016-01-01
Despite the speedy evolution of Information and Computer Technology (ICT), and the growing recognition of the importance of the concept of universal design in all domains of daily living, mainstream ICT-based product designers and developers still work without any truly structured tools, guidance or support to effectively adapt their products and services to users’ real needs. This paper presents the approach used to define and evaluate parametric cognitive models that describe interaction and usage of ICT by people with aging- and disability-derived functional impairments. A multisensorial training platform was used to train, based on real user measurements in real conditions, the virtual parameterized user models that act as subjects of the test-bed during all stages of simulated disabilities-friendly ICT-based products design. An analytical study was carried out to identify the relevant cognitive functions involved, together with their corresponding parameters as related to aging- and disability-derived functional impairments. Evaluation of the final cognitive virtual user models in a real application has confirmed that the use of these models produce concrete valuable benefits to the design and testing process of accessible ICT-based applications and services. Parameterization of cognitive virtual user models allows incorporating cognitive and perceptual aspects during the design process. PMID:26907296
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
Sundaramurthy, Aravind; Chandra, Namas
2014-01-01
Detonation of a high-explosive produces shock-blast wave, shrapnel, and gaseous products. While direct exposure to blast is a concern near the epicenter, shock-blast can affect subjects, even at farther distances. When a pure shock-blast wave encounters the subject, in the absence of shrapnels, fall, or gaseous products the loading is termed as primary blast loading and is the subject of this paper. The wave profile is characterized by blast overpressure, positive time duration, and impulse and called herein as shock-blast wave parameters (SWPs). These parameters in turn are uniquely determined by the strength of high explosive and the distance of the human subjects from the epicenter. The shape and magnitude of the profile determine the severity of injury to the subjects. As shown in some of our recent works (1–3), the profile not only determines the survival of the subjects (e.g., animals) but also the acute and chronic biomechanical injuries along with the following bio-chemical sequelae. It is extremely important to carefully design and operate the shock tube to produce field-relevant SWPs. Furthermore, it is vital to identify and eliminate the artifacts that are inadvertently introduced in the shock-blast profile that may affect the results. In this work, we examine the relationship between shock tube adjustable parameters (SAPs) and SWPs that can be used to control the blast profile; the results can be easily applied to many of the laboratory shock tubes. Further, replication of shock profile (magnitude and shape) can be related to field explosions and can be a standard in comparing results across different laboratories. Forty experiments are carried out by judiciously varying SAPs such as membrane thickness, breech length (66.68–1209.68 mm), measurement location, and type of driver gas (nitrogen, helium). The effects SAPs have on the resulting shock-blast profiles are shown. Also, the shock-blast profiles of a TNT explosion from ConWep software is compared with the profiles obtained from the shock tube. To conclude, our experimental results demonstrate that a compressed-gas shock tube when designed and operated carefully can replicate the blast time profiles of field explosions accurately. Such a faithful replication is an essential first step when studying the effects of blast induced neurotrauma using animal models. PMID:25520701
Cytometric analysis of retinopathies in retinal trypsin digests
NASA Astrophysics Data System (ADS)
Ghanian, Zahra; Staniszewski, Kevin; Sorenson, Christine M.; Sheibani, Nader; Ranji, Mahsa
2014-03-01
The objective of this work was to design an automated image cytometry tool for determination of various retinal vascular parameters including extraction of features that are relevant to postnatal retinal vascular development, and the progression of diabetic retinopathy. To confirm the utility and accuracy of the software, retinal trypsin digest from TSP1-/- and diabetic Akita/+; TSP1-/- mice were analyzed. TSP1 is a critical inhibitor of development of retinopathies and lack of TSP1 exacerbates progression of early diabetic retinopathies. Loss of vascular cells of and gain more acellular capillaries as two major signs of diabetic retinopathies were used to classify a retina as normal or injured. This software allows quantification and high throughput assessment of retinopathy changes associated with diabetes.
NASA Astrophysics Data System (ADS)
Adhikari, Ramesh; Bhattacharya, Aniket; Dogariu, Aristide
We study in silico the properties of a gel consisting of DNA strands (modeled as semi-flexible chains) and linkers of varying flexibility, length, and topology. These linkers are envisioned and modeled as active components with additional attributes so as to mimic properties of a synthetic DNA gel containing motor proteins. We use Brownian dynamics to directly obtain frequency dependent complex shear moduli of the gel. We further carry out force spectroscopy on these computer generated gels and study the relaxation properties as a function of the important parameters of the model, e.g., densities and relative ratios of the DNAs and the linkers, the average life time of a link, etc. Our studies are relevant for designing synthetic bio-materials for both materials and medical applications.
Quantum thermostatted disordered systems and sensitivity under compression
NASA Astrophysics Data System (ADS)
Vanzan, Tommaso; Rondoni, Lamberto
2018-03-01
A one-dimensional quantum system with off diagonal disorder, consisting of a sample of conducting regions randomly interspersed within potential barriers is considered. Results mainly concerning the large N limit are presented. In particular, the effect of compression on the transmission coefficient is investigated. A numerical method to simulate such a system, for a physically relevant number of barriers, is proposed. It is shown that the disordered model converges to the periodic case as N increases, with a rate of convergence which depends on the disorder degree. Compression always leads to a decrease of the transmission coefficient which may be exploited to design nano-technological sensors. Effective choices for the physical parameters to improve the sensitivity are provided. Eventually large fluctuations and rate functions are analysed.
Knowledge acquisition for temporal abstraction.
Stein, A; Musen, M A; Shahar, Y
1996-01-01
Temporal abstraction is the task of detecting relevant patterns in data over time. The knowledge-based temporal-abstraction method uses knowledge about a clinical domain's contexts, external events, and parameters to create meaningful interval-based abstractions from raw time-stamped clinical data. In this paper, we describe the acquisition and maintenance of domain-specific temporal-abstraction knowledge. Using the PROTEGE-II framework, we have designed a graphical tool for acquiring temporal knowledge directly from expert physicians, maintaining the knowledge in a sharable form, and converting the knowledge into a suitable format for use by an appropriate problem-solving method. In initial tests, the tool offered significant gains in our ability to rapidly acquire temporal knowledge and to use that knowledge to perform automated temporal reasoning.
NASA Technical Reports Server (NTRS)
Hart, John E.
1996-01-01
Experiments designed to study the fluid dynamics of buoyancy driven circulations in rotating spherical shells were conducted on the United States Microgravity Laboratory 2 spacelab mission. These experiments address several aspects of prototypical global convection relevant to large scale motions on the Sun, Earth, and on the giant planets. The key feature is the consistent modeling of radially directed gravity in spherical geometry by using dielectric polarization forces. Imagery of the planforms of thermally driven flows for rapidly-rotating regimes shows an initial separation and eventual merger of equatorial and polar convection as the heating (i.e. the Rayleigh number) is increased. At low rotation rates, multiple-states of motion for the same external parameters were observed.
Experimental Evaluation of a Water Shield for a Surface Power Reactor
NASA Technical Reports Server (NTRS)
Pearson, J. B.; Reid, R.; Sadasivan, P.; Stewart, E.
2007-01-01
A water based shielding system is being investigated for use on initial lunar surface power systems. The use of water may lower overall cost (as compared to development cost for other materials) and simplify operations in the setup and handling. The thermal hydraulic performance of the shield is of significant interest. The mechanism for transferring heat through the shield is natural convection. A representative lunar surface reactor design is evaluated at various power levels in the Water Shield Testbed (WST) at the NASA Marshall Space Flight Center. The evaluation compares the experimental data from the WST to CFD models. Performance of a water shield on the lunar surface is predicted by CFD models anchored to test data, and by matching relevant dimensionless parameters.
Contact tracing of tuberculosis: a systematic review of transmission modelling studies.
Begun, Matt; Newall, Anthony T; Marks, Guy B; Wood, James G
2013-01-01
The WHO recommended intervention of Directly Observed Treatment, Short-course (DOTS) appears to have been less successful than expected in reducing the burden of TB in some high prevalence settings. One strategy for enhancing DOTS is incorporating active case-finding through screening contacts of TB patients as widely used in low-prevalence settings. Predictive models that incorporate population-level effects on transmission provide one means of predicting impacts of such interventions. We aim to identify all TB transmission modelling studies addressing contact tracing and to describe and critically assess their modelling assumptions, parameter choices and relevance to policy. We searched MEDLINE, SCOPUS, COMPENDEX, Google Scholar and Web of Science databases for relevant English language publications up to February 2012. Of the 1285 studies identified, only 5 studies met our inclusion criteria of models of TB transmission dynamics in human populations designed to incorporate contact tracing as an intervention. Detailed implementation of contact processes was only present in two studies, while only one study presented a model for a high prevalence, developing world setting. Some use of relevant data for parameter estimation was made in each study however validation of the predicted impact of interventions was not attempted in any of the studies. Despite a large body of literature on TB transmission modelling, few published studies incorporate contact tracing. There is considerable scope for future analyses to make better use of data and to apply individual based models to facilitate more realistic patterns of infectious contact. Combined with a focus on high burden settings this would greatly increase the potential for models to inform the use of contract tracing as a TB control policy. Our findings highlight the potential for collaborative work between clinicians, epidemiologists and modellers to gather data required to enhance model development and validation and hence better inform future public health policy.
Contact Tracing of Tuberculosis: A Systematic Review of Transmission Modelling Studies
Begun, Matt; Newall, Anthony T.; Marks, Guy B.; Wood, James G.
2013-01-01
The WHO recommended intervention of Directly Observed Treatment, Short-course (DOTS) appears to have been less successful than expected in reducing the burden of TB in some high prevalence settings. One strategy for enhancing DOTS is incorporating active case-finding through screening contacts of TB patients as widely used in low-prevalence settings. Predictive models that incorporate population-level effects on transmission provide one means of predicting impacts of such interventions. We aim to identify all TB transmission modelling studies addressing contact tracing and to describe and critically assess their modelling assumptions, parameter choices and relevance to policy. We searched MEDLINE, SCOPUS, COMPENDEX, Google Scholar and Web of Science databases for relevant English language publications up to February 2012. Of the 1285 studies identified, only 5 studies met our inclusion criteria of models of TB transmission dynamics in human populations designed to incorporate contact tracing as an intervention. Detailed implementation of contact processes was only present in two studies, while only one study presented a model for a high prevalence, developing world setting. Some use of relevant data for parameter estimation was made in each study however validation of the predicted impact of interventions was not attempted in any of the studies. Despite a large body of literature on TB transmission modelling, few published studies incorporate contact tracing. There is considerable scope for future analyses to make better use of data and to apply individual based models to facilitate more realistic patterns of infectious contact. Combined with a focus on high burden settings this would greatly increase the potential for models to inform the use of contract tracing as a TB control policy. Our findings highlight the potential for collaborative work between clinicians, epidemiologists and modellers to gather data required to enhance model development and validation and hence better inform future public health policy. PMID:24023742
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location - in the Earth- Moon system - is suggested for a manned cis-lunar infrastructure; proposal which opens an interesting field of study from the astrodynamics perspective. Literature offers a wide set of scientific research done on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only the coupled orbit-attitude dynamics should be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.
Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module
NASA Astrophysics Data System (ADS)
Deepak, SHARMA; Paritosh, CHAUDHURI
2018-04-01
The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out the R&D activities in the critical areas like design of tritium breeding blankets relevant to future Indian fusion devices (ITER relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO relevant TBM, to be tested in ITER as a part of the TBM program. Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attentions are given to the optimization of first wall coolant channel design and size of breeder unit module considering coolant pressure and thermal loads for the proposed Indian HCCB blanket based on ITER relevant TBM and loading conditions. These analyses will help proceeding further in designing blankets for loads relevant to the future fusion device.
Method for acquiring, storing and analyzing crystal images
NASA Technical Reports Server (NTRS)
Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)
2003-01-01
A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.
Young, Jared W; Markou, Athina
2015-09-01
Amotivation and reward-processing deficits have long been described in patients with schizophrenia and considered large contributors to patients' inability to integrate well in society. No effective treatments exist for these symptoms, partly because the neuromechanisms mediating such symptoms are poorly understood. Here, we propose a translational neuroscientific approach that can be used to assess reward/motivational deficits related to the negative symptoms of schizophrenia using behavioral paradigms that can also be conducted in experimental animals. By designing and using objective laboratory behavioral tools that are parallel in their parameters in rodents and humans, the neuromechanisms underlying behaviors with relevance to these symptoms of schizophrenia can be investigated. We describe tasks that measure the motivation of rodents to expend physical and cognitive effort to gain rewards, as well as probabilistic learning tasks that assess both reward learning and feedback-based decision making. The latter tasks are relevant because of demonstrated links of performance deficits correlating with negative symptoms in patients with schizophrenia. These tasks utilize operant techniques in order to investigate neural circuits targeting a specific domain across species. These tasks therefore enable the development of insights into altered mechanisms leading to negative symptom-relevant behaviors in patients with schizophrenia. Such findings will then enable the development of targeted treatments for these altered neuromechanisms and behaviors seen in schizophrenia. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Panescu, Dorin; Nerheim, Max; Kroll, Mark
2013-01-01
TASER(®) conducted electrical weapons (CEW) deliver electrical pulses that can inhibit a person's neuromuscular control or temporarily incapacitate. TASER X26, X26P, and X2 are among CEW models most frequently deployed by law enforcement agencies. The X2 CEW uses two cartridge bays while the X26 and X26P CEWs have only one. The TASER X26P CEW electronic output circuit design is equivalent to that of any one of the two TASER X2 outputs. The goal of this paper was to analyze the nominal electrical outputs of TASER X26, X26P, and X2 CEWs in reference to provisions of several international standards that specify safety requirements for electrical medical devices and electrical fences. Although these standards do not specifically mention CEWs, they are the closest electrical safety standards and hence give very relevant guidance. The outputs of two TASER X26 and two TASER X2 CEWs were measured and confirmed against manufacturer and other published specifications. The TASER X26, X26P, and X2 CEWs electrical output parameters were reviewed against relevant safety requirements of UL 69, IEC 60335-2-76 Ed 2.1, IEC 60479-1, IEC 60479-2, AS/NZS 60479.1, AS/NZS 60479.2 and IEC 60601-1. Prior reports on similar topics were reviewed as well. Our measurements and analyses confirmed that the nominal electrical outputs of TASER X26, X26P and X2 CEWs lie within safety bounds specified by relevant requirements of the above standards.
The medical genetics workforce: an analysis of clinical geneticist subgroups.
Cooksey, Judith A; Forte, Gaetano; Flanagan, Patricia A; Benkendorf, Judith; Blitzer, Miriam G
2006-10-01
Clinical geneticists with a Doctor of Medicine degree face challenges to meet the growing population demand for genetic services. This study was designed to assist the profession with workforce planning by identifying clinically relevant subgroups of geneticists and describing their professional characteristics and clinical practices. Geneticists' patient care productivity is compared across subgroups and other medical specialists. Part of a comprehensive national study of genetic services and the health workforce, this study uses data from a 2003 survey of geneticists certified by the American Board of Medical Genetics. This study includes 610 clinical geneticists who spend at least 5% of their time in direct patient-care services. An iterative approach was used to identify five subgroups based on the types of new patients seen. We conducted a descriptive analysis of subgroups by demographic, training, professional, and practice characteristics. The subgroups include general (36%), pediatric (28%), reproductive (15%), metabolic (14%), and adult (7%) geneticists. Clinically relevant variations across subgroups were noted in training, professional, and practice parameters. Subgroups vary across patient care hours (median, 15-33 hours/week) and total weekly work hours (52-60 hours). New patient visits (mean, 222-900/year) are higher than follow-up patient visits (mean, 155-405) for all subgroups except metabolic geneticists. Although many geneticists practice as generalist geneticists, this study provides an evidence base for distinguishing clinically relevant subgroups of geneticists. Geneticists provide similar numbers of new patient visits and far fewer follow-up visits than other medical specialists. These findings are relevant to geneticist workforce planning.
Diverter Decision Aiding for In-Flight Diversions
NASA Technical Reports Server (NTRS)
Rudolph, Frederick M.; Homoki, David A.; Sexton, George A.
1990-01-01
It was determined that artificial intelligence technology can provide pilots with the help they need in making the complex decisions concerning en route changes in a flight plan. A diverter system should have the capability to take all of the available information and produce a recommendation to the pilot. Phase three illustrated that using Joshua to develop rules for an expert system and a Statice database provided additional flexibility by permitting the development of dynamic weighting of diversion relevant parameters. This increases the fidelity of the AI functions cited as useful in aiding the pilot to perform situational assessment, navigation rerouting, flight planning/replanning, and maneuver execution. Additionally, a prototype pilot-vehicle interface (PVI) was designed providing for the integration of both text and graphical based information. Advanced technologies were applied to PVI design, resulting in a hierarchical menu based architecture to increase the efficiency of information transfer while reducing expected workload. Additional efficiency was gained by integrating spatial and text displays into an integrated user interface.
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm-namely elitism, niching, and restricted mating are not significantly improving the scalability problems.
Analysis and Numerical Simulation of EWOD of a Droplet for Application in a Variable Focus Microlens
NASA Astrophysics Data System (ADS)
Chang, Yuan-Jen; Mohseni, Kamran; Bright, Victor
2006-11-01
Modification of the curvature of the interface between a conductive (water) and isolating (oil) liquids is used in order to design a tunable microlens. Electrowetting on Dielectric (EWOD), the modification of surface energy of a conductive droplet on an isolated electrode, is employed in order to change the interface curvature and tune the microlens. Several features of the microlens design are addressed. These includes: the drop-centering mechanism, matching of the density of the two immiscible liquids, refractive indexes of the two liquids, and planar electrodes for electrowetting. A dimensional analysis is performed to identify the relevant nondimensional parameters. Direct numerical simulation of the hydrodynamic and electric fields is carried out. It is found that the focal length of the microlens changes continuously from negative to positive by applying a voltage from 0 to 200 volts. The focusing speed of the microlens is calculated to be around 10 milli-seconds. A successfully fabricated microlens device has been demonstrated.
Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don
2015-10-01
Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibody (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at a click of the button to produce reliable assay parameters in support of immunogenicity studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Skyrmion domain wall collision and domain wall-gated skyrmion logic
NASA Astrophysics Data System (ADS)
Xing, Xiangjun; Pong, Philip W. T.; Zhou, Yan
2016-08-01
Skyrmions and domain walls are significant spin textures of great technological relevance to magnetic memory and logic applications, where they can be used as carriers of information. The unique topology of skyrmions makes them display emergent dynamical properties as compared with domain walls. Some studies have demonstrated that the two topologically inequivalent magnetic objects could be interconverted by using cleverly designed geometric structures. Here, we numerically address the skyrmion domain wall collision in a magnetic racetrack by introducing relative motion between the two objects based on a specially designed junction. An electric current serves as the driving force that moves a skyrmion toward a trapped domain wall pair. We see different types of collision dynamics depending on the driving parameters. Most importantly, the modulation of skyrmion transport using domain walls is realized in this system, allowing a set of domain wall-gated logical NOT, NAND, and NOR gates to be constructed. This work provides a skyrmion-based spin-logic architecture that is fully compatible with racetrack memories.
European Long-Term Care Programs: Lessons for Community Living Assistance Services and Supports?
Nadash, Pamela; Doty, Pamela; Mahoney, Kevin J; von Schwanenflugel, Matthias
2012-01-01
Objective To uncover lessons from abroad for Community Living Assistance Services and Supports (CLASS), a federally run voluntary public long-term care (LTC) insurance program created under the Accountable Care Act of 2010. Data Sources Program administrators and policy researchers from Austria, England, France, Germany, and the Netherlands. Study Design Qualitative methods focused on key parameters of cash for care: how programs set benefit levels; project expenditures; control administrative costs; regulate the use of benefits; and protect workers. Data Collection/Extraction Methods Structured discussions were conducted during an international conference of LTC experts, followed by personal meetings and individual correspondence. Principal Findings Germany's self-financing mandate and tight targeting of benefits have resulted in a solvent program with low premiums. Black markets for care are likely in the absence of regulation; France addresses this via a unique system ensuing legal payment of workers. Conclusions Programs in the five countries studied have lessons, both positive and negative, relevant to CLASS design. PMID:22091672
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Tyne, William; Lofts, Stephen; Spurgeon, David J; Jurkschat, Kerstin; Svendsen, Claus
2013-08-01
A new toxicity test medium for Caenorhabditis elegans is presented. The test solution is designed to provide a better representation of natural soil pore water conditions than currently available test media. The medium has a composition that can readily be modified to allow for studies of the influences of a range of environmentally relevant parameters on nematode biology and toxicology. Tests conducted in the new medium confirmed that nematodes' reproduction was possible at a range of solution pH levels, offering the potential to conduct toxicity studies under a variety of conditions. A test to establish silver nanoparticle and dissolved silver nitrate toxicity, a study type not feasible in M9 or agar media due to precipitation and nanoparticle agglomeration, indicated lower silver nanoparticle (median effective concentration [EC50] of 6.5 mg Ag/L) than silver nitrate (EC50 0.28 mg Ag/L) toxicity. Characterization identified stable nanoparticle behavior in the new test medium. Copyright © 2013 SETAC.
On the dispersion characteristics of metamaterial transmission lines
NASA Astrophysics Data System (ADS)
Sisó, G.; Gil, M.; Bonache, J.; Martín, F.
2007-10-01
In this paper, a detailed analysis of the dispersion characteristics of metamaterial transmission lines, based on the lumped element T-circuit model is carried out. One of the main relevant characteristics of these artificial lines is the possibility to tailor the phase response. This leads to unique properties which are of interest for microwave circuit design, such as bandwidth enhancement or multiband (dual-band) operation, among others. However, it is shown in this paper that, in spite of the larger number of circuit parameters (as compared to conventional lines), there exist intrinsic limitations that may limit the performance of such metamaterial transmission lines under certain conditions. In this paper these limitations are pointed out from an accurate analysis of the phase response and the Foster's reactance theorem [Bell Syst. Tech. 3, 259 (1924)]. From the results of this paper, important guidelines for the design of microwave components based on metamaterial transmission lines are inferred. The fabrication and characterization of different metamaterial transmission lines will corroborate the theoretical results.
Shin, Jae-Won; Mooney, David J
2016-10-25
Extracellular matrix stiffness influences biological functions of some tumors. However, it remains unclear how cancer subtypes with different oncogenic mutations respond to matrix stiffness. In addition, the relevance of matrix stiffness to in vivo tumor growth kinetics and drug efficacy remains elusive. Here, we designed 3D hydrogels with physical parameters relevant to hematopoietic tissues and adapted them to a quantitative high-throughput screening format to facilitate mechanistic investigations into the role of matrix stiffness on myeloid leukemias. Matrix stiffness regulates proliferation of some acute myeloid leukemia types, including MLL-AF9 + MOLM-14 cells, in a biphasic manner by autocrine regulation, whereas it decreases that of chronic myeloid leukemia BCR-ABL + K-562 cells. Although Arg-Gly-Asp (RGD) integrin ligand and matrix softening confer resistance to a number of drugs, cells become sensitive to drugs against protein kinase B (PKB or AKT) and rapidly accelerated fibrosarcoma (RAF) proteins regardless of matrix stiffness when MLL-AF9 and BCR-ABL are overexpressed in K-562 and MOLM-14 cells, respectively. By adapting the same hydrogels to a xenograft model of extramedullary leukemias, we confirm the pathological relevance of matrix stiffness in growth kinetics and drug sensitivity against standard chemotherapy in vivo. The results thus demonstrate the importance of incorporating 3D mechanical cues into screening for anticancer drugs.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotte, F.P.; Doughty, C.; Birkholzer, J.
2010-11-01
The ability to reliably predict flow and transport in fractured porous rock is an essential condition for performance evaluation of geologic (underground) nuclear waste repositories. In this report, a suite of programs (TRIPOLY code) for calculating and analyzing flow and transport in two-dimensional fracture-matrix systems is used to model single-well injection-withdrawal (SWIW) tracer tests. The SWIW test, a tracer test using one well, is proposed as a useful means of collecting data for site characterization, as well as estimating parameters relevant to tracer diffusion and sorption. After some specific code adaptations, we numerically generated a complex fracture-matrix system for computationmore » of steady-state flow and tracer advection and dispersion in the fracture network, along with solute exchange processes between the fractures and the porous matrix. We then conducted simulations for a hypothetical but workable SWIW test design and completed parameter sensitivity studies on three physical parameters of the rock matrix - namely porosity, diffusion coefficient, and retardation coefficient - in order to investigate their impact on the fracture-matrix solute exchange process. Hydraulic fracturing, or hydrofracking, is also modeled in this study, in two different ways: (1) by increasing the hydraulic aperture for flow in existing fractures and (2) by adding a new set of fractures to the field. The results of all these different tests are analyzed by studying the population of matrix blocks, the tracer spatial distribution, and the breakthrough curves (BTCs) obtained, while performing mass-balance checks and being careful to avoid some numerical mistakes that could occur. This study clearly demonstrates the importance of matrix effects in the solute transport process, with the sensitivity studies illustrating the increased importance of the matrix in providing a retardation mechanism for radionuclides as matrix porosity, diffusion coefficient, or retardation coefficient increase. Interestingly, model results before and after hydrofracking are insensitive to adding more fractures, while slightly more sensitive to aperture increase, making SWIW tests a possible means of discriminating between these two potential hydrofracking effects. Finally, we investigate the possibility of inferring relevant information regarding the fracture-matrix system physical parameters from the BTCs obtained during SWIW testing.« less
Modelling of tunnelling processes and rock cutting tool wear with the particle finite element method
NASA Astrophysics Data System (ADS)
Carbonell, Josep Maria; Oñate, Eugenio; Suárez, Benjamín
2013-09-01
Underground construction involves all sort of challenges in analysis, design, project and execution phases. The dimension of tunnels and their structural requirements are growing, and so safety and security demands do. New engineering tools are needed to perform a safer planning and design. This work presents the advances in the particle finite element method (PFEM) for the modelling and the analysis of tunneling processes including the wear of the cutting tools. The PFEM has its foundation on the Lagrangian description of the motion of a continuum built from a set of particles with known physical properties. The method uses a remeshing process combined with the alpha-shape technique to detect the contacting surfaces and a finite element method for the mechanical computations. A contact procedure has been developed for the PFEM which is combined with a constitutive model for predicting the excavation front and the wear of cutting tools. The material parameters govern the coupling of frictional contact and wear between the interacting domains at the excavation front. The PFEM allows predicting several parameters which are relevant for estimating the performance of a tunnelling boring machine such as wear in the cutting tools, the pressure distribution on the face of the boring machine and the vibrations produced in the machinery and the adjacent soil/rock. The final aim is to help in the design of the excavating tools and in the planning of the tunnelling operations. The applications presented show that the PFEM is a promising technique for the analysis of tunnelling problems.
Correction of confounding bias in non-randomized studies by appropriate weighting.
Schmoor, Claudia; Gall, Christine; Stampf, Susanne; Graf, Erika
2011-03-01
In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mateo, B; Porcar-Seder, R; Solaz, J S; Dürsteler, J C
2010-07-01
This study demonstrates that appropriate measurement procedures can detect differences in head movement in a near reading task when using three different progressive addition lenses (PALs). The movements were measured using an anatomical reference system with a biomechanical rationale. This reference system was capable of representing rotations for comparing head flexion relative to trunk, head flexion relative to neck, head rotation relative to trunk and trunk flexion. The subject sample comprised 31 volunteers and three PAL designs with different viewing zones were selected. Significant differences were found between the lenses for three of the seven movement parameters examined. The differences occurred for both vertical and horizontal head movements and could be attributed to aspects of the PAL design. The measurement of the complete kinematic trunk-neck-head chain improved the number of differences that were found over those in previous studies. STATEMENT OF RELEVANCE: The study proposes a methodology based on a biomechanical rationale able to differentiate head-neck-trunk posture and movements caused by different progressive addition lens designs with minimum invasiveness. This methodology could also be applied to analyse the ergonomics of other devices that restrict the user's field of view, such as helmets, personal protective equipment or helmet-mounted displays for pilots. This analysis will allow designers to optimise designs offering higher comfort and performance.
A mainstream monitoring system for respiratory CO2 concentration and gasflow.
Yang, Jiachen; Chen, Bobo; Burk, Kyle; Wang, Haitao; Zhou, Jianxiong
2016-08-01
Continuous respiratory gas monitoring is an important tool for clinical monitoring. In particular, measurement of respiratory [Formula: see text] concentration and gasflow can reflect the status of a patient by providing parameters such as volume of carbon dioxide, end-tidal [Formula: see text] respiratory rate and alveolar deadspace. However, in the majority of previous work, [Formula: see text] concentration and gasflow have been studied separately. This study focuses on a mainstream system which simultaneously measures respiratory [Formula: see text] concentration and gasflow at the same location, allowing for volumetric capnography to be implemented. A non-dispersive infrared monitor is used to measure [Formula: see text] concentration and a differential pressure sensor is used to measure gasflow. In developing this new device, we designed a custom airway adapter which can be placed in line with the breathing circuit and accurately monitor relevant respiratory parameters. Because the airway adapter is used both for capnography and gasflow, our system reduces mechanical deadspace. The finite element method was used to design the airway adapter which can provide a strong differential pressure while reducing airway resistance. Statistical analysis using the coefficient of variation was performed to find the optimal driving voltage of the pressure transducer. Calibration between variations and flows was used to avoid pressure signal drift. We carried out targeted experiments using the proposed device and confirmed that the device can produce stable signals.
Zhou, Bin; Zhao, Bin
2014-01-01
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and in different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
Guittonny-Philippe, Anna; Masotti, Véronique; Höhener, Patrick; Boudenne, Jean-Luc; Viglione, Julien; Laffont-Schwob, Isabelle
2014-03-01
In the Mediterranean area, surface waters often have low discharge or renewal rates, hence metal contamination from industrialised catchments can have a high negative impact on the physico-chemical and biological water quality. In a context of climate and anthropological changes, it is necessary to provide an integrative approach for the prevention and control of metal pollution, in order to limit its impact on water resources, biodiversity, trophic network and human health. For this purpose, introduction of constructed wetlands (CWs) between natural aquatic ecosystems and industrialised zones or catchments is a promising strategy for eco-remediation. Analysis of the literature has shown that further research must be done to improve CW design, selection and management of wetland plant species and catchment organisation, in order to ensure the effectiveness of CWs in Mediterranean environments. Firstly, the parameters of basin design that have the greatest influence on metal removal processes must be identified, in order to better focus rhizospheric processes on specific purification objectives. We have summarised in a single diagram the relationships between the design parameters of a CW basin and the physico-chemical and biological processes of metal removal, on the basis of 21 mutually consistent papers. Secondly, in order to optimise the selection and distribution of helophytes in CWs, it is necessary to identify criteria of choice for the plant species that will best fit the remediation objectives and environmental and economic constraints. We have analysed the factors determining plant metal uptake efficiency in CWs on the basis of a qualitative meta-analysis of 13 studies with a view to determine whether the part played by metal uptake by plants is relevant in comparison with the other removal processes. Thirdly, we analysed the parameters to consider for establishing suitable management strategies for CWs and how they affect the whole CW design process. Finally, we propose monitoring and policy measures to facilitate the integration of CWs within Mediterranean industrialised catchments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Heynen, Miriam; Kay, Lise M.M.; Dominici, Claudia Yvette; Khan, Warda; Ng, Wendy W.S.; Jones, Lyndon
2011-01-01
Purpose To characterize various properties of a physiologically-relevant artificial tear solution (ATS) containing a range of tear film components within a complex salt solution, and to measure contact lens parameters and lipid deposition of a variety of contact lens materials after incubation in this ATS. Methods A complex ATS was developed that contains a range of salts, proteins, lipids, mucin, and other tear film constituents in tear-film relevant concentrations. This ATS was tested to confirm that its pH, osmolality, surface tension, and homogeneity are similar to human tears and remain so throughout the material incubation process, for up to 4 weeks. To confirm that silicone hydrogel and conventional hydrogel contact lens materials do not alter in physical characteristics beyond what is allowed by the International Organization for Standardization (ISO) 18369–2. The diameter, center thickness, and calculated base curve were measured for five different lens materials directly out of the blister pack, after a rinse in saline and then following a two week incubation in the modified ATS. To test the ATS and the effect of its composition on lipid deposition, two lens materials were incubated in the ATS and a modified version for several time points. Both ATS solutions contained trace amounts of carbon-14 cholesterol and phosphatidylcholine, such that deposition of these specific lipids could be quantified using standard methods. Results This ATS is a complex mixture that remains stable at physiologically relevant pH (7.3–7.6), osmolality (304–306 mmol/kg), surface tension (40–46 dynes/cm) and homogeneity over an incubation period of three weeks or more. The physical parameters of the lenses tested showed no changes beyond that allowed by the ISO guidelines. Incubations with the ATS found that balafilcon A lenses deposit significantly more cholesterol and phosphatidylcholine than omafilcon A lenses (p<0.05) and that removing lactoferrin and immunoglobulin G from the ATS can significantly decrease the mass of lipid deposited. Conclusions This paper describes a novel complex artificial tear solution specially designed for in-vial incubation of contact lens materials. This solution was stable and did not adversely affect the physical parameters of the soft contact lenses incubated within it and showed that lipid deposition was responsive to changes in ATS composition. PMID:22219635
NASA Astrophysics Data System (ADS)
Ounoughene, G.; LeBihan, O.; Debray, B.; Chivas-Joly, C.; Longuet, C.; Joubert, A.; Lopez-Cuesta, J.-M.; Le Coq, L.
2017-06-01
Considering the wide use and production of NMs since last two decades, these trendy nanomaterials (NMs) are expected to end up in thermal disposal and waste incineration plants (WIP). It seems relevant to assess the risks related to the thermal disposal and incineration of waste containing NMs (WCNMs). The objective of this work is to present a first approach to develop a preliminary methodology for risk management in order (1) to give insights on nanosafety of exposed operators and on potential environmental risks related to the incineration and thermal disposal of WCNMs, and (2) to eventually support decision-makers and incineration plant managers. Therefore, the main challenge is to find (a) key parameter(s) which would govern the decision related to risk management of NMs thermal disposal. On the one hand, we focused on the relevant literature studies about experimental works on incineration of NMs. On the other hand, we conducted an introductory discussion with a group of experts. The review of this literature highlights that the nano-object’s nanostructure destruction appears as a relevant indicator of the risks related to the NMs incineration. As a consequence, we defined a “temperature of nanostructure destruction” (TND) which would be the temperature from which the nanostructure will be destroyed. This parameter has been assumed to be a consistent indicator to develop a preliminary methodology. If the combustion chamber temperature is higher than the TND of the NM (or if they are close to each other), then the nanostructure will be destroyed and no risks related to NMs remain. If the TND of the NMs is higher than the combustion chamber temperature, then the nanostructure will not be destroyed and risks related to NMs have to be considered. As a result, five groups of NMs have been identified. WCNMs including carbonic NMs appear to be in good position to be destroyed safely in WIP. On the other hand, based on this criterion, there would be no available thermal disposal plants to safely manage WCNMs including CeO2 and ZrO2. Finally, a decision tree has been designed. TND is used as criteria to assess if a waste can be managed safely or not by a specific thermal disposal and which safety measures have to be taken.
Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2015-01-01
Blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR study. For a successful SVM model, the kernel parameters for SVM and feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they could affect each other. We designed and implemented genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to the BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, to optimize both SVM parameters and feature subset simultaneously with genetic algorithm is a better approach than other methods that treat the two problems separately. Analysis of our log BB model suggests that carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important role in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity could enhance the BBB penetration while all the others are negatively correlated with BBB penetration. PMID:26504797
Computational design of short pulse laser driven iron opacity experiments
Martin, M. E.; London, R. A.; Goluoglu, S.; ...
2017-02-23
Here, the resolution of current disagreements between solar parameters calculated from models and observations would benefit from the experimental validation of theoretical opacity models. Iron's complex ionic structure and large contribution to the opacity in the radiative zone of the sun make iron a good candidate for validation. Short pulse lasers can be used to heat buried layer targets to plasma conditions comparable to the radiative zone of the sun, and the frequency dependent opacity can be inferred from the target's measured x-ray emission. Target and laser parameters must be optimized to reach specific plasma conditions and meet x-ray emissionmore » requirements. The HYDRA radiation hydrodynamics code is used to investigate the effects of modifying laser irradiance and target dimensions on the plasma conditions, x-ray emission, and inferred opacity of iron and iron-magnesium buried layer targets. It was determined that plasma conditions are dominantly controlled by the laser energy and the tamper thickness. The accuracy of the inferred opacity is sensitive to tamper emission and optical depth effects. Experiments at conditions relevant to the radiative zone of the sun would investigate the validity of opacity theories important to resolving disagreements between solar parameters calculated from models and observations.« less
Computational design of short pulse laser driven iron opacity experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, M. E.; London, R. A.; Goluoglu, S.
Here, the resolution of current disagreements between solar parameters calculated from models and observations would benefit from the experimental validation of theoretical opacity models. Iron's complex ionic structure and large contribution to the opacity in the radiative zone of the sun make iron a good candidate for validation. Short pulse lasers can be used to heat buried layer targets to plasma conditions comparable to the radiative zone of the sun, and the frequency dependent opacity can be inferred from the target's measured x-ray emission. Target and laser parameters must be optimized to reach specific plasma conditions and meet x-ray emissionmore » requirements. The HYDRA radiation hydrodynamics code is used to investigate the effects of modifying laser irradiance and target dimensions on the plasma conditions, x-ray emission, and inferred opacity of iron and iron-magnesium buried layer targets. It was determined that plasma conditions are dominantly controlled by the laser energy and the tamper thickness. The accuracy of the inferred opacity is sensitive to tamper emission and optical depth effects. Experiments at conditions relevant to the radiative zone of the sun would investigate the validity of opacity theories important to resolving disagreements between solar parameters calculated from models and observations.« less
NASA Astrophysics Data System (ADS)
Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy
2017-10-01
The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy ( 1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (>=50% of energy in 150 µm spot). This allows ARC to achieve proton and light ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is yet unknown how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc. through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of theirmore » inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.« less
Nute, Jessica L; Jacobsen, Megan C; Stefan, Wolfgang; Wei, Wei; Cody, Dianna D
2018-04-01
A prototype QC phantom system and analysis process were developed to characterize the spectral capabilities of a fast kV-switching dual-energy computed tomography (DECT) scanner. This work addresses the current lack of quantitative oversight for this technology, with the goal of identifying relevant scan parameters and test metrics instrumental to the development of a dual-energy quality control (DEQC). A prototype elliptical phantom (effective diameter: 35 cm) was designed with multiple material inserts for DECT imaging. Inserts included tissue equivalent and material rods (including iodine and calcium at varying concentrations). The phantom was scanned on a fast kV-switching DECT system using 16 dual-energy acquisitions (CTDIvol range: 10.3-62 mGy) with varying pitch, rotation time, and tube current. The circular head phantom (22 cm diameter) was scanned using a similar protocol (12 acquisitions; CTDIvol range: 36.7-132.6 mGy). All acquisitions were reconstructed at 50, 70, 110, and 140 keV and using a water-iodine material basis pair. The images were evaluated for iodine quantification accuracy, stability of monoenergetic reconstruction CT number, noise, and positional constancy. Variance component analysis was used to identify technique parameters that drove deviations in test metrics. Variances were compared to thresholds derived from manufacturer tolerances to determine technique parameters that had a nominally significant effect on test metrics. Iodine quantification error was largely unaffected by any of the technique parameters investigated. Monoenergetic HU stability was found to be affected by mAs, with a threshold under which spectral separation was unsuccessful, diminishing the utility of DECT imaging. Noise was found to be affected by CTDIvol in the DEQC body phantom, and CTDIvol and mA in the DEQC head phantom. Positional constancy was found to be affected by mAs in the DEQC body phantom and mA in the DEQC head phantom. A streamlined scan protocol was developed to further investigate the effects of CTDIvol and rotation time while limiting data collection to the DEQC body phantom. Further data collection will be pursued to determine baseline values and statistically based failure thresholds for the validation of long-term DECT scanner performance. © 2018 American Association of Physicists in Medicine.
Urban Planning and Management Information Systems Analysis and Design Based on GIS
NASA Astrophysics Data System (ADS)
Xin, Wang
Based on the analysis of existing relevant systems on the basis of inadequate, after a detailed investigation and research, urban planning and management information system will be designed for three-tier structure system, under the LAN using C/S mode architecture. Related functions for the system designed in accordance with the requirements of the architecture design of the functional relationships between the modules. Analysis of the relevant interface and design, data storage solutions proposed. The design for small and medium urban planning information system provides a viable building program.
Absorption, distribution, metabolism, and excretion (ADME) parameters represent important connections between exposure to chemicals and the activation of molecular initiating events of Adverse Outcome Pathways (AOPs) in cellular, tissue, and organ level targets. ADME parameters u...
Using HEC-HMS: Application to Karkheh river basin
USDA-ARS?s Scientific Manuscript database
This paper aims to facilitate the use of HEC-HMS model using a systematic event-based technique for manual calibration of soil moisture accounting and snowmelt degree-day parameters. Manual calibration, which helps ensure the HEC-HMS parameter values are physically-relevant, is often a time-consumin...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Joshua M.
Manufacturing tasks that are deemed too hazardous for workers require the use of automation, robotics, and/or other remote handling tools. The associated hazards may be radiological or nonradiological, and based on the characteristics of the environment and processing, a design may necessitate robotic labor, human labor, or both. There are also other factors such as cost, ergonomics, maintenance, and efficiency that also effect task allocation and other design choices. Handling the tradeoffs of these factors can be complex, and lack of experience can be an issue when trying to determine if and what feasible automation/robotics options exist. To address thismore » problem, we utilize common engineering design approaches adapted more for manufacturing system design in hazardous environments. We limit our scope to the conceptual and embodiment design stages, specifically a computational algorithm for concept generation and early design evaluation. In regard to concept generation, we first develop the functional model or function structure for the process, using the common 'verb-noun' format for describing function. A common language or functional basis for manufacturing was developed and utilized to formalize function descriptions and guide rules for function decomposition. Potential components for embodiment are also grouped in terms of this functional language and are stored in a database. The properties of each component are given as quantitative and qualitative criteria. Operators are also rated for task-relevant criteria which are used to address task compatibility. Through the gathering of process requirements/constraints, construction of the component database, and development of the manufacturing basis and rule set, design knowledge is stored and available for computer use. Thus, once the higher level process functions are defined, the computer can automate the synthesis of new design concepts through alternating steps of embodiment and function structure updates/decomposition. In the process, criteria guide function allocation of components/operators and help ensure compatibility and feasibility. Through multiple function assignment options and varied function structures, multiple design concepts are created. All of the generated designs are then evaluated based on a number of relevant evaluation criteria: cost, dose, ergonomics, hazards, efficiency, etc. These criteria are computed using physical properties/parameters of each system based on the qualities an engineer would use to make evaluations. Nuclear processes such as oxide conversion and electrorefining are utilized to aid algorithm development and provide test cases for the completed program. Through our approach, we capture design knowledge related to manufacturing and other operations in hazardous environments to enable a computational program to automatically generate and evaluate system design concepts.« less
Supporting pre-service science teachers in developing culturally relevant pedagogy
NASA Astrophysics Data System (ADS)
Krajeski, Stephen
This study employed a case study methodology to investigate a near-authentic intervention program designed to support the development of culturally relevant pedagogy and its impact on pre-service science teachers' notions of culturally relevant pedagogy. The unit of analysis for this study was the discourse of pre-service science teachers enrolled in a second semester science methods course, which was the site of the intervention program. Data for this study was collected from videos of classroom observations, audio recordings of personal interviews, and artifacts created by the pre-service science teachers during the class. To determine how effective science teacher certification programs are at supporting the development of culturally relevant pedagogy without an immersion aspect, two research questions were investigated: 1) How do pre-service science teachers view and design pedagogy while participating in an intervention designed to support the development of culturally relevant pedagogy? 2) How do pre-service science teachers view the importance of culturally relevant pedagogy for supporting student learning? How do their practices in the field change these initial views?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada Rodas, Ernesto A.; Neu, Richard W.
A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material comprised of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys at a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformationmore » on each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. In conclusion, a sensitivity study of the material parameters is done to explore the effect on the deformation due to changes in the material parameters relevant to the microstructure.« less
Estrada Rodas, Ernesto A.; Neu, Richard W.
2017-09-11
A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material comprised of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys at a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformationmore » on each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. In conclusion, a sensitivity study of the material parameters is done to explore the effect on the deformation due to changes in the material parameters relevant to the microstructure.« less
Doret, Muriel; Spilka, Jiří; Chudáček, Václav; Gonçalves, Paulo; Abry, Patrice
2015-01-01
Background The fetal heart rate (FHR) is commonly monitored during labor to detect early fetal acidosis. FHR variability is traditionally investigated using Fourier transform, often with adult predefined frequency band powers and the corresponding LF/HF ratio. However, fetal conditions differ from adults and modify spectrum repartition along frequencies. Aims This study questions the arbitrariness definition and relevance of the frequency band splitting procedure, and thus of the calculation of the underlying LF/HF ratio, as efficient tools for characterizing intrapartum FHR variability. Study Design The last 30 minutes before delivery of the intrapartum FHR were analyzed. Subjects Case-control study. A total of 45 singletons divided into two groups based on umbilical cord arterial pH: the Index group with pH ≤ 7.05 (n = 15) and Control group with pH > 7.05 (n = 30). Outcome Measures Frequency band-based LF/HF ratio and Hurst parameter. Results This study shows that the intrapartum FHR is characterized by fractal temporal dynamics and promotes the Hurst parameter as a potential marker of fetal acidosis. This parameter preserves the intuition of a power frequency balance, while avoiding the frequency band splitting procedure and thus the arbitrary choice of a frequency separating bands. The study also shows that extending the frequency range covered by the adult-based bands to higher and lower frequencies permits the Hurst parameter to achieve better performance for identifying fetal acidosis. Conclusions The Hurst parameter provides a robust and versatile tool for quantifying FHR variability, yields better acidosis detection performance compared to the LF/HF ratio, and avoids arbitrariness in spectral band splitting and definitions. PMID:26322889
Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects
NASA Astrophysics Data System (ADS)
Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.
2013-08-01
In atmospheric modelling applications the aerosol particle size distribution is commonly represented by modal approach, in which particles in different size ranges are described with log-normal modes within predetermined size ranges. Such method includes numerical reallocation of particles from a mode to another for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, aerosol-cloud interaction coefficient and light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines, and a high resolution sectional model that was considered as a reference model. We analysed the relative differences of the parameters in different experiments that were designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the following numerical remapping of the distribution by reallocation, leads on average to underestimation of cloud droplet number concentration (up to 100%) and overestimation of light extinction (up to 20%). The analysis of aerosol first indirect effect is more complicated as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause around average 10% overestimation of the ACI parameter. Thus it is shown that the reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.
NASA Astrophysics Data System (ADS)
Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane
2018-04-01
The paper presents a new approach to assess the effecctiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considering a widely used rockfall barrier type and was developed from on FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed concerning the barrier capability either of stopping the block or in reducing its kinetic energy. The outcome of the parameters range on the meta-model accuracy has been also investigated. The results of the study reveal that the meta-models are effective in reproducing with accuracy the response of the barrier to any impact conditions, providing a formidable tool to support the design of these structures. Furthermore, allowing to accommodate the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be successfully used in combination with rockfall trajectory simulation tools to improve rockfall quantitative hazard assessment and optimise rockfall mitigation strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellesen, C.; Skiba, M., E-mail: mateusz.skiba@physics.uu.se; Dzysiuk, N.
2014-11-15
The fuel ion ratio n{sub t}/n{sub d} is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., n{sub t}/n{sub d} = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands thatmore » a DT spectrometer has to fulfill to be able to determine n{sub t}/n{sub d} with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.« less
Section 4. The GIS Weasel User's Manual
Viger, Roland J.; Leavesley, George H.
2007-01-01
INTRODUCTION The GIS Weasel was designed to aid in the preparation of spatial information for input to lumped and distributed parameter hydrologic or other environmental models. The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to a user's model and to generate parameters from those maps. The operation of the GIS Weasel does not require the user to be a GIS expert, only that the user have an understanding of the spatial information requirements of the environmental simulation model being used. The GIS Weasel software system uses a GIS-based graphical user interface (GUI), the C programming language, and external scripting languages. The software will run on any computing platform where ArcInfo Workstation (version 8.0.2 or later) and the GRID extension are accessible. The user controls the processing of the GIS Weasel by interacting with menus, maps, and tables. The purpose of this document is to describe the operation of the software. This document is not intended to describe the usage of this software in support of any particular environmental simulation model. Such guides are published separately.
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
Smart Contrast Agents for Magnetic Resonance Imaging.
Bonnet, Célia S; Tóth, Éva
2016-01-01
By visualizing bioactive molecules or biological parameters in vivo, molecular imaging is searching for information at the molecular level in living organisms. In addition to contributing to earlier and more personalized diagnosis in medicine, it also helps understand and rationalize the molecular factors underlying physiological and pathological processes. In magnetic resonance imaging (MRI), complexes of paramagnetic metal ions, mostly lanthanides, are commonly used to enhance the intrinsic image contrast. They rely either on the relaxation effect of these metal chelates (T(1) agents), or on the phenomenon of paramagnetic chemical exchange saturation transfer (PARACEST agents). In both cases, responsive molecular magnetic resonance imaging probes can be designed to report on various biomarkers of biological interest. In this context, we review recent work in the literature and from our group on responsive T(1) and PARACEST MRI agents for the detection of biogenic metal ions (such as calcium or zinc), enzymatic activities, or neurotransmitter release. These examples illustrate the general strategies that can be applied to create molecular imaging agents with an MRI detectable response to biologically relevant parameters.
NASA Technical Reports Server (NTRS)
Park, Brooke Anderson; Wright, Henry
2012-01-01
PatCon code was developed to help mission designers run trade studies on launch and arrival times for any given planet. Initially developed in Fortran, the required inputs included launch date, arrival date, and other orbital parameters of the launch planet and arrival planets at the given dates. These parameters include the position of the planets, the eccentricity, semi-major axes, argument of periapsis, ascending node, and inclination of the planets. With these inputs, a patched conic approximation is used to determine the trajectory. The patched conic approximation divides the planetary mission into three parts: (1) the departure phase, in which the two relevant bodies are Earth and the spacecraft, and where the trajectory is a departure hyperbola with Earth at the focus; (2) the cruise phase, in which the two bodies are the Sun and the spacecraft, and where the trajectory is a transfer ellipse with the Sun at the focus; and (3) the arrival phase, in which the two bodies are the target planet and the spacecraft, where the trajectory is an arrival hyperbola with the planet as the focus.
Veloso-Durán, Ana; Vazquez-Salceda, Carmen M.; López-Jiménez, Julian; Veloso-Durán, Margarita
2014-01-01
Objectives: To asses whether dental eruption order can play a role in the early diagnosis of crossed laterality. Study Design: Dental eruption pattern along with eye, ear, hand and foot lateralism were examined on 131 children between 6 to 8 years old from public schools from a multietnic population area of Barcelona city. Statistic methods (Statgraphics Plus 5.1 program) were used to evaluate data recollected. Results: Only foot and dentition lateralities behave as independent variables regarding hand laterality. So dental eruption laterality (along with the foot one) would be one of the parameters more related to hand laterality given that dentition variable relationship is greater that the foot one. This suggests that tooth eruption could be more clinically relevant. Crossed laterality hand-foot is significantly more predominant in men (13%) than in women (1,6%). Meanwhile, the relationship between hand and dentition didn’t show any influence of sex. Conclusions: Dental eruption order, can be used as a good parameter in the determination of the patient’s laterality. Key words:Dentition, dental eruption, motor laterality, crossed laterality. PMID:24608220
ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation
NASA Technical Reports Server (NTRS)
Richardson, A. O.
1996-01-01
This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILLAD performed best when queries were short (one-to-three keywords), and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman Filter for estimating motion parameters of a three dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values represendng X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated comprising the six deviations and their time rate of change. Values for the state transiton matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initilizing the error covariance matrix was pointed out.
Computational study on the behaviors of granular materials under mechanical cycling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaoliang; Ye, Minyou; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn
2015-11-07
Considering that fusion pebble beds are probably subjected to the cyclic compression excitation in their future applications, we presented a computational study to report the effect of mechanical cycling on the behaviors of granular matter. The correctness of our numerical experiments was confirmed by a comparison with the effective medium theory. Under the cyclic loads, the fast granular compaction was observed to evolve in a stretched exponential law. Besides, the increasing stiffening in packing structure, especially the decreasing moduli pressure dependence due to granular consolidation, was also observed. For the force chains inside the pebble beds, both the internal forcemore » distribution and the spatial distribution of force chains would become increasingly uniform as the external force perturbation proceeded and therefore produced the stress relief on grains. In this case, the originally proposed 3-parameter Mueth function was found to fail to describe the internal force distribution. Thereby, its improved functional form with 4 parameters was proposed here and proved to better fit the data. These findings will provide more detailed information on the pebble beds for the relevant fusion design and analysis.« less
NASA Astrophysics Data System (ADS)
Rashidi, A.; Nami, M.; Monavarian, M.; Aragon, A.; DaVico, K.; Ayoub, F.; Mishkat-Ul-Masabih, S.; Rishinaramangalam, A.; Feezell, D.
2017-07-01
This work describes a small-signal microwave method for determining the differential carrier lifetime and transport effects in electrically injected InGaN/GaN light-emitting diodes (LEDs). By considering the carrier diffusion, capture, thermionic escape, and recombination, the rate equations are used to derive an equivalent small-signal electrical circuit for the LEDs, from which expressions for the input impedance and modulation response are obtained. The expressions are simultaneously fit to the experimental data for the input impedance and modulation response for nonpolar InGaN/GaN micro-LEDs on free-standing GaN substrates. The fittings are used to extract the transport related circuit parameters and differential carrier lifetimes. The dependence of the parameters on the device diameter and current density is reported. We also derive approximations for the modulation response under low and high injection levels and show that the transport of carriers affects the modulation response of the device, especially at low injection levels. The methods presented are relevant to the design of high-speed LEDs for visible-light communication.
Effect of pilot-scale aseptic processing on tomato soup quality parameters.
Colle, Ines J P; Andrys, Anna; Grundelius, Andrea; Lemmens, Lien; Löfgren, Anders; Buggenhout, Sandy Van; Loey, Ann; Hendrickx, Marc Van
2011-01-01
Tomatoes are often processed into shelf-stable products. However, the different processing steps might have an impact on the product quality. In this study, a model tomato soup was prepared and the impact of pilot-scale aseptic processing, including heat treatment and high-pressure homogenization, on some selected quality parameters was evaluated. The vitamin C content, the lycopene isomer content, and the lycopene bioaccessibility were considered as health-promoting attributes. As a structural characteristic, the viscosity of the tomato soup was investigated. A tomato soup without oil as well as a tomato soup containing 5% olive oil were evaluated. Thermal processing had a negative effect on the vitamin C content, while lycopene degradation was limited. For both compounds, high-pressure homogenization caused additional losses. High-pressure homogenization also resulted in a higher viscosity that was accompanied by a decrease in lycopene bioaccessibility. The presence of lipids clearly enhanced the lycopene isomerization susceptibility and improved the bioaccessibility. The results obtained in this study are of relevance for product formulation and process design of tomato-based food products. © 2011 Institute of Food Technologists®
Bandgaps and directional properties of two-dimensional square beam-like zigzag lattices
NASA Astrophysics Data System (ADS)
Wang, Yan-Feng; Wang, Yue-Sheng; Zhang, Chuanzeng
2014-12-01
In this paper we propose four kinds of two-dimensional square beam-like zigzag lattice structures and study their bandgaps and directional propagation of elastic waves. The band structures are calculated by using the finite element method. Both the in-plane and out-of-plane waves are investigated simultaneously via the three-dimensional Euler beam elements. The mechanism of the bandgap generation is analyzed by studying the vibration modes at the bandgap edges. The effects of the geometry parameters of the xy- and z-zigzag lattices on the bandgaps are investigated and discussed. Multiple complete bandgaps are found owing to the separation of the degeneracy by introducing bending arms. The bandgaps are sensitive to the geometry parameters of the periodic systems. The deformed displacement fields of the harmonic responses of a finite lattice structure subjected to harmonic loads at different positions are illustrated to show the directional wave propagation. An extension of the proposed concept to the hexagonal lattices is also presented. The research work in this paper is relevant to the practical design of cellular structures with enhanced vibro-acoustics performance.
Relevant parameter space and stability of spherical tokamaks with a plasma center column
NASA Astrophysics Data System (ADS)
Lampugnani, L. G.; Garcia-Martinez, P. L.; Farengo, R.
2017-02-01
A spherical tokamak (ST) with a plasma center column (PCC) can be formed inside a simply connected chamber via driven magnetic relaxation. From a practical perspective, the ST-PCC could overcome many difficulties associated with the material center column of the standard ST reactor design. Besides, the ST-PCC concept can be regarded as an advanced helicity injected device that would enable novel experiments on the key physics of magnetic relaxation and reconnection. This is because the concept includes not only a PCC but also a coaxial helicity injector (CHI). This combination implies an improved level of flexibility in the helicity injection scheme required for the formation and sustainment phases. In this work, the parameter space determining the magnetic structure of the ST-PCC equilibria is studied under the assumption of fully relaxed plasmas. In particular, it is shown that the effect of the external bias field of the PCC and the CHI essentially depends on a single parameter that measures the relative amount of flux of these two entities. The effect of plasma elongation on the safety factor profile and the stability to the tilt mode are also analyzed. In the first part of this work, the stability of the system is explained in terms of the minimum energy principle, and relevant stability maps are constructed. While this picture provides an adequate insight into the underlying physics of the instability, it does not include the stabilizing effect of line-tying at the electrodes. In the second part, a dynamical stability analysis of the ST-PCC configurations, including the effect of line-tying, is performed by numerically solving the magnetohydrodynamic equations. A significant stability enhancement is observed when the PCC contains more than the 70% of the total external bias flux, and the elongation is not higher than two.
Identification and risk estimation of movement strategies during cutting maneuvers.
David, Sina; Komnik, Igor; Peters, Markus; Funken, Johannes; Potthast, Wolfgang
2017-12-01
Approximately 70% of anterior cruciate ligament (ACL) injuries occur in non-contact situations during cutting and landing maneuvers. Parameters such as footstrike patterns and trunk orientation were found to influence ACL relevant knee loading, however, the relationship between the whole body movement and injury risk is debated. This study identifies whole body movement strategies that increase injury risk, and provides training recommendations to reduce this risk or enable a save return to sports after injury. Experimental cross-sectional study design. Three dimensional movement analysis was carried out to investigate 50 participants performing anticipated 90° cutting maneuvers. To identify and characterize movement strategies, footstrike pattern, knee valgus moment, knee internal rotation moment, angle of attack, shoulder and pelvis axis were analyzed using statistical parametric mapping. Three different movement strategies were identified. One strategy included rearfoot striking in combination with a relatively upright body position which generated higher knee joint loads than the second strategy, forefoot striking in combination with more backwards leaning and pre-rotation of the trunk towards the new movement direction. A third strategy combined forefoot striking with less preorientation which increased the ACL relevant knee joint load compared to the second strategy. The identified movement strategies clearly pre-determine the injury risk during non-contact situations with the third strategy as the most unfavorable one. Compared to the study of isolated parameters, the analysis of the whole body movement allowed for detailed separation of more risky from less risky cutting strategies. These results give practical recommendations for the prevention of ACL injury. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fan, Zhichao; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2018-02-01
Mechanically-guided 3D assembly based on controlled, compressive buckling represents a promising, emerging approach for forming complex 3D mesostructures in advanced materials. Due to the versatile applicability to a broad set of material types (including device-grade single-crystal silicon) over length scales from nanometers to centimeters, a wide range of novel applications have been demonstrated in soft electronic systems, interactive bio-interfaces as well as tunable electromagnetic devices. Previously reported 3D designs relied mainly on finite element analyses (FEA) as a guide, but the massive numerical simulations and computational efforts necessary to obtain the assembly parameters for a targeted 3D geometry prevent rapid exploration of engineering options. A systematic understanding of the relationship between a 3D shape and the associated parameters for assembly requires the development of a general theory for the postbuckling process. In this paper, a double perturbation method is established for the postbuckling analyses of planar curved beams, of direct relevance to the assembly of ribbon-shaped 3D mesostructures. By introducing two perturbation parameters related to the initial configuration and the deformation, the highly nonlinear governing equations can be transformed into a series of solvable, linear equations that give analytic solutions to the displacements and curvatures during postbuckling. Systematic analyses of postbuckling in three representative ribbon shapes (sinusoidal, polynomial and arc configurations) illustrate the validity of theoretical method, through comparisons to the results of experiment and FEA. These results shed light on the relationship between the important deformation quantities (e.g., mode ratio and maximum strain) and the assembly parameters (e.g., initial configuration and the applied strain). This double perturbation method provides an attractive route to the inverse design of ribbon-shaped 3D geometries, as demonstrated in a class of helical mesostructures.
NASA Astrophysics Data System (ADS)
Francisco, Arthur; Blondel, Cécile; Brunetière, Noël; Ramdarshan, Anusha; Merceron, Gildas
2018-03-01
Tooth wear and, more specifically, dental microwear texture is a dietary proxy that has been used for years in vertebrate paleoecology and ecology. DMTA, dental microwear texture analysis, relies on a few parameters related to the surface complexity, anisotropy and heterogeneity of the enamel facets at the micrometric scale. Working with few but physically meaningful parameters helps in comparing published results and in defining levels for classification purposes. Other dental microwear approaches are based on ISO parameters and coupled with statistical tests to find the more relevant ones. The present study roughly utilizes most of the aforementioned parameters in their more or less modified form. But more than parameters, we here propose a new approach: instead of a single parameter characterizing the whole surface, we sample the surface and thus generate 9 derived parameters in order to broaden the parameter set. The identification of the most discriminative parameters is performed with an automated procedure which is an extended and refined version of the workflows encountered in some studies. The procedure in its initial form includes the most common tools, like the ANOVA and the correlation analysis, along with the required mathematical tests. The discrimination results show that a simplified form of the procedure is able to more efficiently identify the desired number of discriminative parameters. Also highlighted are some trends like the relevance of working with both height and spatial parameters, as well as the potential benefits of dimensionless surfaces. On a set of 45 surfaces issued from 45 specimens of three modern ruminants with differences in feeding preferences (grazing, leaf-browsing and fruit-eating), it is clearly shown that the level of wear discrimination is improved with the new methodology compared to the other ones.
Modular and Adaptive Control of Sound Processing
NASA Astrophysics Data System (ADS)
van Nort, Douglas
This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflect a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics are considered in the low-level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrate this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.
NASA Technical Reports Server (NTRS)
Van Dyke, Michael B.
2014-01-01
During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles Equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of the PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement to deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal test vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
Clinical tooth preparations and associated measuring methods: a systematic review.
Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J
2015-03-01
The geometries of tooth preparations are important features that aid in the retention and resistance of cemented complete crowns. The clinically relevant values and the methods used to measure these are not clear. The purpose of this systematic review was to retrieve, organize, and critically appraise studies measuring clinical tooth preparation parameters, specifically the methodology used to measure the preparation geometry. A database search was performed in Scopus, PubMed, and ScienceDirect with an additional hand search on December 5, 2013. The articles were screened for inclusion and exclusion criteria and information regarding the total occlusal convergence (TOC) angle, margin design, and associated measuring methods were extracted. The values and associated measuring methods were tabulated. A total of 1006 publications were initially retrieved. After removing duplicates and filtering by using exclusion and inclusion criteria, 983 articles were excluded. Twenty-three articles reported clinical tooth preparation values. Twenty articles reported the TOC, 4 articles reported margin designs, 4 articles reported margin angles, and 3 articles reported the abutment height of preparations. A variety of methods were used to measure these parameters. TOC values seem to be the most important preparation parameter. Recommended TOC values have increased over the past 4 decades from an unachievable 2- to 5-degree taper to a more realistic 10 to 22 degrees. Recommended values are more likely to be achieved under experimental conditions if crown preparations are performed outside of the mouth. We recommend that a standardized measurement method based on the cross sections of crown preparations and standardized reporting be developed for future studies analyzing preparation geometry. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Del Amo, Eva M; Urtti, Arto
2015-08-01
Intravitreal administration is the method of choice in drug delivery to the retina and/or choroid. Rabbit is the most commonly used animal species in intravitreal pharmacokinetics, but it has been criticized as being a poor model of human eye. The critique is based on some anatomical differences, properties of the vitreous humor, and observed differences in drug concentrations in the anterior chamber after intravitreal injections. We have systematically analyzed all published information on intravitreal pharmacokinetics in the rabbit and human eye. The analysis revealed major problems in the design of the pharmacokinetic studies. In this review we provide advice for study design. Overall, the pharmacokinetic parameters (clearance, volume of distribution, half-life) in the human and rabbit eye have good correlation and comparable absolute values. Therefore, reliable rabbit-to-man translation of intravitreal pharmacokinetics should be feasible. The relevant anatomical and physiological parameters in rabbit and man show only small differences. Furthermore, the claimed discrepancy between drug concentrations in the human and rabbit aqueous humor is not supported by the data analysis. Based on the available and properly conducted pharmacokinetic studies, the differences in the vitreous structure in rabbits and human patients do not lead to significant pharmacokinetic differences. This review is the first step towards inter-species translation of intravitreal pharmacokinetics. More information is still needed to dissect the roles of drug delivery systems, disease states, age and ocular manipulation on the intravitreal pharmacokinetics in rabbit and man. Anyway, the published data and the derived pharmacokinetic parameters indicate that the rabbit is a useful animal model in intravitreal pharmacokinetics. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Human Fear Conditioning and Extinction in Neuroimaging: A Systematic Review
Sehlmeyer, Christina; Schöning, Sonja; Zwitserlood, Pienie; Pfleiderer, Bettina; Kircher, Tilo; Arolt, Volker; Konrad, Carsten
2009-01-01
Fear conditioning and extinction are basic forms of associative learning that have gained considerable clinical relevance in enhancing our understanding of anxiety disorders and facilitating their treatment. Modern neuroimaging techniques have significantly aided the identification of anatomical structures and networks involved in fear conditioning. On closer inspection, there is considerable variation in methodology and results between studies. This systematic review provides an overview of the current neuroimaging literature on fear conditioning and extinction on healthy subjects, taking into account methodological issues such as the conditioning paradigm. A Pubmed search, as of December 2008, was performed and supplemented by manual searches of bibliographies of key articles. Two independent reviewers made the final study selection and data extraction. A total of 46 studies on cued fear conditioning and/or extinction on healthy volunteers using positron emission tomography or functional magnetic resonance imaging were reviewed. The influence of specific experimental factors, such as contingency and timing parameters, assessment of conditioned responses, and characteristics of conditioned and unconditioned stimuli, on cerebral activation patterns was examined. Results were summarized descriptively. A network consisting of fear-related brain areas, such as amygdala, insula, and anterior cingulate cortex, is activated independently of design parameters. However, some neuroimaging studies do not report these findings in the presence of methodological heterogeneities. Furthermore, other brain areas are differentially activated, depending on specific design parameters. These include stronger hippocampal activation in trace conditioning and tactile stimulation. Furthermore, tactile unconditioned stimuli enhance activation of pain related, motor, and somatosensory areas. Differences concerning experimental factors may partly explain the variance between neuroimaging investigations on human fear conditioning and extinction and should, therefore, be taken into serious consideration in the planning and the interpretation of research projects. PMID:19517024
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himmerkus, Felix; Rittmeyer, Cornelia
2012-07-01
The data management system KADABRA was designed according to the purposes of the Cen-tral Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on IBM system Z mainframe. The main function of the system is the data management of all processes related to treatment, transfer and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interimmore » products and waste products as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non destructive assay systems, that describe the chemical and radiological properties of residues, production batches, interim products as well as final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual sta-tus of the material as well as radiological data and storage position can be gained immediately on request. A variety of programs accessed to the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the recent requirements of the organization. (authors)« less
NASA Astrophysics Data System (ADS)
Sahagian, D.; Prentice, C.
2004-12-01
A great deal of time, effort and resources have been expended on global change research to date, but dissemination and visualization of the key pertinent data sets has been problematical. Toward that end, we are constructing an Earth System Atlas which will serve as a single compendium describing the state of the art in our understanding of the Earth system and how it has responded to and is likely to respond to natural and anthropogenic perturbations. The Atlas is an interactive web-based system of data bases and data manipulation tools and so is much more than a collection of pre-made maps posted on the web. It represents a tool for assembling, manipulating, and displaying specific data as selected and customized by the user. Maps are created "on the fly" according to user-specified instructions. The information contained in the Atlas represents the growing body of data assembled by the broader Earth system research community, and can be displayed in the form of maps and time series of the various relevant parameters that drive and are driven by changes in the Earth system at various time scales. The Atlas is designed to display the information assembled by the global change research community in the form of maps and time series of all the relevant parameters that drive or are driven by changes in the Earth System at various time scales. This will serve to provide existing data to the community, but also will help to highlight data gaps that may hinder our understanding of critical components of the Earth system. This new approach to handling Earth system data is unique in several ways. First and foremost, data must be peer-reviewed. Further, it is designed to draw on the expertise and products of extensive international research networks rather than on a limited number of projects or institutions. It provides explanatory explanations targeted to the user's needs, and the display of maps and time series can be customize by the user. In general, the Atlas is designed provide the research community with a new opportunity for data observation and manipulation, enabling new scientific discoveries in the coming years. An initial prototype of the Atlas has been developed and can be manipulated in real time.
A design methodology for nonlinear systems containing parameter uncertainty
NASA Technical Reports Server (NTRS)
Young, G. E.; Auslander, D. M.
1983-01-01
In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
FUB at TREC 2008 Relevance Feedback Track: Extending Rocchio with Distributional Term Analysis
2008-11-01
starting point is the improved version [ Salton and Buckley 1990] of the original Rocchio’s formula [Rocchio 1971]: newQ = α ⋅ origQ + β R r r∈R ∑ − γR...earlier studies about the low effect of the main relevance feedback parameters on retrieval performance (e.g., Salton and Buckley 1990), while they seem...Relevance feedback in information retrieval. In The SMART retrieval system - experiments in automatic document processing, Salton , G., Ed., Prentice Hall
Analysis of Design Parameters Effects on Vibration Characteristics of Fluidlastic Isolators
NASA Astrophysics Data System (ADS)
Deng, Jing-hui; Cheng, Qi-you
2017-07-01
The control of vibration in helicopters which consists of reducing vibration levels below the acceptable limit is one of the key problems. The fluidlastic isolators become more and more widely used because the fluids are non-toxic, non-corrosive, nonflammable, and compatible with most elastomers and adhesives. In the field of the fluidlastic isolators design, the selection of design parameters is very important to obtain efficient vibration-suppressed. Aiming at getting the effect of design parameters on the property of fluidlastic isolator, a dynamic equation is set up based on the theory of dynamics. And the dynamic analysis is carried out. The influences of design parameters on the property of fluidlastic isolator are calculated. Dynamic analysis results have shown that fluidlastic isolator can reduce the vibration effectively. Analysis results also showed that the design parameters such as the fluid density, viscosity coefficient, stiffness (K1 and K2) and loss coefficient have obvious influence on the performance of isolator. The efficient vibration-suppressed can be obtained by the design optimization of parameters.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
Oddone, Francesco; Lucenteforte, Ersilia; Michelessi, Manuele; Rizzo, Stanislao; Donati, Simone; Parravano, Mariacristina; Virgili, Gianni
2016-05-01
Macular parameters have been proposed as an alternative to retinal nerve fiber layer (RNFL) parameters to diagnose glaucoma. Comparing the diagnostic accuracy of macular parameters, specifically the ganglion cell complex (GCC) and ganglion cell inner plexiform layer (GCIPL), with the accuracy of RNFL parameters for detecting manifest glaucoma is important to guide clinical practice and future research. Studies using spectral domain optical coherence tomography (SD OCT) and reporting macular parameters were included if they allowed the extraction of accuracy data for diagnosing manifest glaucoma, as confirmed with automated perimetry or a clinician's optic nerve head (ONH) assessment. Cross-sectional cohort studies and case-control studies were included. The QUADAS 2 tool was used to assess methodological quality. Only direct comparisons of macular versus RNFL parameters (i.e., in the same study) were conducted. Summary sensitivity and specificity of each macular or RNFL parameter were reported, and the relative diagnostic odds ratio (DOR) was calculated in hierarchical summary receiver operating characteristic (HSROC) models to compare them. Thirty-four studies investigated macular parameters using RTVue OCT (Optovue Inc., Fremont, CA) (19 studies, 3094 subjects), Cirrus OCT (Carl Zeiss Meditec Inc., Dublin, CA) (14 studies, 2164 subjects), or 3D Topcon OCT (Topcon, Inc., Tokyo, Japan) (4 studies, 522 subjects). Thirty-two of these studies allowed comparisons between macular and RNFL parameters. Studies generally reported sensitivities at fixed specificities, more commonly 0.90 or 0.95, with sensitivities of most best-performing parameters between 0.65 and 0.75. For all OCT devices, compared with RNFL parameters, macular parameters were similarly or slightly less accurate for detecting glaucoma at the highest reported specificity, which was confirmed in analyses at the lowest specificity. Included studies suffered from limitations, especially the case-control study design, which is known to overestimate accuracy. However, this flaw is less relevant as a source of bias in direct comparisons conducted within studies. With the use of OCT, RNFL parameters are still preferable to macular parameters for diagnosing manifest glaucoma, but the differences are small. Because of high heterogeneity, direct comparative or randomized studies of OCT devices or OCT parameters and diagnostic strategies are essential. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lawrence, K. Deepak; Ramamoorthy, B.
2016-03-01
Cylinder bores of automotive engines are 'engineered' surfaces that are processed using multi-stage honing process to generate multiple layers of micro geometry for meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant for the good tribological performance of the engine. Selection of the process parameters in three stages of honing to obtain multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of the process planning and is often posed as a challenging task for the process engineers. This paper presents a strategy by combining the robust process design and gray-relational analysis to evolve the operating levels of honing process parameters in rough, finish and plateau honing stages targeting to meet multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages namely rough, finish and plateau honing on cast iron cylinder liners by varying four honing process parameters such as rotational speed, oscillatory speed, pressure and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2) coupled with mean roughness depth (Rz, DIN/ISO) and honing angle were measured and identified as the surface quality performance targets to be achieved. The experimental results have shown that the proposed approach is effective to generate cylinder liner surface that would simultaneously meet the explicit surface topographic specifications currently practiced by the industry.
Dynamic Stability Analysis of Blunt Body Entry Vehicles Using Time-Lagged Aftbody Pitching Moments
NASA Technical Reports Server (NTRS)
Kazemba, Cole D.; Braun, Robert D.; Schoenenberger, Mark; Clark, Ian G.
2013-01-01
This analysis defines an analytic model for the pitching motion of blunt bodies during atmospheric entry. The proposed model is independent of the pitch damping sum coefficient present in the standard formulation of the equations of motion describing pitch oscillations of a decelerating blunt body, instead using the principle of a time-lagged aftbody moment as the forcing function for oscillation divergence. Four parameters, all with intuitive physical relevance, are introduced to fully define the aftbody moment and the associated time delay. It is shown that the dynamic oscillation responses typical to blunt bodies can be produced using hysteresis of the aftbody moment in place of the pitch damping coefficient. The approach used in this investigation is shown to be useful in understanding the governing physical mechanisms for blunt body dynamic stability and in guiding vehicle and mission design requirements. A validation case study using simulated ballistic range test data is conducted. From this, parameter identification is carried out through the use of a least squares optimizing routine. Results show good agreement with the limited existing literature for the parameters identified, suggesting that the model proposed could be validated by an experimental ballistic range test series. The trajectories produced by the identified parameters were found to match closely those from the MER ballistic range tests for a wide array of initial conditions and can be identified with a reasonable number of ballistic range shots and computational effort.
Noguera-Julian, Marc; Bellido, Rocío; Puertas, Maria C.; Carrillo, Jorge; Rodriguez, C.; Perez-Alvarez, Núria; Cobarsí, Patricia; Gomez, Carmen E.; Esteban, Mariano; Jímenez, Jose Luis; García, Felipe; Blanco, Julià; Martinez-Picado, Javier; Paredes, Roger
2017-01-01
The most relevant endpoint in therapeutic HIV vaccination is the assessment of time to viral rebound or duration of sustained control of low-level viremia upon cART treatment cessation. Structured treatment interruptions (STI) are however not without risk to the patient and reliable predictors of viral rebound/control after therapeutic HIV-1 vaccination are urgently needed to ensure patient safety and guide therapeutic vaccine development. Here, we integrated immunological and virological parameters together with viral rebound dynamics after STI in a phase I therapeutic vaccine trial of a polyvalent MVA-B vaccine candidate to define predictors of viral control. Clinical parameters, proviral DNA, host HLA genetics and measures of humoral and cellular immunity were evaluated. A sieve effect analysis was conducted comparing pre-treatment viral sequences to breakthrough viruses after STI. Our results show that a reduced proviral HIV-1 DNA at study entry was independently associated with two virological parameters, delayed HIV-1 RNA rebound (p = 0.029) and lower peak viremia after treatment cessation (p = 0.019). Reduced peak viremia was also positively correlated with a decreased number of HLA class I allele associated polymorphisms in Gag sequences in the rebounding virus population (p = 0.012). Our findings suggest that proviral DNA levels and the number of HLA-associated Gag polymorphisms may have an impact on the clinical outcome of STI. Incorporation of these parameters in future therapeutic vaccine trials may guide refined immunogen design and help conduct safer STI approaches. PMID:28953921