Two-step tunneling technique of deep brain stimulation extension wires-a description.
Fontaine, Denys; Vandersteen, Clair; Saleh, Christian; von Langsdorff, Daniel; Poissonnet, Gilles
2013-12-01
While a significant body of literature exists on the intracranial part of deep brain stimulation surgery, the equally important second part of the intervention related to the subcutaneous tunneling of deep brain stimulation extension wires is rarely described. The tunneling strategy can consist of a single passage of the extension wires from the frontal incision site to the subclavicular area, or of a two-step approach that adds a retro-auricular counter-incision. Each technique harbors the risk of intraoperative and postoperative complications. At our center, we perform a two-step tunneling procedure that we developed based on a cadaveric study. In 125 consecutive patients operated on since 2002, we did not encounter any complication related to our tunneling method. Insufficient data exist to fully evaluate the advantages and disadvantages of each tunneling technique. It is of critical importance that authors detail their tunneling modus operandi and report the presence or absence of complications. This gathered data pool may help to formulate definitive conclusions on the safest method for subcutaneous tunneling of extension wires in deep brain stimulation.
Mendez-Gallardo, Valerie; Roberto, Megan E.; Kauer, Sierra D.; Brumley, Michele R.
2015-01-01
The development of postural control is considered an important factor for the expression of coordinated behavior such as locomotion. In the natural setting of the nest, newborn rat pups adapt their posture to perform behaviors of ecological relevance such as those related to suckling. The current study explores the role of posture in the expression of three behaviors in the newborn rat: spontaneous limb activity, locomotor-like stepping behavior, and the leg extension response (LER). One-day-old rat pups were tested in one of two postures – prone or supine – on each of these behavioral measures. Results showed that pups expressed more spontaneous activity while supine, more stepping while prone, and no differences in LER expression between the two postures. Together these findings show that posture affects the expression of newborn behavior patterns in different ways, and suggest that posture may act as a facilitator or a limiting factor in the expression of different behaviors during early development. PMID:26655784
20180312 - US EPA-Unilever Collaborative Partnership: Accomplishments and Next Steps (SOT)
Reiterating the goals of the partnership, this presentation summarizes the accomplishments and progress made so far in the collaboration, and announces a two-phase extension focused on further development of a high-throughput transcriptomic platform.
Galvez, Jose A; Budovitch, Amy; Harkema, Susan J; Reinkensmeyer, David J
2011-01-01
Robotic devices are being developed to automate repetitive aspects of walking retraining after neurological injuries, in part because they might improve the consistency and quality of training. However, it is unclear how inconsistent manual training actually is or whether stepping quality depends strongly on the trainers' manual skill. The objective of this study was to quantify trainer variability of manual skill during step training using body-weight support on a treadmill and assess factors of trainer skill. We attached a sensorized orthosis to one leg of each patient with spinal cord injury and measured the shank kinematics and forces exerted by different trainers during six training sessions. An expert trainer rated the trainers' skill level based on videotape recordings. Between-trainer force variability was substantial, about two times greater than within-trainer variability. Trainer skill rating correlated strongly with two gait features: better knee extension during stance and fewer episodes of toe dragging. Better knee extension correlated directly with larger knee horizontal assistance force, but better toe clearance did not correlate with larger ankle push-up force; rather, it correlated with better knee and hip extension. These results are useful to inform robotic gait-training design.
Terrain and refractivity effects on non-optical paths
NASA Astrophysics Data System (ADS)
Barrios, Amalia E.
1994-07-01
The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has been previously shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, will be presented that give a two- to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
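As a rough, illustrative aside (not TPEM's actual implementation), the fragment below sketches the split-step Fourier marching that parabolic-equation propagation models of this kind build on, for a narrow-angle PE over a flat surface; the function name, arguments, and the flat-terrain simplification are all assumptions.

```python
import numpy as np

def split_step_pe(u0, n2_minus_1, k0, dz, dx, n_steps):
    """March a narrow-angle parabolic-equation field u(z) forward in range x.

    u0          : complex field over the vertical grid at the initial range
    n2_minus_1  : callable (step, z) -> refractivity term n^2 - 1 on that grid
    k0          : free-space wavenumber 2*pi/wavelength
    dz, dx      : vertical grid spacing and range step
    """
    u = u0.astype(complex)
    nz = u.size
    z = np.arange(nz) * dz
    p = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)               # vertical wavenumber
    diffract = np.exp(-1j * p**2 * dx / (2.0 * k0))          # vacuum half of the split operator
    for step in range(n_steps):
        u = np.fft.ifft(diffract * np.fft.fft(u))            # free-space diffraction sub-step
        u = u * np.exp(1j * k0 * dx * n2_minus_1(step, z) / 2.0)  # refraction sub-step
    return u
```

Each range step alternates a diffraction sub-step applied in the vertical-wavenumber domain with a refraction sub-step applied pointwise in height; terrain handling, wide-angle corrections, and boundary treatments of a production model such as TPEM are omitted.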
Python-based geometry preparation and simulation visualization toolkits for STEPS
Chen, Weiliang; De Schutter, Erik
2014-01-01
STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754
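For orientation only, the sketch below is a plain, well-mixed Gillespie direct-method SSA in Python; it is not the STEPS API (STEPS layers spatial, tetrahedral-mesh machinery on top of this idea), and all names are illustrative assumptions.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_end, seed=0):
    """Minimal well-mixed Gillespie direct-method SSA (no spatial mesh).

    x0           : initial copy numbers, shape (n_species,)
    stoich       : state-change matrix, shape (n_reactions, n_species)
    propensities : callable x -> reaction propensities, shape (n_reactions,)
    """
    rng = np.random.default_rng(seed)
    t = 0.0
    x = np.asarray(x0, dtype=float).copy()
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                  # nothing left that can fire
        t += rng.exponential(1.0 / a0)             # waiting time to the next event
        r = rng.choice(len(a), p=a / a0)           # which reaction fires
        x += stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```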
How to Create Videos for Extension Education: An Innovative Five-Step Procedure
ERIC Educational Resources Information Center
Dev, Dipti A.; Blitch, Kimberly A.; Hatton-Bowers, Holly; Ramsay, Samantha; Garcia, Aileen S.
2018-01-01
Although the benefits of using video as a learning tool in Extension programs are well known, less is understood about effective methods for creating videos. We present a five-step procedure for developing educational videos that focus on evidence-based practices, and we provide practical examples from our use of the five steps in creating a video…
Two-Carbon Homologation of Ketones to 3-Methyl Unsaturated Aldehydes
USDA-ARS?s Scientific Manuscript database
The usual scheme of two-carbon homologation of ketones to 3-methyl unsaturated aldehydes by Horner-Wadsworth-Emmons condensations with phosphonate esters, such as triethyl-2-phosphonoacetate, involves three steps. The phosphonate condensation step results in extension of the carbon chain by two carb...
Goldstein, Naomi E. S.; Kemp, Kathleen A.; Leff, Stephen S.; Lochman, John E.
2014-01-01
The use of manual-based interventions tends to improve client outcomes and promote replicability. With an increasingly strong link between funding and the use of empirically supported prevention and intervention programs, manual development and adaptation have become research priorities. As a result, researchers and scholars have generated guidelines for developing manuals from scratch, but there are no extant guidelines for adapting empirically supported, manualized prevention and intervention programs for use with new populations. Thus, this article proposes step-by-step guidelines for the manual adaptation process. It also describes two adaptations of an extensively researched anger management intervention to exemplify how an empirically supported program was systematically and efficiently adapted to achieve similar outcomes with vastly different populations in unique settings. PMID:25110403
Gomez-Pomar, Enrique; Blubaugh, Robert
2018-02-07
There is no doubt regarding the multiple benefits of breastfeeding for infants and society in general. Therefore, the World Health Organization (WHO), in a joint effort with the United Nations International Children's Emergency Fund (UNICEF), developed the "Ten Steps to Successful Breastfeeding" in 1992, which became the backbone of the Baby Friendly Hospital Initiative (BFHI). Following this development, many hospitals and countries intensified their position towards creating a "breastfeeding oriented" practice. Over the past two decades, interest in the BFHI and the Ten Steps has increased. However, alongside the implementation of the initiative, extensive research continues to evaluate the benefits and dangers of the suggested practices. Hence, it is our intention to make a critical evaluation of the current BFHI and the Ten Steps recommendations in consideration of the importance of providing an evidence-based, breastfeeding-supportive environment for our mothers and infants.
Maximum step length: relationships to age and knee and hip extensor capacities.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2007-07-01
Maximum Step Length may be used to identify older adults at increased risk for falls. Since leg muscle weakness is a risk factor for falls, we tested the hypotheses that maximum knee and hip extension speed, strength, and power capacities would significantly correlate with Maximum Step Length and also that the "step out and back" Maximum Step Length [Medell, J.L., Alexander, N.B., 2000. A clinical measure of maximal and rapid stepping in older women. J. Gerontol. A Biol. Sci. Med. Sci. 55, M429-M433.] would also correlate with the Maximum Step Length of its two sub-tasks: stepping "out only" and stepping "back only". These sub-tasks will be referred to as versions of Maximum Step Length. Unimpaired younger (N=11, age=24[3]years) and older (N=10, age=73[5]years) women performed the above three versions of Maximum Step Length. Knee and hip extension speed, strength, and power capacities were determined on a separate day and regressed on Maximum Step Length and age group. Version and practice effects were quantified and subjective impressions of test difficulty recorded. Hypotheses were tested using linear regressions, analysis of variance, and Fisher's exact test. Maximum Step Length explained 6-22% additional variance in knee and hip extension speed, strength, and power capacities after controlling for age group. Within- and between-block and test-retest correlation values were high (>0.9) for all test versions. Shorter Maximum Step Lengths are associated with reduced knee and hip extension speed, strength, and power capacities after controlling for age. A single out-and-back step of maximal length is a feasible, rapid screening measure that may provide insight into underlying functional impairment, regardless of age.
A novel MALDI–TOF based methodology for genotyping single nucleotide polymorphisms
Blondal, Thorarinn; Waage, Benedikt G.; Smarason, Sigurdur V.; Jonsson, Frosti; Fjalldal, Sigridur B.; Stefansson, Kari; Gulcher, Jeffery; Smith, Albert V.
2003-01-01
A new MALDI–TOF based detection assay was developed for analysis of single nucleotide polymorphisms (SNPs). It is a significant modification of the classic three-step minisequencing method, which includes a polymerase chain reaction (PCR), removal of excess nucleotides and primers, followed by primer extension in the presence of dideoxynucleotides using modified thermostable DNA polymerase. The key feature of this novel assay is reliance upon deoxynucleotide mixes lacking one of the nucleotides at the polymorphic position. During primer extension in the presence of depleted nucleotide mixes, standard thermostable DNA polymerases dissociate from the template at positions requiring a depleted nucleotide; this principle was harnessed to create a genotyping assay. The assay design requires a primer-extension primer having its 3′-end one nucleotide upstream from the interrogated site. The assay further utilizes the same DNA polymerase in both PCR and the primer extension step. This not only simplifies the assay but also greatly reduces the cost per genotype compared to minisequencing methodology. We demonstrate accurate genotyping using this methodology for two SNPs run in both singleplex and duplex reactions. We term this assay nucleotide depletion genotyping (NUDGE). Nucleotide depletion genotyping could be extended to other genotyping assays based on primer extension such as detection by gel or capillary electrophoresis. PMID:14654708
Do lightning positive leaders really "step"?
NASA Astrophysics Data System (ADS)
Petersen, D.
2015-12-01
It has been known for some time that positive leaders exhibit impulsive charge motion and optical emissions as they extend. However, laboratory and field observations have not produced any evidence of a process analogous to the space leader mechanism of negative leader extension. Instead, observations have suggested that the positive leader tip undergoes a continuous to intermittent series of corona streamer bursts, each burst resulting in a small forward extension of the positive leader channel. Traditionally, it has been held that lightning positive leaders extend in a continuous or quasi-continuous fashion. Lately, however, many have become concerned that this position is incongruous with observations of impulsive activity during lightning positive leader extension. It is increasingly suggested that this impulsive activity is evidence that positive leaders also undergo "stepping". There are two issues that must be addressed. The first issue concerns whether or not the physical processes underlying impulsive extension in negative and positive leaders are distinct. We argue that these processes are in fact physically distinct, and offer new high-speed video evidence to support this position. The second issue regards the proper use of the term "step" as an identifier for the impulsive forward extension of a leader. Traditional use of this term has been applied only to negative leaders, due primarily to their stronger impulsive charge motions and photographic evidence of clearly discontinuous forward progression of the luminous channel. Recently, due to the increasing understanding of the distinct "space leader" process of negative leader extension, the term "step" has increasingly come to be associated with the space leader process itself. Should this emerging association, "step" = space leader attachment, be canonized? If not, then it seems reasonable to use the term "step" to describe impulsive positive leader extension. If, however, we do wish to associate the term "step" with space leader attachment, a process unique to negative leaders, should we devise a term for those process(es) that underlie impulsive positive leader extension?
Evaluation of an omental pedicle extension technique in the dog.
Ross, W E; Pardo, A D
1993-01-01
A two-step omental pedicle extension technique was performed on 10 dogs. Step 1 of the pedicle extension involved release of the dorsal leaf of the omentum from its pancreatic attachment, whereas step 2 consisted of an inverse L-shaped incision to double the length of the pedicle. The pedicle dimensions were measured and the distance reached when extended toward the hind limb, forelimb, and the muzzle recorded after each stage of the procedure. The vascular patency of the pedicle was determined by intravenous injection of fluorescein dye after the second stage of omental extension. Mean pedicle lengths were 44.5 cm with the first stage of extension and 82.0 cm after full extension. The mean width at the caudal extent of the pedicles after dorsal and full extension was 30.4 cm and 17.2 cm, respectively. Eight of the 10 pedicles were patent after full extension. The fully extended omental pedicles reached and, in most cases, extended beyond the distal extremities and the muzzle. The findings in this study suggest that the canine omentum can be extended to any part of the body without being detached from its vascular supply.
Utilizing Evaluation To Develop a Marketing Strategy in the Louisiana Cooperative Extension Service.
ERIC Educational Resources Information Center
Coreil, Paul D.; Verma, Satish
Marketing has become a popular strategic initiative among state extension services to meet the growing demand for program accountability. The Louisiana Cooperative Extension Service (LCES) began a formative evaluation of its marketing efforts as a step toward a comprehensive marketing plan. All extension faculty were surveyed to determine their…
Extension of a suspended soap film: a homogeneous dilatation followed by new film extraction.
Seiwert, Jacopo; Monloubou, Martin; Dollet, Benjamin; Cantat, Isabelle
2013-08-30
Liquid foams are widely used in industry for their high effective viscosity, whose local origin is still unclear. This Letter presents new results on the extension of a suspended soap film, in a configuration mimicking the elementary deformation occurring during foam shearing. We evidence a surprising two-step evolution: the film first extends homogeneously, then its extension stops, and a new thicker film is extracted from the meniscus. The second step is independent of the nature of the surfactant solution, whereas the initial extension is only observed for surfactant solutions with negligible dilatational moduli. We predict this complex behavior using a model based on Frankel's theory and on interface rigidification induced by confinement.
Data processing has major impact on the outcome of quantitative label-free LC-MS analysis.
Chawade, Aakash; Sandin, Marianne; Teleman, Johan; Malmström, Johan; Levander, Fredrik
2015-02-06
High-throughput multiplexed protein quantification using mass spectrometry is steadily increasing in popularity, with the two major techniques being data-dependent acquisition (DDA) and targeted acquisition using selected reaction monitoring (SRM). However, both techniques involve extensive data processing, which can be performed by a multitude of different software solutions. Analysis of quantitative LC-MS/MS data is mainly performed in three major steps: processing of raw data, normalization, and statistical analysis. To evaluate the impact of data processing steps, we developed two new benchmark data sets, one each for DDA and SRM, with samples consisting of a long-range dilution series of synthetic peptides spiked in a total cell protein digest. The generated data were processed by eight different software workflows and three postprocessing steps. The results show that the choice of the raw data processing software and the postprocessing steps play an important role in the final outcome. Also, the linear dynamic range of the DDA data could be extended by an order of magnitude through feature alignment and a charge state merging algorithm proposed here. Furthermore, the benchmark data sets are made publicly available for further benchmarking and software developments.
NASA Astrophysics Data System (ADS)
Sethuram, D.; Srisailam, Shravani; Rao Ponangi, Babu
2018-04-01
Austempered Ductile Iron (ADI) is an exciting alloy of iron that offers design engineers the best combination of high strength-to-weight ratio, low-cost design flexibility, good toughness, wear resistance, and fatigue strength. The two-step austempering procedure simultaneously improves the tensile strength as well as the ductility beyond that of the conventional austempering process. An extensive literature survey reveals that its mechanical and wear behaviour depend on heat treatment and alloy additions. Current work focuses on characterizing the two-step ADI samples (TSADI) developed by a novel heat treatment process for resistance to corrosion and wear. The ductile iron samples were austempered by the two-step austempering process at temperatures from 300°C to 450°C in steps of 50°C. Temperatures were gradually increased at a rate of 14°C/hour. In acidic medium (H2SO4), the austempered samples showed better corrosive resistance compared to conventional ductile iron. The wear studies show that the TSADI sample austempered at 350°C has better wear resistance than ductile iron. The results are discussed in terms of fractographs, process variables and microstructural features of TSADI samples.
Extension of a Suspended Soap Film: A Homogeneous Dilatation Followed by New Film Extraction
NASA Astrophysics Data System (ADS)
Seiwert, Jacopo; Monloubou, Martin; Dollet, Benjamin; Cantat, Isabelle
2013-08-01
Liquid foams are widely used in industry for their high effective viscosity, whose local origin is still unclear. This Letter presents new results on the extension of a suspended soap film, in a configuration mimicking the elementary deformation occurring during foam shearing. We evidence a surprising two-step evolution: the film first extends homogeneously, then its extension stops, and a new thicker film is extracted from the meniscus. The second step is independent of the nature of the surfactant solution, whereas the initial extension is only observed for surfactant solutions with negligible dilatational moduli. We predict this complex behavior using a model based on Frankel’s theory and on interface rigidification induced by confinement.
Proton irradiation of [18O]O2: production of [18F]F2 and [18F]F2 + [18F] OF2.
Bishop, A; Satyamurthy, N; Bida, G; Hendry, G; Phelps, M; Barrio, J R
1996-04-01
The production of 18F electrophilic reagents via the 18O(p,n)18F reaction has been investigated in small-volume target bodies made of aluminum, copper, gold-plated copper and nickel, having straight or conical bore shapes. Three irradiation protocols (single-step, two-step and modified two-step) were used for the recovery of the 18F activity. The single-step irradiation protocol was tested in all the target bodies. Based on the single-step performance, aluminum targets were utilized extensively in the investigation of the two-step and modified two-step irradiation protocols. With an 11-MeV cyclotron and using the two-step irradiation protocol, > 1 Ci [18F]F2 was recovered reproducibly from an aluminum target body. Probable radical mechanisms for the formation of OF2 and FONO2 (fluorine nitrate) in the single-step and modified two-step targets are proposed based on the amount of ozone generated and the nitrogen impurity present in the target gases, respectively.
CIDOC-CRM extensions for conservation processes: A methodological approach
NASA Astrophysics Data System (ADS)
Vassilakaki, Evgenia; Zervos, Spiros; Giannakopoulos, Georgios
2015-02-01
This paper aims to report the steps taken to create the CIDOC Conceptual Reference Model (CIDOC-CRM) extensions and the relationships established to accommodate the depiction of conservation processes. In particular, the specific steps undertaken for developing and applying the CIDOC-CRM extensions for defining the conservation interventions performed on the cultural artifacts of the National Archaeological Museum of Athens, Greece, are presented in detail. A report on the preliminary design of the DOC-CULTURE project (Development of an integrated information environment for assessment and documentation of conservation interventions to cultural works/objects with nondestructive testing techniques [NDTs], www.ndt-lab.gr/docculture), co-financed by the European Union NSRF THALES program, can be found in Kyriaki-Manessi, Zervos & Giannakopoulos (1), whereas the NDT&E methods, their output data, and the CIDOC-CRM extensions used by the DOC-CULTURE project to standardize the documentation of conservation interventions were further reported in Kouis et al. (2).
A computational method for sharp interface advection.
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-11-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
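As a much-reduced illustration of the volume-of-fluid bookkeeping underlying the method, the toy sketch below advects a 1-D volume-fraction field by moving, each time step, the volume of one fluid carried across every cell face from donor to receiver cell; the real isoAdvector replaces the crude donor-cell "reconstruction" used here with a geometric isosurface reconstruction in arbitrary polyhedral cells, and all names are assumptions.

```python
import numpy as np

def vof_advect_1d(alpha, u, dx, dt, n_steps):
    """Toy 1-D volume-of-fluid advection with a constant velocity u.

    alpha : cell-averaged volume fraction of 'fluid 1'
    Each step moves the volume of fluid 1 crossing every interior face from the
    upwind (donor) cell to its neighbour, keeping the fraction bounded in [0, 1].
    """
    a = alpha.astype(float).copy()
    for _ in range(n_steps):
        donor = a[:-1] if u > 0 else a[1:]          # crude 'reconstruction': donor-cell value
        face_flux = donor * u * dt / dx             # fraction of a cell volume crossing each face
        a[:-1] -= face_flux                         # donor side loses it
        a[1:] += face_flux                          # receiver side gains it
        a = np.clip(a, 0.0, 1.0)                    # boundedness
    return a
```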
Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models
Miller, Craig R.; Joyce, Paul; Wichman, Holly A.
2011-01-01
Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (10⁴ and 10⁶), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559
Control Software for Piezo Stepping Actuators
NASA Technical Reports Server (NTRS)
Shields, Joel F.
2013-01-01
A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.
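The description above combines feedforward planning from an online scale-factor estimate with feedback trimming from asynchronous encoder samples; the fragment below is a minimal, hypothetical sketch of such an outer loop (the gains, function names, and the simple recursive scale estimator are assumptions, not the SIM flight software).

```python
def plan_steps(reference_nm, position_nm, scale_nm_per_step, kp=0.7):
    """One outer-loop iteration: convert the remaining position error into a
    whole number of piezo steps using the current scale-factor estimate.
    A conservative gain kp < 1 leaves room for the encoder sample taken at the
    end of the burst to trim the residual on the next iteration."""
    error_nm = reference_nm - position_nm
    return int(round(kp * error_nm / scale_nm_per_step))

def update_scale_estimate(scale_nm_per_step, moved_nm, steps, gain=0.1):
    """Recursive estimate of nanometers-per-step from the encoder reading
    obtained after a burst of 'steps' commanded steps."""
    if steps == 0:
        return scale_nm_per_step
    measured = moved_nm / steps
    return scale_nm_per_step + gain * (measured - scale_nm_per_step)
```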
Heralded entangling quantum gate via cavity-assisted photon scattering
NASA Astrophysics Data System (ADS)
Borges, Halyne S.; Rossatto, Daniel Z.; Luiz, Fabrício S.; Villas-Boas, Celso J.
2018-01-01
We theoretically investigate the generation of heralded entanglement between two identical atoms via cavity-assisted photon scattering in two different configurations, namely, either both atoms confined in the same cavity or trapped into locally separated ones. Our protocols are given by a very simple and elegant single-step process, the key mechanism of which is a controlled-phase-flip gate implemented by impinging a single photon on single-sided cavities. In particular, when the atoms are localized in remote cavities, we introduce a single-step parallel quantum circuit instead of the serial process extensively adopted in the literature. We also show that such parallel circuit can be straightforwardly applied to entangle two macroscopic clouds of atoms. Both protocols proposed here predict a high entanglement degree with a success probability close to unity for state-of-the-art parameters. Among other applications, our proposal and its extension to multiple atom-cavity systems step toward a suitable route for quantum networking, in particular for quantum state transfer, quantum teleportation, and nonlocal quantum memory.
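In the standard single-sided-cavity scattering picture (assumed here; the paper's detailed treatment may differ), the controlled phase flip arises because a resonant photon reflects with an extra π phase only when the atom is in the uncoupled state:

```latex
% Single-sided cavity scattering picture (assumed, not copied from the paper):
% the bare cavity reflects a resonant photon with r ~ -1, while a strongly
% coupled atom splits the resonance so the photon reflects with r ~ +1.
r_{\text{bare}} \simeq -1, \qquad r_{\text{coupled}} \simeq +1
\qquad\Longrightarrow\qquad
|u\rangle|1_{\mathrm{ph}}\rangle \to -\,|u\rangle|1_{\mathrm{ph}}\rangle,
\quad
|c\rangle|1_{\mathrm{ph}}\rangle \to +\,|c\rangle|1_{\mathrm{ph}}\rangle .
```

That map is a controlled-phase-flip (a CZ gate up to local phases) between the atomic qubit and the single photon, and detection of the reflected photon heralds the operation.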
The Role of Evaluation in Determining the Public Value of Extension
ERIC Educational Resources Information Center
Franz, Nancy; Arnold, Mary; Baughman, Sarah
2014-01-01
Extension has developed a strong evaluation culture across the system for the last 15 years. Yet measures are still limited to the private value of programs, looking at problems in a linear way and at isolated efforts. Across the country, Extension evaluators and administrators need to step up to help answer the "so what?" question about…
N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.
2016-01-01
Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension with the 8 nm step engaging the vELC/actin bond facilitating an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 to the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and coincident loss in the 8 nm step-frequency compared to native proteins suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638
Two-step chlorination: A new approach to disinfection of a primary sewage effluent.
Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan
2017-01-01
Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biagi, C J; Uman, M A
2011-12-13
There are relatively few reports in the literature focusing on negative laboratory leaders. Most of the reports focus exclusively on the simpler positive laboratory leader that is more commonly encountered in high voltage engineering [Gorin et al., 1976; Les Renardieres Group, 1977; Gallimberti, 1979; Domens et al., 1994; Bazelyan and Raizer 1998]. The physics of the long, negative leader and its positive counterpart are similar; the two differ primarily in their extension mechanisms [Bazelyan and Raizer, 1998]. Long negative sparks extend primarily by an intermittent process termed a 'step' that requires the development of secondary leader channels separated in space from the primary leader channel. Long positive sparks typically extend continuously, although, under proper conditions, their extension can be temporarily halted and begun again, and this is sometimes viewed as a stepping process. However, it is emphasized that the nature of positive leader stepping is not like that of negative leader stepping. There are several key observational studies of the propagation of long, negative-polarity laboratory sparks in air that have aided in the understanding of the stepping mechanisms exhibited by such sparks [e.g., Gorin et al., 1976; Les Renardieres Group, 1981; Ortega et al., 1994; Reess et al., 1995; Bazelyan and Raizer, 1998; Gallimberti et al., 2002]. These reports are reviewed below in Section 2, with emphasis placed on the stepping mechanism (the space stem, pilot, and space leader). Then, in Section 3, reports pertaining to modeling of long negative leaders are summarized.
Multigrid calculation of three-dimensional turbomachinery flows
NASA Technical Reports Server (NTRS)
Caughey, David A.
1989-01-01
Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.
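As a toy illustration of the multigrid idea referenced above, the sketch below performs one two-grid cycle on a 1-D Poisson model problem with weighted-Jacobi smoothing; the actual work uses Runge-Kutta and diagonalized ADI smoothers on the three-dimensional Euler and Navier-Stokes equations, so every detail here is a stand-in assumption.

```python
import numpy as np

def jacobi_smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for the 1-D Poisson problem -u'' = f."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_vcycle(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual, approximately solve
    the coarse problem, prolong the correction, post-smooth."""
    u = jacobi_smooth(u, f, h)
    res = np.zeros_like(u)
    res[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)   # residual of -u'' = f
    r_c = res[::2].copy()                           # restriction by injection
    e_c = jacobi_smooth(np.zeros_like(r_c), r_c, 2 * h, sweeps=50)   # coarse 'solve' by heavy smoothing
    e = np.zeros_like(u)
    e[::2] = e_c                                    # prolongation: inject coarse points...
    e[1:-1:2] = 0.5 * (e[0:-2:2] + e[2::2])         # ...and interpolate the fine points between them
    u += e
    return jacobi_smooth(u, f, h)
```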
Ravesteijn, Wim; Liu, Yi; Yan, Ping
2015-01-01
The paper outlines and specifies 'responsible port innovation', introducing the development of a methodological and procedural step-by-step plan for the implementation and evaluation of (responsible) innovations. Subsequently, it uses this as a guideline for the analysis and evaluation of two case-studies. The construction of the Rotterdam Maasvlakte 2 Port meets most of the formulated requirements, though making values more explicit and treating it as a process right from the start could have benefitted the project. The Dalian Dayao Port could improve its decision-making procedures in several respects, including the introduction of new methods to handle value tensions. Both projects show that public support is crucial in responsible port innovation and that it should be not only a multi-faceted but also a multi-level strategy.
A computational method for sharp interface advection
Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
Synthesis of Triamino Acid Building Blocks with Different Lipophilicities
Maity, Jyotirmoy; Honcharenko, Dmytro; Strömberg, Roger
2015-01-01
To obtain amino acids with varying lipophilicity that can carry up to three positive charges, we have developed a number of new triamino acid building blocks. One set of building blocks was achieved by aminoethyl extension, via reductive amination, of the side chain of ornithine, diaminopropanoic and diaminobutanoic acid. A second set of triamino acids with the aminoethyl extension having hydrocarbon side chains was synthesized from diaminobutanoic acid. The aldehydes needed for the extension by reductive amination were synthesized from the corresponding Fmoc-L-2-amino fatty acids in two steps. Reductive amination of these compounds with Boc-L-Dab-OH gave the C4-C8 alkyl-branched triamino acids. All triamino acids were subsequently Boc-protected at the formed secondary amine to make the monomers appropriate for the N-terminus position when performing Fmoc-based solid-phase peptide synthesis. PMID:25876040
Davis, Tyler A.
2012-01-01
The first highly diastereo- and enantioselective additions of aryl nitromethane pronucleophiles to aryl aldimines are described. Identification of an electron rich chiral Bis(Amidine) catalyst for this aza-Henry variant was key to this development, leading ultimately to differentially protected cis-stilbene diamines in two steps. This method then became the lynchpin for an enantioselective synthesis of (–)-Nutlin-3 (Hoffmann-LaRoche), a potent cis-imidazoline small molecule inhibitor of p53-MDM2 used extensively as a probe of cell biology and currently in drug development. PMID:22708054
DOT National Transportation Integrated Search
2007-10-01
The goal of Selective Traffic Enforcement Programs (STEPs) is to induce motorists to drive safely. To achieve this goal, the STEP model combines intensive enforcement of a specific traffic safety law with extensive communication, education, and outre...
NASA Astrophysics Data System (ADS)
Mollica, N. R.; Guo, W.; Cohen, A. L.; Huang, K. F.; Foster, G. L.; Donald, H.; Solow, A.
2017-12-01
Carbonate skeletons of scleractinian corals are important archives of ocean climate and environmental change. However, corals don't accrete their skeletons directly from ambient seawater, but from a calcifying fluid whose composition is strongly regulated. There is mounting evidence that the carbonate chemistry of this calcifying fluid significantly impacts the amount of carbonate the coral can precipitate, which in turn affects the geochemical composition of the skeleton produced. However the mechanistic link between calcifying fluid (cf) chemistry, particularly the up-regulation of pHcf and thereby aragonite saturation state (Ωcf), and coral calcification is not well understood. We explored this link by combining boron isotope measurements with in situ measurements of seawater temperature, salinity, and DIC to estimate Ωcf of nine Porites corals from four Pacific reefs. Associated calcification rates were quantified for each core via CT scanning. We do not observe a relationship between calcification rates and Ωcf or Ωsw. Instead, when we deconvolve calcification into linear extension and skeletal density, a significant correlation is observed between density and Ωcf, and also Ωsw while extension does not correlate with either. These observations are consistent with the two-step model of coral calcification, in which skeleton is secreted in two distinct phases: vertical extension creating new skeletal elements, followed by lateral thickening of existing elements that are covered by living tissue. We developed a numerical model of Porites skeletal growth that builds on this two-step model and links skeletal density with the external seawater environment via its influence on the chemistry of coral calcifying fluid. We validated the model using existing coral skeletal datasets from six Porites species collected across five reef sites, and quantified the effects of each seawater parameter (e.g. temperature, pH, DIC) on skeletal density. Our findings illustrate the sensitivity of the second phase of coral calcification to the carbonate chemistry of the calcifying fluid, and support previous coral proxy system modelling efforts by validating the two-step growth model on annual and seasonal scales.
Birmpa, Angeliki; Kalogeropoulos, Konstantinos; Kokkinos, Petros
2015-01-01
In the present study, the effectiveness of two loop-mediated isothermal amplification (LAMP) assays was evaluated. Samples of romaine lettuce, strawberries, cherry tomatoes, green onions and sour berries were inoculated with known dilutions (10⁰-10⁸ CFU/g of produce) of S. Enteritidis and L. monocytogenes. With the LAMP assays, pathogens can be detected in less than 60 min. The limits of detection of S. Enteritidis and L. monocytogenes depended on the food sample tested and on the presence of an enrichment step. After enrichment steps, all food samples were found positive even at low initial pathogen levels. The developed LAMP assays are expected to become a valuable, robust, innovative, powerful, cheap and fast monitoring tool, which can be extensively used for routine analysis and screening of contaminated foods by the food industry and the Public Food Health Authorities. PMID:27800413
NASA Astrophysics Data System (ADS)
Walker, Joel W.
2014-08-01
The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
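As a concrete but hedged illustration of what the statistic computes (not the author's AEACuS implementation), the sketch below evaluates MT2 by brute force: it minimizes, over all splittings of the missing transverse momentum between the two invisible children, the larger of the two branch transverse masses. The names, units, and grid-scan strategy are assumptions; production codes use far faster bisection or analytic methods.

```python
import numpy as np

def mt(m_vis, pt_vis, m_inv, pt_inv):
    """Transverse mass of one decay branch from its visible and invisible parts."""
    et_vis = np.sqrt(m_vis**2 + pt_vis @ pt_vis)
    et_inv = np.sqrt(m_inv**2 + pt_inv @ pt_inv)
    return np.sqrt(max(m_vis**2 + m_inv**2 + 2.0 * (et_vis * et_inv - pt_vis @ pt_inv), 0.0))

def mt2_grid(p_vis_a, p_vis_b, m_vis_a, m_vis_b, met, m_chi=0.0, n=201, span=2000.0):
    """Brute-force MT2: minimize max(mT_a, mT_b) over splittings q_a + q_b = MET."""
    best = np.inf
    grid = np.linspace(-span, span, n)              # trial invisible momenta (GeV, illustrative)
    for qx in grid:
        for qy in grid:
            q_a = np.array([qx, qy])
            q_b = met - q_a                         # the second child takes the remainder of MET
            best = min(best, max(mt(m_vis_a, p_vis_a, m_chi, q_a),
                                 mt(m_vis_b, p_vis_b, m_chi, q_b)))
    return best
```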
Desai, A; Wu, H; Sun, L; Sesterhenn, I A; Mostofi, F K; McLeod, D; Amling, C; Kusuda, L; Lance, R; Herring, J; Foley, J; Baldwin, D; Bishoff, J T; Soderdahl, D; Moul, J W
2002-01-01
The objectives of this work were to evaluate the efficacy of controlled close step-sectioned and whole-mounted radical prostatectomy specimen processing in prediction of clinical outcome as compared to the traditional processing techniques. Two hundred and forty-nine radical prostatectomy (RP) specimens were whole-mounted and close step-sectioned at caliper-measured 2.2-2.3 mm intervals. A group of 682 radical prostatectomy specimens were partially sampled as control. The RPs were performed during 1993-1999 with a mean follow-up of 29.3 months, pretreatment PSA of 0.1-40, and biopsy Gleason sums of 5-8. Disease-free survival based on biochemical or clinical recurrence and secondary intervention was computed using a Kaplan-Meier analysis. There were no significant differences in age at diagnosis, age at surgery, PSA at diagnosis, or biopsy Gleason between the two groups (P>0.05). Compared with the non-close step-sectioned group, the close step-sectioned group showed higher detection rates of extra-prostatic extension (215 (34.1%) vs. 128 (55.4%), P<0.01), and seminal vesicle invasion (50 (7.6%) vs. 35 (14.7%), P<0.01). The close step-sectioned group correlated with greater 3-y disease-free survival in organ-confined (P<0.01) and specimen-confined (P<0.01) cases, over the non-uniform group. The close step-sectioned group showed significantly higher disease-free survival for cases with seminal vesicle invasion (P=0.046). No significant difference in disease-free survival was found for the positive margin group (P=0.39) between the close step-sectioned and non-uniform groups. The close step-sectioned technique correlates with increased disease-free survival rates for organ and specimen confined cases, possibly due to higher detection rates of extra-prostatic extension and seminal vesicle invasion. Close step-sectioning provides better assurance of organ-confined disease, resulting in enhanced prediction of outcome by pathological (TNM) stage.
Phase-field crystal modeling of heteroepitaxy and exotic modes of crystal nucleation
NASA Astrophysics Data System (ADS)
Podmaniczky, Frigyes; Tóth, Gyula I.; Tegze, György; Pusztai, Tamás; Gránásy, László
2017-01-01
We review recent advances made in modeling heteroepitaxy, two-step nucleation, and nucleation at the growth front within the framework of a simple dynamical density functional theory, the Phase-Field Crystal (PFC) model. The crystalline substrate is represented by spatially confined periodic potentials. We investigate the misfit dependence of the critical thickness in the Stranski-Krastanov growth mode in isothermal studies. Apparently, the simulation results for stress release via the misfit dislocations fit better to the People-Bean model than to the one by Matthews and Blakeslee. Next, we investigate structural aspects of two-step crystal nucleation at high undercoolings, where an amorphous precursor forms in the first stage. Finally, we present results for the formation of new grains at the solid-liquid interface at high supersaturations/supercoolings, a phenomenon termed Growth Front Nucleation (GFN). Results obtained with diffusive dynamics (applicable to colloids) and with a hydrodynamic extension of the PFC theory (HPFC, developed for simple liquids) will be compared. The HPFC simulations indicate two possible mechanisms for GFN.
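For orientation, the standard single-mode PFC free energy and its conserved (diffusive) dynamics are reproduced below; this textbook form is an assumption standing in for, rather than copied from, the exact functional and hydrodynamic extension used in the paper.

```latex
% Standard single-mode PFC model (assumed textbook form, not necessarily the
% exact functional used in the work reviewed above):
F[\psi] = \int \mathrm{d}\mathbf{r}\,\left\{
  \frac{\psi}{2}\Bigl[-\epsilon + \bigl(1+\nabla^{2}\bigr)^{2}\Bigr]\psi
  + \frac{\psi^{4}}{4}\right\},
\qquad
\frac{\partial\psi}{\partial t} = \nabla^{2}\frac{\delta F}{\delta\psi} + \zeta .
```

Here ψ is the rescaled density field, ε a reduced temperature, and ζ a conserved noise; the HPFC variant couples ψ to a momentum equation instead of this purely diffusive law.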
Using an intervention mapping approach to develop a discharge protocol for intensive care patients.
van Mol, Margo; Nijkamp, Marjan; Markham, Christine; Ista, Erwin
2017-12-19
Admission into an intensive care unit (ICU) may result in long-term physical, cognitive, and emotional consequences for patients and their relatives. The care of the critically ill patient does not end upon ICU discharge; therefore, integrated and ongoing care during and after transition to the follow-up ward is pivotal. This study described the development of an intervention that responds to this need. Intervention Mapping (IM), a six-step theory- and evidence-based approach, was used to guide intervention development. The first step, a problem analysis, comprised a literature review, six semi-structured telephone interviews with former ICU-patients and their relatives, and seven qualitative roundtable meetings for all eligible nurses (i.e., 135 specialized and 105 general ward nurses). Performance and change objectives were formulated in step two. In step three, theory-based methods and practical applications were selected and directed at the desired behaviors and the identified barriers. Step four designed a revised discharge protocol taking into account existing interventions. Adoption, implementation and evaluation of the new discharge protocol (IM steps five and six) are in progress and were not included in this study. Four former ICU patients and two relatives underlined the importance of the need for effective discharge information and supportive written material. They also reported a lack of knowledge regarding the consequences of ICU admission. 42 ICU and 19 general ward nurses identified benefits and barriers regarding discharge procedures using three vignettes framed by literature. Some discrepancies were found. For example, ICU nurses were skeptical about the impact of writing a lay summary despite extensive evidence of the known benefits for the patients. ICU nurses anticipated having insufficient skills, not knowing the patient well enough, and fearing legal consequences of their writings. The intervention was designed to target the knowledge, attitudes, self-efficacy, and perceived social influence. Building upon IM steps one to three, a concept discharge protocol was developed that is relevant and feasible within current daily practice. Intervention mapping provided a comprehensive framework to improve ICU discharge by guiding the development process of a theory- and empirically-based discharge protocol that is robust and useful in practice.
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-07-01
Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy.
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-01-01
Abstract. Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy. PMID:26839904
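A minimal sketch of the first (forward-model) step described above is given below, assuming a discretized chain of incident spectrum, slab attenuation, absorption efficiency, and an energy-response matrix binned by thresholds; the variable names, the matrix form of the response, and the uniform energy grid are assumptions rather than the published model.

```python
import numpy as np

def expected_bin_counts(E, phi, mu_list, t_list, eta, R, thresholds, exposure=1.0):
    """Expected counts per energy bin for one photon-counting channel.

    E          : uniform energy grid (keV)
    phi        : incident spectrum on E (photons / keV / exposure)
    mu_list    : linear attenuation curves of the absorber slabs on E (1/cm)
    t_list     : slab thicknesses (cm)
    eta        : detector absorption efficiency on E
    R          : response matrix, R[j, i] = p(registered energy E[j] | true energy E[i])
    thresholds : bin edges in registered energy (keV)
    """
    atten = np.exp(-sum(mu * t for mu, t in zip(mu_list, t_list)))   # Beer-Lambert through the slabs
    registered = R @ (phi * atten * eta)         # spectrum after detection and energy response
    dE = E[1] - E[0]
    counts = []
    for lo, hi in zip(thresholds[:-1], thresholds[1:]):
        sel = (E >= lo) & (E < hi)
        counts.append(exposure * registered[sel].sum() * dE)
    return np.array(counts)
```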
Eckert, Heiner
2017-02-25
Several novel methods, catalysts and reagents have been developed to improve organic synthesis. Synergistic effects between reactions, reagents and catalysts can lead to minor heats of reaction and occur as an inherent result of multicomponent reactions (MCRs) and their extensions. They enable syntheses to be performed at a low energy level and the number of synthesis steps to be drastically reduced in comparison with 'classical' two-component reactions, fulfilling the rules of Green Chemistry. The very high potential for variability, diversity and complexity of MCRs additionally generates an extremely diverse range of products, thus bringing us closer to the aim of being able to produce tailor-made and extremely low-cost materials, drugs and compound libraries.
Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.
1990-01-01
A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.
CDC Kerala 1: Organization of clinical child development services (1987-2013).
Nair, M K C; George, Babu; Nair, G S Harikumaran; Bhaskaran, Deepa; Leena, M L; Russell, Paul Swamidhas Sudhakar
2014-12-01
The main objective of establishing the Child Development Centre (CDC), Kerala for piloting comprehensive child adolescent development program in India, has been to understand the conceptualization, design and scaling up of a pro-active positive child development initiative, easily replicable all over India. The process of establishing the Child Development Centre (CDC) Kerala for research, clinical services, training and community extension services over the last 25 y, has been as follows; Step 1: Conceptualization--The life cycle approach to child development; Step 2: Research basis--CDC model early stimulation is effective; Step 3: Development and validation of seven simple developmental screening tools; Step 4: CDC Diagnostic services--Ultrasonology and genetic, and metabolic laboratory; Step 5: Developing seven intervention packages; Step 6: Training--Post graduate diploma in clinical child development; Step 7: CDC Clinic Services--seven major ones; Step 8: CDC Community Services--Child development referral units; Step 9: Community service delivery models--Childhood disability and for adolescent care counselling projects; Step 10: National capacity building--Four child development related courses. CDC Kerala follow-up and clinic services are offered till 18 y of age and premarital counselling till 24 y of age as shown in "CDC Kerala Clinic Services Flow Chart" and 74,291 children have availed CDC clinic services in the last 10 y. CDC Kerala is the first model for comprehensive child adolescent development services using a lifecycle approach in the Government sector and hence declared as the collaborative centre for Rashtriya Bal Swasthya Karyakram (RBSK), in Kerala.
On salesmen and tourists: Two-step optimization in deterministic foragers
NASA Astrophysics Data System (ADS)
Maya, Miguel; Miramontes, Octavio; Boyer, Denis
2017-02-01
We explore a two-step optimization problem in random environments, the so-called restaurant-coffee shop problem, where a walker aims to visit the nearest and best restaurant in an area and then move to the nearest and best coffee shop. This is an extension of the Tourist Problem, a one-step optimization dynamics that can be viewed as a deterministic walk in a random medium. A certain amount of heterogeneity in the values of the resources to be visited causes the emergence of power-law distributions for the steps performed by the walker, similar to a Lévy flight. The fluctuations of the step lengths tend to decrease as a consequence of multiple-step planning, thus reducing the foraging uncertainty. We find that the first and second steps of each planned movement play very different roles in heterogeneous environments. The two-step process improves the foraging efficiency only slightly compared to the one-step optimization, at a much higher computational cost. We discuss the implications of these findings for animal and human mobility, in particular in relation to the computational effort that informed agents should deploy to solve search problems.
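A toy sketch of the one-step versus two-step planning rule described above may help; the site positions, Pareto-distributed qualities, and the linear distance-quality cost are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pos = rng.uniform(0, 1, size=(n, 2))          # site locations
quality = rng.pareto(2.0, size=n)             # heterogeneous resource values

def cost(frm, to, beta=1.0):
    """Lower is better: distance penalised, quality rewarded (assumed form)."""
    return np.linalg.norm(pos[to] - pos[frm]) - beta * quality[to]

def one_step(current, visited):
    """One-step (tourist) rule: move to the best unvisited site."""
    cands = [i for i in range(n) if i not in visited]
    return min(cands, key=lambda i: cost(current, i))

def two_step(current, visited):
    """Two-step rule: plan restaurant and coffee shop jointly, move to the restaurant."""
    cands = [i for i in range(n) if i not in visited]
    best = min(((r, c) for r in cands for c in cands if c != r),
               key=lambda rc: cost(current, rc[0]) + cost(rc[0], rc[1]))
    return best[0]
```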
Landini, Fernando
2016-12-01
Psychology has great potential for contributing to rural development, particularly through supporting rural extension (RE). In this paper, the types of expectations extensionists have of psychology are identified, as well as possible ways of integrating psychosocial knowledge into the RE context. Rural extensionists from 12 Latin American countries were surveyed (n = 654). Of them, 89.4 % considered that psychology could contribute to rural extension and commented on how this would be possible. Expectations were categorised, and the nine mentioned by more than 20 % of respondents were used to conduct a two-step cluster analysis. Three types of extensionists' expectations were identified: one wherein working with extensionists was highlighted; another characterised by a focus on working with farmers; and a third featuring a traditional, diffusionist extension approach, which views farmers as objects of psychologists' interventions. With the first type, psychologists should not neglect working with farmers, and with the second, with extensionists. With the third type, reflecting on the expectations themselves and their underlying assumptions seems essential.
Thermal Model Development for Ares I-X
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.; DelCorso, Joe
2008-01-01
Thermal analysis for the Ares I-X vehicle has involved extensive thermal model integration, since thermal models of vehicle elements came from several different NASA and industry organizations. Many valuable lessons were learned in terms of model integration and validation. Modeling practices such as submodel, analysis group and symbol naming were standardized to facilitate the later model integration. Upfront coordination of coordinate systems, timelines, units, symbols and case scenarios was very helpful in minimizing integration rework. A process for model integration was developed that included pre-integration runs and basic checks of both models, and a step-by-step process to efficiently integrate one model into another. Extensive use of model logic was used to create scenarios and timelines for avionics and air flow activation. Efficient methods of model restart between case scenarios were developed. Standardization of software version and even compiler version between organizations was found to be essential. An automated method for applying aeroheating to the full integrated vehicle model, including submodels developed by other organizations, was developed.
NASA Astrophysics Data System (ADS)
Yao, Jianzhuang; Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo
2016-02-01
Extensive computational modeling and simulations have been carried out, in the present study, to uncover the fundamental reaction pathway for butyrylcholinesterase (BChE)-catalyzed hydrolysis of ghrelin, demonstrating that the acylation process of BChE-catalyzed hydrolysis of ghrelin follows an unprecedented single-step reaction pathway and the single-step acylation process is rate-determining. The free energy barrier (18.8 kcal/mol) calculated for the rate-determining step is reasonably close to the experimentally-derived free energy barrier (~19.4 kcal/mol), suggesting that the obtained mechanistic insights are reasonable. The single-step reaction pathway for the acylation is remarkably different from the well-known two-step acylation reaction pathway for numerous ester hydrolysis reactions catalyzed by a serine esterase. This is the first time demonstrating that a single-step reaction pathway is possible for an ester hydrolysis reaction catalyzed by a serine esterase and, therefore, one no longer can simply assume that the acylation process must follow the well-known two-step reaction pathway.
ERIC Educational Resources Information Center
Menalled, Fabian D.; Grimberg, Bruna I.; Jones, Clain A.
2009-01-01
This study assessed needs, knowledge, and interests of agricultural professionals who were likely to enroll in an online extension course in sustainable agriculture. The objectives of the study were: to (1) describe their demographic characteristics, (2) identify their concerns and interests related to farming, (3) evaluate participants' knowledge…
Elsman, Ellen B M; Leerlooijer, Joanne N; Ter Beek, Josien; Duijzer, Geerke; Jansen, Sophia C; Hiddink, Gerrit J; Feskens, Edith J M; Haveman-Nies, Annemien
2014-10-27
Although lifestyle interventions have been shown to be effective in reducing the risk for type 2 diabetes mellitus, maintenance of achieved results is difficult, as participants often experience relapse after the intervention has ended. This paper describes the systematic development of a maintenance programme for the extensive SLIMMER intervention, an existing diabetes prevention intervention for high-risk individuals, implemented in a real-life setting in the Netherlands. The maintenance programme was developed using the Intervention Mapping protocol. Programme development was informed by a literature study supplemented by various focus group discussions and feedback from implementers of the extensive SLIMMER intervention. The maintenance programme was designed to sustain a healthy diet and physical activity pattern by targeting knowledge, attitudes, subjective norms and perceived behavioural control of the SLIMMER participants. Practical applications were clustered into nine programme components, including sports clinics at local sports clubs, a concluding meeting with the physiotherapist and dietician, and a return session with the physiotherapist, dietician and physical activity group. Manuals were developed for the implementers and included a detailed timetable and step-by-step instructions on how to implement the maintenance programme. The Intervention Mapping protocol provided a useful framework to systematically plan a maintenance programme for the extensive SLIMMER intervention. The study showed that planning a maintenance programme can build on existing implementation structures of the extensive programme. Future research is needed to determine to what extent the maintenance programme contributes to sustained effects in participants of lifestyle interventions.
NASA Astrophysics Data System (ADS)
Chatare, Vijay K.
This research involved two areas: the development of a novel glycosylation methodology and its scope in oligosaccharide synthesis, and a new scaffold for antibiotic development targeting the bacterial cell wall, namely the total synthesis of albocycline and its analogs to probe its mechanism of action in cell wall biosynthesis. Novel gem-dimethyl analogs of Fraser-Reid's NPGs were developed from 3,3-dimethyl-4-pentenol and 2,2-dimethyl-4-pentenol. These donors are stable toward acidic and basic conditions, which makes them step-efficient compared with other glycosylating agents. The scope and reactivity of the 3,3-dimethyl-4-pentenyl glycosides of glucose, mannose, galactose, and N-acetylglucosamine were studied extensively for oligosaccharide synthesis. The donors are readily prepared from commercial starting materials, and both glycosylation and hydrolysis proceed in synthetically useful yields for oligosaccharide synthesis. The NSMD methodology provided a key step in the albocycline synthesis; (-)-albocycline shows strong biological activity against the "superbug" methicillin-resistant Staphylococcus aureus (MRSA), and we hypothesize that albocycline inhibits the first committed step in bacterial cell wall biosynthesis. Two generations of albocycline syntheses were completed: a vinylogous aldol reaction on the left-hand fragment aldehyde, Davis-Ellman sulfinylimine chemistry to set the C-8 alcohol selectively in the desired (up) configuration, oxidation with the Davis oxaziridine to install the requisite stereochemistry at the C-4 alcohol, and a Horner-Wadsworth-Emmons reaction to access the seco-acid. Finally, a Keck macrolactonization provided the desired (-)-albocycline.
Comparison of crossover and jab step start techniques for base stealing in baseball.
Miyanishi, Tomohisa; Endo, So; Nagahara, Ryu
2017-11-01
Base stealing is an important tactic for increasing the chance of scoring in baseball. This study aimed to compare the crossover step (CS) and jab step (JS) starts for base stealing start performance and to clarify the differences between CS and JS starts in terms of three-dimensional lower extremity joint kinetics. Twelve male baseball players performed CS and JS starts, during which their motion and the force they applied to the ground were simultaneously recorded using a motion-capture system and two force platforms. The results showed that the normalised average forward external power, the average forward-backward force exerted by the left leg, and the forward velocities of the whole body centre of gravity generated by both legs and the left leg were significantly higher for the JS start than for the CS start. Moreover, the positive work done by hip extension during the left leg push-off was two-times greater for the JS start than the CS start. In conclusion, this study has demonstrated that the jab step start may be the better technique for a base stealing start and that greater positive work produced by left hip extension is probably responsible for producing its larger forward ground reaction force.
Corridor N Extension Act of 2010
Rep. Critz, Mark S. [D-PA-12
2010-07-29
House - 07/30/2010: Referred to the Subcommittee on Economic Development, Public Buildings and Emergency Management. Bill status: Introduced.
Crecelius, Anna C; Hölscher, Dirk; Hoffmann, Thomas; Schneider, Bernd; Fischer, Thilo C; Hanke, Magda-Viola; Flachowsky, Henryk; Schwab, Wilfried; Schubert, Ulrich S
2017-05-03
Flavonoids are important metabolites in strawberries (Fragaria × ananassa) because they accomplish an extensive collection of physiological functions and are valuable for human health. However, their localization within the fruit tissue has not been extensively explored. Matrix-assisted laser desorption/ionization mass spectrometric imaging (MALDI-MSI) was employed to shed light on the spatial distribution of flavonoids during fruit development. One wild-type (WT) and two transgenic lines were compared, wherein the transgenic enzymes anthocyanidin reductase (ANRi) and flavonol synthase (FLSi), respectively, were down-regulated using an RNAi-based silencing approach. In most cases, fruit development led to a reduction of the investigated flavonoids in the fruit tissue; as a consequence, they were exclusively present in the skin of mature red fruits. In the case of (epi)catechin dimer, both the ANRi and the WT phenotypes revealed low levels in mature red fruits, whereas the ANRi line bore the lowest relative concentration, as analyzed by liquid chromatography-electrospray ionization multiple-step mass spectrometry (LC-ESI-MSn).
Differential equation models for sharp threshold dynamics.
Schramm, Harrison C; Dimitrov, Nedialko B
2014-01-01
We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
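As an illustration of the detection-triggered switch in dynamics, the following sketch integrates an SIR-like malware model in two phases: a terminal event fires when the infected fraction crosses a detection threshold, after which a competing countermeasure class is seeded and the vector field changes. The parameter values and the exact form of the post-detection terms are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, alpha = 0.5, 0.1, 0.4   # infection, recovery, countermeasure rates (assumed)
I_detect, c0 = 0.05, 0.01            # detection threshold and initial patched fraction

def rhs(t, y, counter_on):
    S, I, C, R = y
    clean = alpha * C if counter_on else 0.0
    return [-beta * S * I - clean * S,          # susceptibles infected or patched
            beta * S * I - gamma * I - clean * I,
            clean * (S + I),                    # countermeasure class competes with infection
            gamma * I]

detect = lambda t, y, counter_on: y[1] - I_detect
detect.terminal, detect.direction = True, 1

# Phase 1: run until the infected fraction hits the detection threshold.
sol1 = solve_ivp(rhs, (0, 200), [0.999, 0.001, 0.0, 0.0],
                 args=(False,), events=detect, max_step=0.1)

# Phase 2: the detection event seeds the competing class and changes the dynamics.
y = sol1.y[:, -1].copy()
y[0] -= c0; y[2] += c0
sol2 = solve_ivp(rhs, (sol1.t[-1], 200), y, args=(True,), max_step=0.1)
```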
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
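A schematic of the two-step construction described above, written out as equations (the notation is ours, not the paper's): step one fits a mixed-effects missingness model and predicts the subject-level random effects; step two fits the outcome model with the predicted effects as covariates to absorb heterogeneity in the missingness propensity.

```latex
% Step 1: missingness model with subject-level random effect b_i; predict \hat{b}_i.
\operatorname{logit} \Pr(R_{ij} = 1 \mid b_i)
   = \mathbf{x}_{ij}^{\top}\boldsymbol{\alpha} + b_i ,
   \qquad b_i \sim N(0, \sigma_b^2)

% Step 2: outcome model adjusted for the predicted random effect.
g\bigl(\mathrm{E}[Y_{ij} \mid u_i, \hat{b}_i]\bigr)
   = \mathbf{z}_{ij}^{\top}\boldsymbol{\beta} + \gamma\, \hat{b}_i + u_i ,
   \qquad u_i \sim N(0, \sigma_u^2)
```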
Analysis, design, fabrication, and performance of three-dimensional braided composites
NASA Astrophysics Data System (ADS)
Kostar, Timothy D.
1998-11-01
Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.
On the development of a methodology for extensive in-situ and continuous atmospheric CO2 monitoring
NASA Astrophysics Data System (ADS)
Wang, K.; Chang, S.; Jhang, T.
2010-12-01
Carbon dioxide is recognized as the dominant greenhouse gas contributing to anthropogenic global warming. Stringent controls on carbon dioxide emissions are viewed as necessary steps in controlling atmospheric carbon dioxide concentrations. From the viewpoint of policy making, regulation of carbon dioxide emissions and its monitoring are keys to the success of stringent emission controls. In particular, extensive atmospheric CO2 monitoring is a crucial step in ensuring that CO2 emission control strategies are closely followed. In this work we develop a methodology that enables reliable and accurate in-situ and continuous atmospheric CO2 monitoring for policy making. The methodology comprises the use of a gas filter correlation (GFC) instrument for in-situ CO2 monitoring, the use of CO2 working standards accompanying the continuous measurements, and the use of NOAA WMO CO2 standard gases for calibrating the working standards. The use of GFC instruments enables a 1-second data sampling frequency, with the interference of water vapor removed by an added dryer. The CO2 measurements are conducted in the following timed and cycled manner: a zero CO2 measurement, measurements of two standard CO2 gases, and ambient air measurements. The standard CO2 gases are calibrated against NOAA WMO CO2 standards. The methodology has been used for indoor CO2 measurements in a commercial office (about 120 people working inside), for ambient CO2 measurements, and on a fleet of in-service commercial cargo ships for monitoring CO2 over the global marine boundary layer. These measurements demonstrate that our method is reliable, accurate, and traceable to NOAA WMO CO2 standards. The portability of the instrument and the working standards makes the method readily applicable to large-scale and extensive CO2 measurements.
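The timed cycle lends itself to a simple traceable calibration: the zero gas and the two working standards measured in each cycle define a straight-line response, which is then applied to the ambient readings. The sketch below assumes a linear instrument response and uses made-up numbers; it is not the authors' processing code.

```python
import numpy as np

def calibrate_cycle(raw_zero, raw_std, std_ppm, raw_ambient):
    """Map raw analyser readings to CO2 concentration using the zero gas and
    the two working standards measured in the same timed cycle."""
    x = np.array([raw_zero, raw_std[0], raw_std[1]])
    y = np.array([0.0, std_ppm[0], std_ppm[1]])
    slope, intercept = np.polyfit(x, y, 1)      # least-squares straight line
    return slope * np.asarray(raw_ambient) + intercept

# Example cycle: zero gas, working standards at 380 and 420 ppm, then ambient air.
ambient_ppm = calibrate_cycle(0.02, (0.95, 1.05), (380.0, 420.0), [0.99, 1.00, 1.01])
```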
Continuum modeling of three-dimensional truss-like space structures
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Hefzy, M. S.
1978-01-01
A mathematical and computational analysis capability has been developed for calculating the effective mechanical properties of three-dimensional periodic truss-like structures. Two models are studied in detail. The first, called the octetruss model, is a three-dimensional extension of a two-dimensional model, and the second is a cubic model. Symmetry considerations are employed as a first step to show that the specific octetruss model has four independent constants and that the cubic model has two. The actual values of these constants are determined by averaging the contributions of each rod element to the overall structure stiffness. The individual rod member contribution to the overall stiffness is obtained by a three-dimensional coordinate transformation. The analysis shows that the effective three-dimensional elastic properties of both models are relatively close to each other.
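One common averaging form consistent with the procedure described above (summing each rod's axial stiffness contribution after a coordinate transformation) is the following; the notation is assumed, and the paper's exact expressions may differ.

```latex
% Effective stiffness of a representative cell of volume V: each rod m with
% modulus E_m, cross-section A_m, length \ell_m and unit direction n^{(m)}
% contributes its transformed axial stiffness.
C_{ijkl} \;=\; \frac{1}{V}\sum_{m} E_m A_m \ell_m \,
               n^{(m)}_i\, n^{(m)}_j\, n^{(m)}_k\, n^{(m)}_l
```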
The effects of age and step length on joint kinematics and kinetics of large out-and-back steps.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2008-06-01
Maximum step length (MSL) is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining maximum step length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Maximum step length was 40% greater in the younger than in the older women (P<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09-0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step liftoff (P=0.03). Maximum step length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall.
The effects of age and step length on joint kinematics and kinetics of large out-and-back steps
Schulz, Brian W.; Ashton-Miller, James A.; Alexander, Neil B.
2008-01-01
Background Maximum Step Length is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Methods Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining Maximum Step Length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Findings Maximum Step Length was 40% greater in the younger than in the older women (p<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09–0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step lift off (p=0.03). Interpretation Maximum Step Length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length, may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall. PMID:18308435
A new mathematical model of bacterial interactions in two-species oral biofilms
Martin, Bénédicte; Tamanai-Shacoori, Zohreh; Bronsard, Julie; Ginguené, Franck; Meuric, Vincent
2017-01-01
Periodontitis is a bacterial inflammatory disease in which the bacterial biofilms present on the tooth-supporting tissues switch from a healthy state towards a pathogenic state. Among the bacterial species involved in the disease, Porphyromonas gingivalis has been shown to induce dysbiosis and to induce virulence in otherwise healthy bacteria such as Streptococcus gordonii. During biofilm development, primary colonizers such as S. gordonii first attach to the surface and allow the subsequent adhesion of periodontal pathogens such as P. gingivalis. Interactions between these two bacteria have been extensively studied during the adhesion step of the biofilm. The aim of this study was to understand the interactions of both species during the growing phase of the biofilm, for which little knowledge is available, using a mathematical model. This two-species biofilm model was based on substrate-dependent growth, implemented with damage parameters, and validated against data obtained on experimental biofilms. Three different hypotheses of interaction were proposed and assayed using this model: independence, competition between the two species, or induction of toxicity by one species towards the other. Agreement between experimental and simulated biofilms was found with the last hypothesis. This new mathematical model of two-species bacterial biofilms, with growth dependent on different substrates, can be applied to any bacterial species, environmental conditions, or step of biofilm development. It will be of great interest for exploring bacterial interactions in biofilm conditions. PMID:28253369
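A minimal sketch of the kind of substrate-dependent two-species model with a damage term (the third hypothesis above) might look as follows; the rate constants, yields and the exact form of the toxicity coupling are illustrative assumptions, not the fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameter values for illustration only.
mu1, mu2 = 0.8, 0.6        # maximum growth rates (species 1, species 2)
K1, K2 = 0.5, 0.3          # half-saturation constants (Monod kinetics)
Y1, Y2 = 0.5, 0.4          # yields on the respective substrates
k_tox = 0.2                # toxicity exerted by species 2 on species 1 (hypothesis 3)

def biofilm(t, y):
    B1, B2, S1, S2 = y
    g1 = mu1 * S1 / (K1 + S1)
    g2 = mu2 * S2 / (K2 + S2)
    return [g1 * B1 - k_tox * B1 * B2,   # growth of species 1 minus induced damage
            g2 * B2,                     # growth of species 2
            -g1 * B1 / Y1,               # consumption of substrate 1
            -g2 * B2 / Y2]               # consumption of substrate 2

sol = solve_ivp(biofilm, (0.0, 48.0), [0.01, 0.01, 5.0, 5.0], max_step=0.1)
```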
Strong first order EWPT & strong gravitational waves in Z3-symmetric singlet scalar extension
NASA Astrophysics Data System (ADS)
Kang, Zhaofeng; Ko, P.; Matsui, Toshinori
2018-02-01
The nature of the electroweak (EW) phase transition (PT) is of great importance. It may give a clue to the origin of the baryon asymmetry if the EWPT is strong first order. Although it is a crossover within the standard model (SM), a great many extensions of the SM are capable of altering its nature. Thus, gravitational waves (GW), which are expected relics of a strong first order PT, are a good complementary probe of new physics beyond the SM (BSM). In this paper we elaborate the patterns of strong first order EWPT in the next-to-simplest extension of the SM Higgs sector, obtained by introducing a Z3-symmetric singlet scalar. We find that, in the Z3-symmetric limit, the tree level barrier can lead to a strong first order EWPT via either a three- or a two-step PT. Moreover, these could produce two sources of GW, although the first-step strong first order PT is undetectable for near-future GW experiments. The other source, with significant supercooling giving rise to α ∼ O(0.1), can be almost wholly covered by future space-based GW interferometers such as eLISA, DECIGO and BBO.
Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.
Galinski, Daniel; Sapin, Julien; Dehez, Bruno
2013-06-01
This paper presents the optimal design of an alignment-free exoskeleton for rehabilitation of the shoulder complex. The robot structure consists of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution that outperforms an existing solution in aspects such as kinematics and ergonomics while being simpler.
Simulation of solar array slewing of Indian remote sensing satellite
NASA Astrophysics Data System (ADS)
Maharana, P. K.; Goel, P. S.
The effect of flexible arrays on sun tracking for the IRS satellite is studied. Equations of motion of satellites carrying a rotating flexible appendage are developed following the Newton-Euler approach and utilizing the constrained modes of the appendage. The drive torque, detent torque and friction torque in the SADA are included in the model. Extensive simulations of the slewing motion are carried out. The phenomena of back-stepping, step-missing, step-slipping and the influences of array flexibility in the acquisition mode are observed for certain combinations of parameters.
Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone
Qian, Jiuchao; Pei, Ling; Ma, Jiabin; Ying, Rendong; Liu, Peilin
2015-01-01
The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector graph assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts. PMID:25738763
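Two of the building blocks described above can be sketched compactly. The step-length estimate below uses Weinberg's empirical vertical-acceleration model as a stand-in for the paper's motion-recognition-based estimator, and the heading offset uses PCA of horizontal acceleration; the constants and function names are assumptions for illustration.

```python
import numpy as np

def step_length_weinberg(acc_vertical, k=0.5):
    """Step length from the vertical-acceleration range within one detected step
    (Weinberg's empirical model, used here in place of the adaptive estimator)."""
    return k * (acc_vertical.max() - acc_vertical.min()) ** 0.25

def heading_offset_pca(acc_horizontal):
    """Dominant horizontal acceleration direction via PCA, used to estimate the
    offset between device orientation and the pedestrian's walking direction."""
    centred = acc_horizontal - acc_horizontal.mean(axis=0)   # (n_samples, 2)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    principal = vt[0]                      # first principal axis (sign-ambiguous)
    return np.arctan2(principal[1], principal[0])
```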
A graph-based approach for designing extensible pipelines
2012-01-01
Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems when a pipeline is expanding because the incorporation of new tools is often error prone and time consuming. Current approaches to pipeline development such as workflow management systems focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism on the pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where the use of multiple software is necessary to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
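The core idea, formats as graph nodes, tools as edges, and pipelines as paths composed on demand, can be illustrated in a few lines; the tool names and formats below are hypothetical, not the project's actual components.

```python
from collections import deque

# Edges are tools; nodes are their input/output formats (toy example).
tools = {
    ("vcf", "ped"): "vcf2ped",
    ("ped", "bed"): "ped2bed",
    ("vcf", "hapmap"): "vcf2hapmap",
    ("hapmap", "bed"): "hapmap2bed",
}

def compose_pipeline(src, dst):
    """Breadth-first search over the format graph: the returned list of tools,
    applied in order, converts src into dst (shortest such chain)."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, chain = queue.popleft()
        if fmt == dst:
            return chain
        for (a, b), tool in tools.items():
            if a == fmt and b not in seen:
                seen.add(b)
                queue.append((b, chain + [tool]))
    return None

print(compose_pipeline("vcf", "bed"))   # e.g. ['vcf2ped', 'ped2bed']
```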
Emissions from Open Burning of Simulated Military Waste from Forward Operating Bases
Emissions from open burning of simulated military waste from forward operating bases (FOBs) were extensively characterized as an initial step in assessing potential inhalation exposure of FOB personnel and future disposal alternatives. Emissions from two different burning scenar...
Hwang Fu, Yu-Hsien; Huang, William Y C; Shen, Kuang; Groves, Jay T; Miller, Thomas; Shan, Shu-Ou
2017-07-28
The signal recognition particle (SRP) delivers ~30% of the proteome to the eukaryotic endoplasmic reticulum or the bacterial plasma membrane. The precise mechanism by which the bacterial SRP receptor, FtsY, interacts with and is regulated at the target membrane remains unclear. Here, quantitative analysis of FtsY-lipid interactions at single-molecule resolution revealed a two-step mechanism in which FtsY initially contacts the membrane via a Dynamic mode, followed by an SRP-induced conformational transition to a Stable mode that activates FtsY for downstream steps. Importantly, mutational analyses revealed extensive auto-inhibitory mechanisms that prevent free FtsY from engaging the membrane in the Stable mode; an engineered FtsY pre-organized into the Stable mode led to indiscriminate targeting in vitro and disrupted FtsY function in vivo. Our results show that the two-step lipid-binding mechanism uncouples the membrane association of FtsY from its conformational activation, thus optimizing the balance between the efficiency and fidelity of co-translational protein targeting.
Wierenga, Debbie; Engbers, Luuk H; van Empelen, Pepijn; Hildebrandt, Vincent H; van Mechelen, Willem
2012-08-07
Worksite health promotion programs (WHPPs) offer an attractive opportunity to improve the lifestyle of employees. Nevertheless, broad-scale and successful implementation of WHPPs in daily practice often fails. In the present study, called BRAVO@Work, a 7-step implementation strategy was used to develop, implement and embed a WHPP in two different worksites with a focus on multiple lifestyle interventions. This article describes the design and framework for the formative evaluation of this 7-step strategy under real-time conditions by an embedded scientist, with the purpose of gaining insight into whether the 7-step strategy is a useful and effective implementation strategy. Furthermore, we aim to gain insight into factors that either facilitate or hamper the implementation process, the quality of the implemented lifestyle interventions, and the degree of adoption, implementation and continuation of these interventions. This study is a formative evaluation within two different worksites with an embedded scientist on site to continuously monitor the implementation process. Each worksite (i.e. a University of Applied Sciences and an Academic Hospital) will assign a participating faculty or department to implement a WHPP focusing on lifestyle interventions using the 7-step strategy. The primary focus will be to describe the natural course of development, implementation and maintenance of a WHPP by studying [a] the use of and adherence to the 7-step strategy, [b] barriers and facilitators that influence the natural course of adoption, implementation and maintenance, and [c] the implementation process of the lifestyle interventions. All data will be collected using qualitative (i.e. real-time monitoring and semi-structured interviews) and quantitative methods (i.e. process evaluation questionnaires), applying data triangulation. Except for the real-time monitoring, data collection will take place at baseline and after 6, 12 and 18 months. This is one of the few studies to extensively and continuously monitor the natural course of the implementation process of a WHPP through a formative evaluation using a mix of quantitative and qualitative methods on different organizational levels (i.e. management, project group, employees), with an embedded scientist on site. Trial registration: NTR2861.
Lessons Learned Developing an Extension-Based Training Program for Farm Labor Supervisors
ERIC Educational Resources Information Center
Roka, Fritz M.; Thissen, Carlene A.; Monaghan, Paul F.; Morera, Maria C.; Galindo-Gonzalez, Sebastian; Tovar-Aguilar, Jose Antonio
2017-01-01
This article outlines a four-step model for developing a training program for farm labor supervisors. The model draws on key lessons learned during the development of the University of Florida Institute of Food and Agricultural Sciences Farm Labor Supervisor Training program. The program is designed to educate farm supervisors on farm labor laws…
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
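The relationship between the standard coined walk and the two-step coined model mentioned above is easy to state numerically: build the one-step evolution operator U = S(C ⊗ I) and square it. The sketch below does this for a Hadamard walk on a small cycle; it is an illustration only, not the paper's general multigraph construction.

```python
import numpy as np

N = 8                                        # cycle with N sites
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard coin

# Moving shift on the cycle: coin 0 steps right, coin 1 steps left.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[0 * N + (x + 1) % N, 0 * N + x] = 1.0
    S[1 * N + (x - 1) % N, 1 * N + x] = 1.0

U = S @ np.kron(H, np.eye(N))                # one step of the coined walk
U2 = U @ U                                   # evolution operator of the two-step coined model

assert np.allclose(U2 @ U2.conj().T, np.eye(2 * N))   # U squared is still unitary
```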
Global reaction mechanism for the auto-ignition of full boiling range gasoline and kerosene fuels
NASA Astrophysics Data System (ADS)
Vandersickel, A.; Wright, Y. M.; Boulouchos, K.
2013-12-01
Compact reaction schemes capable of predicting auto-ignition are a prerequisite for the development of strategies to control and optimise homogeneous charge compression ignition (HCCI) engines. In particular, a tremendous demand exists in the engine development community for schemes covering full boiling range fuels exhibiting two-stage ignition. The present paper therefore meticulously assesses a previous 7-step reaction scheme developed to predict auto-ignition for four hydrocarbon blends and proposes an important extension of the model constant optimisation procedure, allowing the model to capture not only ignition delays but also the evolutions of representative intermediates and heat release rates for a variety of full boiling range fuels. Additionally, an extensive validation of the latter evolutions by means of various detailed n-heptane reaction mechanisms from the literature is presented, both for perfectly homogeneous and for non-premixed/stratified HCCI conditions. Finally, the model's potential to simulate the auto-ignition of various full boiling range fuels is demonstrated by means of experimental shock tube data for six strongly differing fuels, containing, e.g., up to 46.7% cyclo-alkanes, 20% naphthalenes or complex branched aromatics such as methyl- or ethyl-naphthalene. The good predictive capability observed for each of the validation cases, as well as the successful parameterisation for each of the six fuels, indicates that the model could, in principle, be applied to any hydrocarbon fuel, provided suitable adjustments to the model parameters are carried out. Combined with the optimisation strategy presented, the model therefore constitutes a major step towards the inclusion of real fuel kinetics into full scale HCCI engine simulations.
Reassembling biological machinery in vitro.
Hess, Henry
2009-09-25
Inspired by the specialized glycolytic system of flagella of mammalian sperm, Mukai et al. (2009) describe the controlled immobilization of two enzymes constituting the first steps in the glycolytic pathway. Extension of this work may provide "power converters" for bionanodevices, which transduce chemical energy from glucose to ATP.
Crystallization and preliminary X-ray analysis of membrane-bound pyrophosphatases.
Kellosalo, Juho; Kajander, Tommi; Honkanen, Riina; Goldman, Adrian
2013-02-01
Membrane-bound pyrophosphatases (M-PPases) are enzymes that enhance the survival of plants, protozoans and prokaryotes under energy-constraining stress conditions. These proteins use pyrophosphate, a waste product of cellular metabolism, as an energy source for sodium or proton pumping. To study the structure and function of these enzymes we have crystallized two membrane-bound pyrophosphatases recombinantly produced in Saccharomyces cerevisiae: the sodium-pumping enzyme of Thermotoga maritima (TmPPase) and the proton-pumping enzyme of Pyrobaculum aerophilum (PaPPase). Extensive crystal optimization has allowed us to grow crystals of TmPPase that diffract to a resolution of 2.6 Å. The decisive step in this optimization was in-column detergent exchange during the two-step purification procedure. Dodecyl maltoside was used for high-temperature solubilization of TmPPase and then exchanged to a series of different detergents. After extensive screening, the new detergent octyl glucose neopentyl glycol was found to be optimal for TmPPase but not for PaPPase.
Transfrontal orbitotomy in the dog: an adaptable three-step approach to the orbit.
Håkansson, Nils Wallin; Håkansson, Berit Wallin
2010-11-01
To describe an adaptable and extensive method for orbitotomy in the dog. An adaptable three-step technique for orbitotomy was developed and applied in nine consecutive cases. The steps are zygomatic arch resection laterally, temporalis muscle elevation medially and zygomatic process osteotomy anteriorly-dorsally. The entire orbit is accessed with excellent exposure and room for surgical manipulation. Facial nerve, lacrimal nerve and lacrimal gland function are preserved. The procedure can easily be converted into an orbital exenteration. Exposure of the orbit was excellent in all cases and anatomically correct closure was achieved. Signs of postoperative discomfort were limited, with moderate, reversible swelling in two cases and mild in seven. Wound infection or emphysema did not occur, nor did any other complication attributable to the operative procedure. Blinking ability and lacrimal function were preserved over follow-up times ranging from 1 to 4 years. Transfrontal orbitotomy in the dog offers excellent exposure and room for manipulation. Anatomically correct closure is easily accomplished, postoperative discomfort is limited and complications are mild and temporary. © 2010 American College of Veterinary Ophthalmologists.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
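As a small illustration of the implicit midpoint (IM) scheme discussed above, applied to a divergence-free field, the sketch below traces field lines of an ABC flow using fixed-point iteration for the implicit step; the field, step size and iteration count are illustrative choices, not the paper's setup, and no adaptive time stepping is attempted here.

```python
import numpy as np

A, B, C = 1.0, 0.7, 0.43                     # ABC flow: a classic divergence-free field

def field(r):
    x, y, z = r
    return np.array([A * np.sin(z) + C * np.cos(y),
                     B * np.sin(x) + A * np.cos(z),
                     C * np.sin(y) + B * np.cos(x)])

def implicit_midpoint_step(r, h, iters=20):
    """One implicit-midpoint step r_{n+1} = r_n + h * f((r_n + r_{n+1}) / 2),
    solved by fixed-point iteration (adequate for small h)."""
    r_new = r + h * field(r)                 # explicit Euler as the initial guess
    for _ in range(iters):
        r_new = r + h * field(0.5 * (r + r_new))
    return r_new

r = np.array([0.1, 0.2, 0.3])
trajectory = [r]
for _ in range(1000):
    r = implicit_midpoint_step(r, 0.05)
    trajectory.append(r)
```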
NASA Astrophysics Data System (ADS)
Cai, Xiaofeng; Guo, Wei; Qiu, Jing-Mei
2018-02-01
In this paper, we develop a high order semi-Lagrangian (SL) discontinuous Galerkin (DG) method for nonlinear Vlasov-Poisson (VP) simulations without operator splitting. In particular, we combine two recently developed novel techniques: one is the high order non-splitting SLDG transport method (Cai et al. (2017) [4]), and the other is the high order characteristics tracing technique proposed in Qiu and Russo (2017) [29]. The proposed method, with up to third order accuracy in both space and time, is locally mass conservative, free of splitting error, positivity-preserving, and stable and robust for large time-stepping sizes. The SLDG VP solver is applied to classic benchmark test problems such as Landau damping and two-stream instabilities for VP simulations. The efficiency and effectiveness of the proposed scheme are extensively tested. Tremendous CPU savings are shown by comparisons between the proposed SLDG scheme and the classical Runge-Kutta DG method.
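The essence of a semi-Lagrangian update, tracing characteristics backward and reconstructing the solution at their feet, can be shown in one dimension with linear interpolation; the paper's SLDG method instead uses high-order DG reconstruction and handles the nonlinear VP system, so the sketch below is only a conceptual illustration.

```python
import numpy as np

def semi_lagrangian_step(f, velocity, dx, dt):
    """One semi-Lagrangian step for f_t + velocity * f_x = 0 on a periodic grid:
    trace each node back along the characteristic and interpolate (linear here;
    the SLDG method uses high-order DG reconstruction instead)."""
    n = f.size
    x = np.arange(n) * dx
    x_foot = (x - velocity * dt) % (n * dx)   # characteristic feet
    idx = np.floor(x_foot / dx).astype(int)
    w = x_foot / dx - idx
    return (1 - w) * f[idx % n] + w * f[(idx + 1) % n]

# Example: advect a Gaussian bump one full revolution around the periodic domain.
n, dx, dt = 200, 1.0 / 200, 0.002
f = np.exp(-200 * (np.arange(n) * dx - 0.5) ** 2)
for _ in range(int(1.0 / dt)):
    f = semi_lagrangian_step(f, velocity=1.0, dx=dx, dt=dt)
```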
ERIC Educational Resources Information Center
Morris, Michael L., Ed.
This manual presents a simple, step-by-step description of irrigated rice production in Sierra Leone. It is geared specifically to the role and needs of Peace Corps volunteers who, since the mid-1970s, have worked as agricultural extension agents in the Sierra Leone Ministry of Agriculture and Forestry. The manual is designed to serve both as a…
Developing tools to sustain biological diversity.
Randy Molina
2004-01-01
The Biodiversity Initiative strives to provide innovative solutions to the complex problem of managing forests for biodiversity. Although this initiative is in its beginning stages, an initial scoping meeting has already taken place and planning for the next steps is underway. The initiative is developing plans to conduct extensive scoping efforts in the management,...
The Tribolium castaneum Ortholog of Sex combs reduced Controls Dorsal Ridge Development
Shippy, Teresa D.; Rogers, Carmelle D.; Beeman, Richard W.; Brown, Susan J.; Denell, Robin E.
2006-01-01
In insects, the boundary between the embryonic head and thorax is formed by the dorsal ridge, a fused structure composed of portions of the maxillary and labial segments. However, the mechanisms that promote development of this unusual structure remain a mystery. In Drosophila, mutations in the Hox genes Sex combs reduced and Deformed have been reported to cause abnormal dorsal ridge formation, but the significance of these abnormalities is not clear. We have identified three mutant allele classes of Cephalothorax, the Tribolium castaneum (red flour beetle) ortholog of Sex combs reduced, each of which has a different effect on dorsal ridge development. By using Engrailed expression to monitor dorsal ridge development in these mutants, we demonstrate that Cephalothorax promotes the fusion and subsequent dorsolateral extension of the maxillary and labial Engrailed stripes (posterior compartments) during dorsal ridge formation. Molecular and genetic analysis of these alleles indicates that the N terminus of Cephalothorax is important for the fusion step, but is dispensable for Engrailed stripe extension. Thus, we find that specific regions of Cephalothorax are required for discrete steps in dorsal ridge formation. PMID:16849608
The development and plasticity of alveolar type 1 cells
Yang, Jun; Hernandez, Belinda J.; Martinez Alanis, Denise; Narvaez del Pilar, Odemaris; Vila-Ellis, Lisandra; Akiyama, Haruhiko; Evans, Scott E.; Ostrin, Edwin J.; Chen, Jichao
2016-01-01
Alveolar type 1 (AT1) cells cover >95% of the gas exchange surface and are extremely thin to facilitate passive gas diffusion. The development of these highly specialized cells and its coordination with the formation of the honeycomb-like alveolar structure are poorly understood. Using new marker-based stereology and single-cell imaging methods, we show that AT1 cells in the mouse lung form expansive thin cellular extensions via a non-proliferative two-step process while retaining cellular plasticity. In the flattening step, AT1 cells undergo molecular specification and remodel cell junctions while remaining connected to their epithelial neighbors. In the folding step, AT1 cells increase in size by more than 10-fold and undergo cellular morphogenesis that matches capillary and secondary septa formation, resulting in a single AT1 cell spanning multiple alveoli. Furthermore, AT1 cells are an unexpected source of VEGFA and their normal development is required for alveolar angiogenesis. Notably, a majority of AT1 cells proliferate upon ectopic SOX2 expression and undergo stage-dependent cell fate reprogramming. These results provide evidence that AT1 cells have both structural and signaling roles in alveolar maturation and can exit their terminally differentiated non-proliferative state. Our findings suggest that AT1 cells might be a new target in the pathogenesis and treatment of lung diseases associated with premature birth. PMID:26586225
NASA Astrophysics Data System (ADS)
Sakran, Shawky; Said, Said Mohamed
2018-02-01
Detailed surface geological mapping and subsurface seismic interpretation have been integrated to unravel the structural style and kinematic history of the Nubian Fault System (NFS). The NFS consists of several E-W Principal Deformation Zones (PDZs) (e.g. Kalabsha fault). Each PDZ is defined by spectacular E-W, WNW and ENE dextral strike-slip faults, NNE sinistral strike-slip faults, NE to ENE folds, and NNW normal faults. Each fault zone has typical self-similar strike-slip architecture comprising multi-scale fault segments. Several multi-scale uplifts and basins were developed at the step-over zones between parallel strike-slip fault segments as a result of local extension or contraction. The NNE faults consist of right-stepping sinistral strike-slip fault segments (e.g. Sin El Kiddab fault). The NNE sinistral faults extend for long distances ranging from 30 to 100 kms and cut one or two E-W PDZs. Two nearly perpendicular strike-slip tectonic regimes are recognized in the NFS; an inactive E-W Late Cretaceous - Early Cenozoic dextral transpression and an active NNE sinistral shear.
Extensor indicis proprius tendon transfer using shear wave elastography.
Lamouille, J; Müller, C; Aubry, S; Bensamoun, S; Raffoul, W; Durand, S
2017-06-01
The means for judging optimal tension during tendon transfers are approximate and not very quantifiable. The purpose of this study was to demonstrate the feasibility of quantitatively assessing muscular mechanical properties intraoperatively using ultrasound elastography (shear wave elastography [SWE]) during extensor indicis proprius (EIP) transfer. We report two cases of EIP transfer for post-traumatic rupture of the extensor pollicis longus muscle. Ultrasound acquisitions measured the elasticity modulus of the EIP muscle at different stages: rest, active extension, active extension against resistance, EIP section, distal passive traction of the tendon, after tendon transfer at rest and then during active extension. A preliminary analysis was conducted of the distribution of values for this modulus at the various transfer steps. Different shear wave velocity and elasticity modulus values were observed at the various transfer steps. The tension applied during the transfer seemed close to the resting tension if a traditional protocol were followed. The elasticity modulus varied by a factor of 37 between the active extension against resistance step (565.1 kPa) and after the tendon section (15.3 kPa). The elasticity modulus values were distributed in the same way for each patient. The therapeutic benefit of SWE elastography was studied for the first time in tendon transfers. Quantitative data on the elasticity modulus during this test may make it an effective means of improving intraoperative adjustments. Copyright © 2017 SFCM. Published by Elsevier Masson SAS. All rights reserved.
New tectonic data constrain the mechanisms of breakup along the Gulf of California
NASA Astrophysics Data System (ADS)
Bot, Anna; Geoffroy, Laurent; Authemayou, Christine; Graindorge, David
2014-05-01
The Gulf of California results from an oblique-rift system produced by the separation of the Pacific and North American plates along a ~N110E to ~N125E trend. The age, nature and orientation of the strain that ended with continental break-up and incipient oceanization at ~3.6 Ma remain poorly understood. It is generally proposed that early stages of extension began at around 12 Ma with strain partitioning into two components: a pure ENE-directed extension in the Gulf Extensional Province (which includes Sonora and the eastern Baja California Peninsula in Mexico) and a dextral strike-slip displacement west of the Baja California Peninsula along the San Benito and Tosco-Abreojos faults. This evolution would have lasted ~5-6 Ma, until a new transtensional strain regime took place. This regime, with extension trending ~N110E +/-10°, led to the final break-up and the subsequent individualization of a transform-fault system and subordinate short oceanic ridges. This two-step interpretation has recently been challenged by authors suggesting continuous transtensional extension since 12 Ma in the trend of the PAC-NAM plate kinematics. We question both of these models in terms of timing and mode of accommodation, based on field investigations in Baja California Sur (Mexico). The volcano-sedimentary formations of the Comondù group, dated at 25 to 20 Ma, exhibit clear examples of syn-sedimentary and syn-magmatic extensional deformation. This extension, oriented N65°E +/-15°, is proposed to have initiated during the subduction of the Magdalena Plate and would be related to the initialization of the GOC. In addition to this finding, we present tectonic and dating evidence of complex detachment-faulting tectonics, varying in trend and kinematics through time and space, in southern Baja California Sur. The extension associated with the early detachment-fault system trended ~N110E. From ~17 Ma to, probably, ~7-8 Ma, this extension controlled the early development of the San Jose del Cabo basin and the coeval footwall exhumation of large Cretaceous basement blocks (such as the Sierra Laguna). This detachment tectonics is overprinted by a more recent detachment-type tectonic evolution, localized alongshore the GOC, with coeval development of Pliocene basins. At this stage, extension was trending N75E +/-10°, i.e. close to GOC-normal. We discuss the geodynamical interpretation of these new results in terms of the forces driving the obliquity of rifts.
Ogawa, Hiroyasu; Hatano, Sonoko; Sugiura, Nobuo; Nagai, Naoko; Sato, Takashi; Shimizu, Katsuji; Kimata, Koji; Narimatsu, Hisashi; Watanabe, Hideto
2012-01-01
Chondroitin sulfate (CS) is a linear polysaccharide consisting of repeating disaccharide units of N-acetyl-D-galactosamine and D-glucuronic acid residues, modified with sulfated residues at various positions. Based on its structural diversity in chain length and sulfation patterns, CS provides specific biological functions in cell adhesion, morphogenesis, neural network formation, and cell division. To date, six glycosyltransferases are known to be involved in the biosynthesis of chondroitin saccharide chains, and a hetero-oligomer complex of chondroitin sulfate synthase-1 (CSS1)/chondroitin synthase-1 and chondroitin sulfate synthase-2 (CSS2)/chondroitin polymerizing factor is known to have the strongest polymerizing activity. Here, we generated and analyzed CSS2(-/-) mice. Although they were viable and fertile, exhibiting no overt morphological abnormalities or osteoarthritis, their cartilage contained CS chains that were shorter but present in numbers similar to the wild type. Further analysis using CSS2(-/-) chondrocyte culture systems, together with siRNA against CSS1, revealed the presence of two CS chain species differing in length, suggesting two steps of CS chain polymerization: elongation from the linkage region up to Mr ∼10,000, and further extension. CSS2 mainly participated in the extension, whereas CSS1 participated in both the extension and the initiation. Our study demonstrates the distinct functions of CSS1 and CSS2, providing a clue to the mechanism of CS biosynthesis.
Statistical models for detecting differential chromatin interactions mediated by a protein.
Niu, Liang; Li, Guoliang; Lin, Shili
2014-01-01
Chromatin interactions mediated by a protein of interest are of great scientific interest. Recent studies show that protein-mediated chromatin interactions can have different intensities in different types of cells or in different developmental stages of a cell. Such differences can be associated with a disease or with the development of a cell. Thus, it is of great importance to detect protein-mediated chromatin interactions with different intensities in different cells. A recent molecular technique, Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET), which uses formaldehyde cross-linking and paired-end sequencing, is able to detect genome-wide chromatin interactions mediated by a protein of interest. Here we propose two models (a One-Step Model and a Two-Step Model) for two-sample ChIA-PET count data (one biological replicate in each sample) to identify differential chromatin interactions mediated by a protein of interest. Both models incorporate the data dependency and the extent to which a fragment pair is related to a pair of DNA loci of interest in order to make accurate identifications. The One-Step Model makes use of the data more efficiently but is more computationally intensive. An extensive simulation study showed that the models can detect differentially interacting chromatin regions, with good agreement between each classification result and the truth. Application of the method to a two-sample ChIA-PET data set illustrates its utility. The two models are implemented as an R package MDM (available at http://www.stat.osu.edu/~statgen/SOFTWARE/MDM).
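As an illustration of the kind of question these models address (this toy calculation is not the One-Step or Two-Step Model of the MDM package), a single interaction with counts x1 and x2 in two samples of library sizes n1 and n2 can be screened for differential intensity with a conditional binomial test; all numbers below are invented:

    from scipy.stats import binomtest

    # hypothetical counts for one chromatin interaction in two ChIA-PET samples
    x1, x2 = 45, 12
    # hypothetical usable PET totals (library sizes) of the two samples
    n1, n2 = 2_000_000, 1_800_000

    # Under "no differential interaction", x1 conditioned on x1 + x2 is
    # Binomial(x1 + x2, n1 / (n1 + n2)); a small p-value flags a difference.
    p_null = n1 / (n1 + n2)
    print(binomtest(x1, x1 + x2, p_null).pvalue)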
Zhou, Wei; Shan, Jinjun; Meng, Minxin
2018-08-17
The Fructus Gardeniae-Fructus Forsythiae herb pair is an herbal formula used extensively to treat inflammation and fever, but few systematic identification studies of its bioactive components have been reported. Herein, unknown analogues were rapidly identified in a first-step screening from representative compounds of different structure types (geniposide as iridoid type, crocetin as crocetin type, jasminoside B as monocyclic monoterpene type, oleanolic acid as saponin type, 3-caffeoylquinic acid as organic acid type, forsythoside A as phenylethanoid type, phillyrin as lignan type and quercetin 3-rutinoside as flavonoid type) by UPLC-Q-Tof/MS combined with mass defect filtering (MDF), and further confirmed with reference standards and the published literature. Similarly, in the second step, other unknown components were rapidly discovered by MDF from the compounds identified in the first step. Using this two-step screening method, a total of 58 components were characterized in Fructus Gardeniae-Fructus Forsythiae (FG-FF) decoction. In rat blood, 36 compounds from the extract and 16 metabolites were unambiguously or tentatively identified. We also found that the principal metabolites were glucuronide conjugates, with the glucuronide conjugates of caffeic acid, quercetin and kaempferol confirmed against reference standards as caffeic acid 3-glucuronide, quercetin 3-glucuronide and kaempferol 3-glucuronide, respectively. Additionally, molecular docking and simulation predicted that most of them bind more strongly to human serum albumin than their respective prototypes, indicating lower blood clearance in vivo and possibly a greater contribution to the pharmacological effects. This study thus developed a novel two-step screening method for comprehensively screening the components of an herbal medicine by UPLC-Q-Tof/MS with MDF. Copyright © 2018 Elsevier B.V. All rights reserved.
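A rough sketch (Python) of the mass defect filtering step described above, in which candidate ions are kept only if their mass defect lies close to that of a template compound within a nominal-mass window; the template m/z, tolerances and ion list below are hypothetical, not values from this study:

    def mass_defect(mz):
        """Exact m/z minus its nominal (integer) mass."""
        return mz - round(mz)

    def mdf_filter(ions_mz, template_mz, nominal_window=50, defect_tol=0.05):
        # keep ions whose nominal mass is near the template's and whose
        # mass defect differs from the template's by less than defect_tol
        td = mass_defect(template_mz)
        return [mz for mz in ions_mz
                if abs(round(mz) - round(template_mz)) <= nominal_window
                and abs(mass_defect(mz) - td) <= defect_tol]

    # e.g. screen three measured ions against a hypothetical template ion
    hits = mdf_filter([433.1342, 595.1870, 211.0601], template_mz=433.1351)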
Transformational System Concepts and Technologies for Our Future in Space
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Mankins, John C.
2004-01-01
Continued constrained budgets and growing national and international interests in the commercialization and development of space require NASA to be constantly vigilant, to be creative, and to seize every opportunity for assuring the maximum return on space infrastructure investments. Accordingly, efforts are underway to forge new and innovative approaches to transform our space systems in the future to ultimately achieve two or three or five times as much with the same resources. This bold undertaking can be achieved only through extensive cooperative efforts throughout the aerospace community and truly effective planning to pursue advanced space system design concepts and high-risk/high-leverage research and technology. Definitive implementation strategies and roadmaps containing new methodologies and revolutionary approaches must be developed to economically accommodate the continued exploration and development of space. Transformation can be realized through modular design and stepping stone development. This approach involves sustainable budget levels and multi-purpose systems development of supporting capabilities that lead to a diverse array of sustainable future space activities. Transformational design and development requires revolutionary advances by using modular designs and a planned, stepping stone development process. A modular approach to space systems potentially offers many improvements over traditional one-of-a-kind space systems comprised of different subsystem elements with little standardization in interfaces or functionality. Modular systems must be more flexible, scalable, reconfigurable, and evolvable. Costs can be reduced through learning curve effects and economies of scale, and by enabling servicing and repair that would not otherwise be feasible. This paper briefly discusses a promising approach to transforming space systems planning and evolution into a meaningful stepping stone design, development, and implementation process. The success of this well planned and orchestrated approach holds great promise for achieving innovation and revolutionary technology development for supporting future exploration and development of space.
Liquefaction of black thunder coal with counterflow reactor technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, R.J.; Simpson, P.L.
There is currently a resurgence of interest in the use of carbon monoxide and water to promote the solubilization of low rank coals in liquefaction processes. The mechanism for the water-gas shift reaction (WGSR) is well documented and proceeds via a formate ion intermediate at temperatures up to about 400 °C. Coal solubilization is enhanced by CO/H2O and by the solvent effect of the supercritical water. The WGSR is catalyzed by bases (alkali metal carbonates, hydroxides, acetates, aluminates). Many inorganic salts which promote catalytic hydrogenation are rendered inactive in CO/H2O, although there is positive evidence for the benefit of using pyrite both for the WGSR and as a hydrogenation catalyst. The temperatures at which coal solubilization occurs are insufficient to promote extensive cracking or upgrading of the solubilized coal. Therefore, a two-step process might achieve these two reactions sequentially. Alberta Research Council (ARC) has developed a two-stage process for the coprocessing of low rank coals and petroleum resids/bitumens. This process was further advanced by utilizing the counterflow reactor (CFR) concept pioneered by Canadian Energy Developments (CED) and ARC. The technology is currently being applied to coal liquefaction. The two-stage process employs CO/H2O at relatively moderate temperature and pressure to solubilize the coal, followed by a more severe hydrocracking step. This paper describes the results of an autoclave study conducted to support a bench unit program on the direct liquefaction of coals.
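For reference, the overall water-gas shift reaction referred to here, which the abstract notes proceeds through a formate-ion intermediate under basic aqueous conditions, is

\[ \mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} . \]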
Cypriot Urban Elementary Students' Attitude toward Physical Education
ERIC Educational Resources Information Center
Constantinides, Panos; Silverman, Stephen
2018-01-01
Purpose: This study examined the attitudes of Cypriot elementary school students toward physical education. Fourth, fifth and sixth grade students (N = 763) from six urban Cypriot elementary schools completed an attitude instrument. Methods: After adapting the attitude instrument for Greek-speaking students, an extensive two-step pilot study showed the…
NASA Astrophysics Data System (ADS)
Johnson, David; Clarke, Simon; Wiley, John; Koumoto, Kunihito
2014-06-01
Layered compounds, materials with a large anisotropy to their bonding, electrical and/or magnetic properties, have been important in the development of solid state chemistry, physics and engineering applications. Layered materials were the initial test bed where chemists developed intercalation chemistry, which evolved into the field of topochemical reactions in which researchers are able to perform sequential steps to arrive at kinetically stable products that cannot be directly prepared by other approaches. Physicists have used layered compounds to discover and understand novel phenomena made more apparent through reduced dimensionality. Charge and spin density waves and, more recently, the remarkable two-dimensional topological insulating state of condensed matter physics were discovered in two-dimensional materials. The understanding developed in two-dimensional materials enabled the subsequent extension of these and other phenomena into three-dimensional materials. Layered compounds have also been used in many technologies as engineers and scientists used their unique properties to solve challenging technical problems (low temperature ion conduction for batteries, easy shear planes for lubrication in vacuum, edge decorated catalyst sites for catalytic removal of sulfur from oil, etc.). The articles that are published in this issue provide an excellent overview of the spectrum of activities that are being pursued, as well as an introduction to some of the most established achievements in the field. Clusters of papers discussing thermoelectric properties, electronic structure and transport properties, growth of single two-dimensional layers, intercalation and more extensive topochemical reactions, and the interleaving of two structures to form new materials highlight the breadth of current research in this area. These papers will hopefully serve as a useful guideline for the interested reader to the different important aspects of this field and an overview of current areas of research interest.
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
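A minimal sketch (Python/NumPy) of the generic generalised-policy-iteration structure for a discrete-time LQR problem, with an M-sweep approximate policy evaluation standing in for the M-step Bellman equation; this is an illustration of the idea only, not the authors' HDP/DHP data-driven implementations, and the plant below is an invented, open-loop-stable example:

    import numpy as np

    def gpi_lqr(A, B, Q, R, K0, n_outer=30, M=5):
        """GPI for x_{k+1} = A x + B u with stage cost x'Qx + u'Ru and u = -K x.

        Policy evaluation runs M sweeps of the Bellman recursion for the fixed
        gain K; policy improvement then recomputes the greedy gain. M -> infinity
        recovers policy iteration, M = 1 resembles value iteration.
        """
        K, P = K0, np.zeros_like(Q)
        for _ in range(n_outer):
            Acl = A - B @ K
            for _ in range(M):                                  # approximate evaluation
                P = Q + K.T @ R @ K + Acl.T @ P @ Acl
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # improvement
        return K, P

    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [0.1]])
    K, P = gpi_lqr(A, B, Q=np.eye(2), R=np.array([[1.0]]), K0=np.zeros((1, 2)))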
Mitchell, Carter A.; Tucker, Alex C.; Escalante-Semerena, Jorge C.; ...
2014-12-09
The adenosine monophosphate-forming acyl-CoA synthetase enzymes catalyze a two-step reaction that involves the initial formation of an acyl adenylate that reacts in a second partial reaction to form a thioester between the acyl substrate and CoA. These enzymes utilize a Domain Alternation catalytic mechanism, whereby a ~110 residue C-terminal domain rotates by 140° to form distinct catalytic conformations for the two partial reactions. In this paper, the structure of an acetoacetyl-CoA synthetase (AacS) is presented that illustrates a novel aspect of this C-terminal domain. Specifically, several acetyl- and acetoacetyl-CoA synthetases contain a 30-residue extension on the C-terminus compared to other members of this family. Whereas residues from this extension are disordered in prior structures, the AacS structure shows that residues from this extension may interact with key catalytic residues from the N-terminal domain.
NASA Technical Reports Server (NTRS)
Patterson, G.
1973-01-01
The data processing procedures and the computer programs were developed to predict structural responses using the Impulse Transfer Function (ITF) method. There are three major steps in the process: (1) analog-to-digital (A-D) conversion of the test data to produce Phase I digital tapes, (2) processing of the Phase I digital tapes to extract ITFs and store them in a permanent data bank, and (3) prediction of structural responses to a set of applied loads. The analog-to-digital conversion is performed by a standard package, which will be described later in terms of the contents of the resulting Phase I digital tape. Two separate computer programs have been developed to perform the digital processing.
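The core of an impulse-transfer-function prediction is linear superposition: the response at a point is the discrete convolution of the stored ITF with the applied load history. A minimal sketch (Python), using an invented impulse response and load rather than anything from the report:

    import numpy as np

    dt = 0.001                                          # sample interval (s)
    t = np.arange(0.0, 1.0, dt)
    h = np.exp(-5.0 * t) * np.sin(2 * np.pi * 20 * t)   # assumed impulse response
    f = np.zeros_like(t)
    f[100:200] = 1.0                                    # assumed rectangular load pulse

    # predicted response: y[k] ~ dt * sum_j f[j] * h[k - j]
    y = dt * np.convolve(f, h)[:len(t)]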
On Euler's Theorem for Homogeneous Functions and Proofs Thereof.
ERIC Educational Resources Information Center
Tykodi, R. J.
1982-01-01
Euler's theorem for homogeneous functions is useful when developing the thermodynamic distinction between extensive and intensive variables of state and when deriving the Gibbs-Duhem relation. Discusses Euler's theorem and thermodynamic applications. Includes a six-step instructional strategy for introducing the material to students. (Author/JN)
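For reference, Euler's theorem for a function that is homogeneous of degree n states

\[ f(\lambda x_1,\ldots,\lambda x_k) = \lambda^{n} f(x_1,\ldots,x_k) \;\Longrightarrow\; \sum_{i=1}^{k} x_i\,\frac{\partial f}{\partial x_i} = n f , \]

and applying it to the Gibbs energy, which is homogeneous of degree one in the mole numbers at fixed T and P, gives G = \sum_i n_i \mu_i and hence the Gibbs-Duhem relation \sum_i n_i\, d\mu_i = -S\, dT + V\, dP.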
Complete Report on the Development of Welding Parameters for Irradiated Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Greg; Sutton, Benjamin J.; Tatman, Jonathan K.
The advanced welding facility at the Radiochemical Engineering Development Center of Oak Ridge National Laboratory, which was conceived to enable research and development of weld repair techniques for nuclear power plant life extension, is now operational. The development of the facility and its advanced welding capabilities, along with the model materials for initial welding trials, were funded jointly by the U.S. Department of Energy, Office of Nuclear Energy, Light Water Reactor Sustainability Program, the Electric Power Research Institute, Long Term Operations Program and the Welding and Repair Technology Center, with additional support from Oak Ridge National Laboratory. Welding of irradiated materials was initiated on November 17, 2017, which marked a significant step in the development of the facility and the beginning of extensive welding research and development campaigns on irradiated materials that will eventually produce validated techniques and guidelines for weld repair activities carried out to extend the operational lifetimes of nuclear power plants beyond 60 years. This report summarizes the final steps that were required to complete weld process development, initial irradiated materials welding activities, near-term plans for irradiated materials welding, and plans for post-weld analyses that will be carried out to assess the ability of the advanced welding processes to make repairs on irradiated materials.
Design and control of the MINDWALKER exoskeleton.
Wang, Shiqian; Wang, Letian; Meijneke, Cory; van Asseldonk, Edwin; Hoellinger, Thomas; Cheron, Guy; Ivanenko, Yuri; La Scaleia, Valentina; Sylos-Labini, Francesca; Molinari, Marco; Tamburella, Federica; Pisotta, Iolanda; Thorsteinsson, Freygardur; Ilzkovitz, Michel; Gancet, Jeremi; Nevatia, Yashodhan; Hauffe, Ralf; Zanow, Frank; van der Kooij, Herman
2015-03-01
Powered exoskeletons can empower paraplegics to stand and walk. Actively controlled hip ab/adduction (HAA) is needed for weight shift and for lateral foot placement to support dynamic balance control and to counteract disturbances in the frontal plane. Here, we describe the design, control, and preliminary evaluation of a novel exoskeleton, MINDWALKER. Besides powered hip flexion/extension and knee flexion/extension, it also has powered HAA. Each of the powered joints has a series elastic actuator, which can deliver 100 Nm torque and 1 kW power. A finite-state machine based controller provides gait assistance in both the sagittal and frontal planes. State transitions, such as stepping, can be triggered by the displacement of the Center of Mass (CoM). A novel step-width adaptation algorithm was proposed to stabilize lateral balance. We tested this exoskeleton on both healthy subjects and paraplegics. Experimental results showed that all users could successfully trigger steps by CoM displacement. The step-width adaptation algorithm could actively counteract disturbances, such as pushes. With the current implementations, stable walking without crutches has been achieved for healthy subjects but not yet for SCI paraplegics. More research and development is needed to improve the gait stability.
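As a purely illustrative sketch (not the MINDWALKER controller), a finite-state machine in which a lateral centre-of-mass displacement beyond a threshold triggers the next step, with the commanded step width adapted from the measured CoM excursion, could be written as follows; the state names, thresholds and gain are hypothetical:

    NOMINAL_STEP_WIDTH = 0.30   # m
    COM_TRIGGER = 0.04          # lateral CoM shift (m) that triggers a step
    ADAPT_GAIN = 0.5            # how strongly step width follows CoM excursion

    def stepping_fsm(state, com_lateral, swing_done):
        """Return (next_state, commanded_step_width)."""
        width = NOMINAL_STEP_WIDTH + ADAPT_GAIN * abs(com_lateral)
        if state == "double_support" and abs(com_lateral) > COM_TRIGGER:
            return "swing", width        # weight shift large enough: start a step
        if state == "swing" and swing_done:
            return "double_support", width
        return state, width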
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) constitute the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses remain unresolved, given the geometric complexity of the core, such as the analysis of the neutron flux's behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross-sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core (or whole-core) calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to calculate cross-section libraries becomes less pertinent for assemblies that are adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the core and return them to it. This effect leads to two fission peaks in fuel assemblies located at the core/reflector interface, the fission rate increasing because of the greater proportion of re-entrant neutrons. This change in the neutron spectrum arises deep inside the fuel located on the outskirts of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector. This scheme was tested on a core without control mechanisms and charged with fresh fuel. The results of this study showed that explicit representation of the reflector and calculation of the peripheral assembly with our advanced scheme allow corrections to the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with that of the reference scheme.
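For orientation, the simplest equivalent-reflector quantity mentioned above, the albedo, is the ratio of the partial neutron current returning from the reflector to the partial current entering it at the core-reflector interface; the matrix form couples the energy groups:

\[ \beta = \frac{J^{-}}{J^{+}}, \qquad \mathbf{J}^{-} = \boldsymbol{\beta}\,\mathbf{J}^{+} \quad (J^{\pm}_g \text{ = group-wise partial currents at the interface}). \]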
NASA Astrophysics Data System (ADS)
Ramos, N. T.; Sarmiento, K. J. S.; Maxwell, K. V.; Soberano, O. B.; Dimalanta, C. B.
2017-12-01
The remarkable preservation and extensive distribution of emergent marine terraces in the Philippines allow us to study relative sea level changes and tectonic processes during the Late Quaternary. While higher uplift rates and possible prehistoric coseismic events are recorded by emergent coral reefs facing subduction zones, the central Philippine islands are reported to reflect vertical tectonic stability as they are distant from trenches. To constrain the coastal tectonics of the central Philippine region, we studied emergent sea level indicators along the coasts of northern Cebu Island in Tabuelan, San Remigio, and Bogo City. Upper steps of marine terraces were interpreted from IFSAR-derived DEMs, from which at least two and seven steps were identified along the west (Tabuelan) and east (Bogo) coasts, respectively. In Tabuelan, two extensive terrace steps (TPT) were interpreted with TPT1 at 5-13 m above mean sea level (amsl) and TPT2 at 27-44 m amsl. Five to possibly seven terrace steps (BPT) were delineated in Bogo City with elevations from lowest (BPT1) to highest (BPT7) at BPT1: 4-6 m, BPT2: 12-18 m, BPT3: 27-33 m, BPT4: 39-46 m, BPT5: 59-71 m, BPT6: 80-92 m, and BPT7: 103-108 m amsl. These upper terraces are inferred to be Late Pleistocene in age based on an initial MIS 5e age reported for a 5-m-high terrace in Mactan Island. At some sites, even lower and narrower terrace surfaces were observed, consisting of cemented coral rubble that surrounds eroded and attached corals. These lower carbonate steps, with elevations ranging from 1 to 3 m amsl, further provide clues on relative sea level changes and long-term tectonic deformation across Cebu Island.
Beard, Sue; Campagna, David J.; Anderson, R. Ernest
2010-01-01
The Lake Mead fault system is a northeast-striking, 130-km-long zone of left-slip in the southeast Great Basin, active from before 16 Ma to Quaternary time. The northeast end of the Lake Mead fault system in the Virgin Mountains of southeast Nevada and northwest Arizona forms a partitioned strain field comprising kinematically linked northeast-striking left-lateral faults, north-striking normal faults, and northwest-striking right-lateral faults. Major faults bound large structural blocks whose internal strain reflects their position within a left step-over of the left-lateral faults. Two north-striking large-displacement normal faults, the Lakeside Mine segment of the South Virgin–White Hills detachment fault and the Piedmont fault, intersect the left step-over from the southwest and northeast, respectively. The left step-over in the Lake Mead fault system therefore corresponds to a right-step in the regional normal fault system. Within the left step-over, displacement transfer between the left-lateral faults and linked normal faults occurs near their junctions, where the left-lateral faults become oblique and normal fault displacement decreases away from the junction. Southward from the center of the step-over in the Virgin Mountains, down-to-the-west normal faults splay northward from left-lateral faults, whereas north and east of the center, down-to-the-east normal faults splay southward from left-lateral faults. Minimum slip is thus in the central part of the left step-over, between east-directed slip to the north and west-directed slip to the south. Attenuation faults parallel or subparallel to bedding cut Lower Paleozoic rocks and are inferred to be early structures that accommodated footwall uplift during the initial stages of extension. Fault-slip data indicate oblique extensional strain within the left step-over in the South Virgin Mountains, manifested as east-west extension; shortening is partitioned between vertical for extension-dominated structural blocks and south-directed for strike-slip faults. Strike-slip faults are oblique to the extension direction due to structural inheritance from NE-striking fabrics in Proterozoic crystalline basement rocks. We hypothesize that (1) during early phases of deformation oblique extension was partitioned to form east-west–extended domains bounded by left-lateral faults of the Lake Mead fault system, from ca. 16 to 14 Ma. (2) Beginning ca. 13 Ma, increased south-directed shortening impinged on the Virgin Mountains and forced uplift, faulting, and overturning along the north and west side of the Virgin Mountains. (3) By ca. 10 Ma, initiation of the younger Hen Spring to Hamblin Bay fault segment of the Lake Mead fault system accommodated westward tectonic escape, and the focus of south-directed shortening transferred to the western Lake Mead region. The shift from early partitioned oblique extension to south-directed shortening may have resulted from initiation of right-lateral shear of the eastern Walker Lane to the west coupled with left-lateral shear along the eastern margin of the Great Basin.
The Power of 2: How an Apparently Irregular Numeration System Facilitates Mental Arithmetic
ERIC Educational Resources Information Center
Bender, Andrea; Beller, Sieghard
2017-01-01
Mangarevan traditionally contained two numeration systems: a general one, which was highly regular, decimal, and extraordinarily extensive; and a specific one, which was restricted to specific objects, based on diverging counting units, and interspersed with binary steps. While most of these characteristics are shared by numeration systems in…
School Yard Gardening Reaps Harvest of Learning and Lettuce.
ERIC Educational Resources Information Center
Brasgalla, June
1989-01-01
Describes the experiences of a kindergarten class that conducted an extensive outdoor vegetable gardening project with the help of parent volunteers. The article presents seven steps to assist PTAs in establishing such a project and notes the value of school gardens in developing student skills. (SM)
Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder
2017-09-04
Identification of taxa at a specific level is time consuming and reliant upon expert ecologists; hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focused on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification typically involve processing specimen images, extracting identifying features, and classifying them into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods used in each step of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies.
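A minimal sketch (Python/scikit-learn) of the generic pipeline such systems follow, i.e. image-derived features followed by supervised classification; the feature matrix and labels here are random placeholders, not a real specimen dataset:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))          # 200 specimens x 64 image features
    y = rng.integers(0, 5, size=200)        # 5 hypothetical species labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))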
Henry, B I; Langlands, T A M; Wearne, S L
2006-09-01
We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
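Schematically, writing 0D_t^(1-α) for the Riemann-Liouville fractional derivative of order 1-α, the three model classes described above can be contrasted as follows; these forms are written from the abstract's description, so the reaction-modified diffusion operator in the second line is only indicated, not given explicitly:

\[ \frac{\partial u}{\partial t} = {}_{0}D_t^{1-\alpha}\!\left[ K_\alpha \nabla^{2} u - k u \right] \quad \text{(addition/removal at the start of each step)}, \]
\[ \frac{\partial u}{\partial t} = {}_{0}D_t^{1-\alpha}\!\left[ \text{reaction-modified diffusion of } u \right] - k u \quad \text{(per-capita rate during waiting times)}, \]
\[ \frac{\partial u}{\partial t} = K_\alpha\, {}_{0}D_t^{1-\alpha} \nabla^{2} u - k u \quad \text{(phenomenological model)}. \]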
NASA Astrophysics Data System (ADS)
Tuckerman, Mark
2006-03-01
One of the computational grand challenge problems is to develop methodology capable of sampling conformational equilibria in systems with rough energy landscapes. If met, many important problems, most notably protein folding, could be significantly impacted. In this talk, two new approaches for addressing this problem will be presented. First, it will be shown how molecular dynamics can be combined with a novel variable transformation designed to warp configuration space in such a way that barriers are reduced and attractive basins stretched. This method rigorously preserves equilibrium properties while leading to very large enhancements in sampling efficiency. Extensions of this approach to the calculation/exploration of free energy surfaces will be discussed. Next, a new very large time-step molecular dynamics method will be introduced that overcomes the resonances which plague many molecular dynamics algorithms. The performance of the methods is demonstrated on a variety of systems including liquid water, long polymer chains, simple protein models, and oligopeptides.
A Specific Long-Term Plan for Management of U.S. Nuclear Spent Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Salomon
2006-07-01
A specific plan consisting of six different steps is proposed to accelerate and improve the long-term management of U.S. Light Water Reactor (LWR) spent nuclear fuel. The first step is to construct additional, centralized, engineered (dry cask) spent fuel facilities to have a backup solution to Yucca Mountain (YM) delays or lack of capacity. The second step is to restart the development of the Integral Fast Reactor (IFR), in a burner mode, because of its inherent safety characteristics and its extensive past development, in contrast to Accelerator Driven Systems (ADS). The IFR and an improved non-proliferation version of its pyro-processing technology can burn the plutonium (Pu) and minor actinides (MA) obtained by reprocessing LWR spent fuel. The remaining IFR and LWR fission products will be treated for storage at YM. The radiotoxicity of that high level waste (HLW) will fall below that of natural uranium in less than one thousand years. Due to anticipated increased capital, maintenance, and research costs for the IFR, the third step is to reduce the required number of IFRs and their potential delays by implementing multiple recycles of Pu and the minor actinide neptunium (Np) in LWRs. That strategy is to use an advanced separation process, UREX+, and the MIX Pu option, where the role and degradation of Pu is limited by uranium enrichment. UREX+ will decrease proliferation risks by avoiding Pu separation, while the MIX fuel will lead to an equilibrium fuel recycle mode in LWRs which will reduce the U.S. Pu inventory and deliver much smaller volumes of less radioactive HLW to YM. In both steps two and three, Research and Development (R and D) is to emphasize the demonstration of multiple fuel reprocessing and fabrication, while improving HLW treatment, increasing proliferation resistance, and reducing losses of fissile material. The fourth step is to license and construct YM because it is needed for the disposal of defense wastes and the HLW to be generated under the proposed plan. The fifth step consists of developing a risk-informed methodology to assess the various options available for disposition of LWR spent fuel and to select among them. The sixth step is to modify the current U.S. infrastructure and to create a climate to increase the utilization of uranium and the sustainability of nuclear generated electricity. (author)
Saveliev, S V; Cox, M M
1996-01-01
We provide a molecular description of key intermediates in the deletion of two internal eliminated sequences (IES elements), the M and R regions, during macronuclear development in Tetrahymena thermophila. Using a variety of PCR-based methods, we detected double-strand breaks in vivo that are generated by hydrolytic cleavage and correspond closely to the observed chromosomal junctions left behind in the macronuclei. The breaks exhibit a temporal and structural relationship to the deletion reaction that provides strong evidence that they are intermediates in the deletion pathway. Breaks in the individual strands are staggered by 4 bp, producing a four-nucleotide 5' extension. Evidence is presented that breaks do not occur simultaneously at both ends. The results are most consistent with a deletion mechanism featuring initiation by double-strand cleavage at one end of the deleted element, followed by transesterification to generate the macronuclear junction on one DNA strand. An adenosine residue is found at all the nucleophilic 3' ends used in the postulated transesterification step. Evidence for the transesterification step is provided by detection of a 3' hydroxyl that would be liberated by such a step at a deletion boundary where no other DNA strand ends are detected. PMID:8654384
Wiechert, W; de Graaf, A A
1997-07-05
The extension of metabolite balancing with carbon labeling experiments, as described by Marx et al. (Biotechnol. Bioeng. 49: 11-29), results in a much more detailed stationary metabolic flux analysis. As opposed to basic metabolite flux balancing alone, this method enables both flux directions of bidirectional reaction steps to be quantitated. However, the mathematical treatment of carbon labeling systems is much more complicated, because it requires the solution of numerous balance equations that are bilinear with respect to fluxes and fractional labeling. In this study, a universal modeling framework is presented for describing the metabolite and carbon atom flux in a metabolic network. Bidirectional reaction steps are extensively treated and their impact on the system's labeling state is investigated. Various kinds of modeling assumptions, as usually made for metabolic fluxes, are expressed by linear constraint equations. A numerical algorithm for the solution of the resulting linear constrained set of nonlinear equations is developed. The numerical stability problems caused by large bidirectional fluxes are solved by a specially developed transformation method. Finally, the simulation of carbon labeling experiments is facilitated by a flexible software tool for network synthesis. An illustrative simulation study on flux identifiability from available flux and labeling measurements in the cyclic pentose phosphate pathway of a recombinant strain of Zymomonas mobilis concludes this contribution.
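In compact form, the two layers of such a model are the linear metabolite balances and the labeling balances, the latter being bilinear in fluxes and fractional enrichments; with net/exchange coordinates for a bidirectional step (the notation here is chosen only for illustration):

\[ \mathbf{N}\,\mathbf{v} = \mathbf{0} \quad \text{(stationary metabolite balancing)}, \qquad v^{\mathrm{net}} = v^{\rightarrow} - v^{\leftarrow}, \quad v^{\mathrm{xch}} = \min(v^{\rightarrow}, v^{\leftarrow}), \]
\[ \Big(\sum_{j} v_{j\to i}\Big)\, x_i = \sum_{j} v_{j\to i}\, x_j \quad \text{(labeling balance for pool } i\text{, bilinear in } v \text{ and } x). \]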
Comparison of physical activities of female football players in junior high school and high school.
Inoue, Yuri; Otani, Yoshitaka; Takemasa, Seiichi
2017-08-01
[Purpose] This study aimed to compare physical activities between junior high school and high school female football players in order to explain the factors that predispose high school female football players to a higher incidence of sports injuries. [Subjects and Methods] Twenty-nine female football players participated. Finger-floor distance, the center of pressure during single limb stance with eyes open and closed, the 40-m linear sprint time, hip abduction and extension muscle strength, and isokinetic knee flexion and extension peak torque were measured. The modified Star Excursion Balance Test, the three-step bounding test and three-step hopping test, agility test 1 (Step 50), agility test 2 (Forward run), the 30-second curl-up test and the Yo-Yo intermittent recovery test were performed. [Results] The high school group was significantly faster than the junior high school group only in the 40-m linear sprint time and the agility tests. The distance in the bounding test was longer in the high school group than in the junior high school group. [Conclusion] Agility and speed increase with growth; however, muscle strength and balance do not develop alongside them. This unbalanced development may cause a higher incidence of sports injuries in high school football players.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. Highlights: • Extension of Eyre's convex–concave splitting scheme to multiphase systems. • Efficient solution of spinodal decomposition in multi-component systems. • Efficient solution of the least-perimeter periodic space partitioning problem. • Development of a penalization strategy to avoid trivial solutions. • Presentation of a MATLAB implementation of the introduced algorithm.
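For orientation, the Eyre-type convex-concave splitting underlying such schemes writes the homogeneous free energy density as f = f_c + f_e, with f_c convex (treated implicitly) and f_e concave (treated explicitly); for a single-component Cahn-Hilliard equation the update reads (illustrative form only, not the paper's multi-component Schur-complement construction):

\[ \frac{u^{n+1} - u^{n}}{\tau} = \nabla^{2}\!\left[ f_c'(u^{n+1}) + f_e'(u^{n}) - \epsilon^{2} \nabla^{2} u^{n+1} \right], \]

a step for which the discrete free energy is non-increasing for any time step size τ.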
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
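A minimal sketch (Python) of the implicit midpoint rule for tracing field lines of a divergence-free field B(x), with the implicit stage solved by fixed-point iteration; the field below is an invented solenoidal example, not one from the paper:

    import numpy as np

    def implicit_midpoint_step(B, x, h, max_iter=30, tol=1e-12):
        """One step of x_{n+1} = x_n + h * B((x_n + x_{n+1}) / 2)."""
        x_new = x + h * B(x)                       # explicit Euler predictor
        for _ in range(max_iter):
            x_next = x + h * B(0.5 * (x + x_new))  # fixed-point sweep
            if np.linalg.norm(x_next - x_new) < tol:
                return x_next
            x_new = x_next
        return x_new

    def B(x):                                      # invented field, div B = 0
        return np.array([-x[1], x[0], 0.1])

    x = np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        x = implicit_midpoint_step(B, x, h=0.01)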
Performances of JEM-EUSO: angular reconstruction. The JEM-EUSO Collaboration
NASA Astrophysics Data System (ADS)
Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.
2015-11-01
Mounted on the International Space Station (ISS), the Extreme Universe Space Observatory, on-board the Japanese Experimental Module (JEM-EUSO), relies on the well-established fluorescence technique to observe Extensive Air Showers (EAS) developing in the Earth's atmosphere. Focusing on the detection of Ultra High Energy Cosmic Rays (UHECR) in the decade of 10^20 eV, JEM-EUSO will face new challenges by applying this technique from space. The EUSO Simulation and Analysis Framework (ESAF) has been developed in this context to provide a full end-to-end simulation framework and to assess the overall performance of the detector. Within ESAF, angular reconstruction can be separated into two conceptually different steps. The first step is pattern recognition, or filtering, of the signal to separate it from the background. The second step is to perform different types of fitting in order to search for the relevant geometrical parameters that best describe the previously selected signal. In this paper, we discuss some of the techniques we have implemented in ESAF to perform the geometrical reconstruction of EAS seen by JEM-EUSO. We also conduct thorough tests to assess the performance of these techniques in conditions which are relevant to the scope of the JEM-EUSO mission. We conclude by showing the expected angular resolution in the energy range that JEM-EUSO is expected to observe.
Population genetics of Setaria viridis, a new model system
USDA-ARS?s Scientific Manuscript database
An extensive survey of the standing genetic variation in natural populations is among the priority steps in developing a species into a model system. In recent years, green foxtail (Setaria viridis), along with its domesticated form foxtail millet (S. italica), has rapidly become a promising new mod...
The Respond/Read/Replicate/Report System.
ERIC Educational Resources Information Center
Schwartz, Rhea
An effective teaching technique for a university extension course for rural special education teachers is the respond/read/replicate/report system. The four-step system was developed to stimulate tired, beleaguered teachers with differing experiences, knowledgeability, and teaching/learning styles who drove up to 60 miles on country roads to…
Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems
NASA Astrophysics Data System (ADS)
Ferrari, Andrea C.; Bonaccorso, Francesco; Fal'Ko, Vladimir; Novoselov, Konstantin S.; Roche, Stephan; Bøggild, Peter; Borini, Stefano; Koppens, Frank H. L.; Palermo, Vincenzo; Pugno, Nicola; Garrido, José A.; Sordan, Roman; Bianco, Alberto; Ballerini, Laura; Prato, Maurizio; Lidorikis, Elefterios; Kivioja, Jani; Marinelli, Claudio; Ryhänen, Tapani; Morpurgo, Alberto; Coleman, Jonathan N.; Nicolosi, Valeria; Colombo, Luigi; Fert, Albert; Garcia-Hernandez, Mar; Bachtold, Adrian; Schneider, Grégory F.; Guinea, Francisco; Dekker, Cees; Barbone, Matteo; Sun, Zhipei; Galiotis, Costas; Grigorenko, Alexander N.; Konstantatos, Gerasimos; Kis, Andras; Katsnelson, Mikhail; Vandersypen, Lieven; Loiseau, Annick; Morandi, Vittorio; Neumaier, Daniel; Treossi, Emanuele; Pellegrini, Vittorio; Polini, Marco; Tredicucci, Alessandro; Williams, Gareth M.; Hee Hong, Byung; Ahn, Jong-Hyun; Min Kim, Jong; Zirath, Herbert; van Wees, Bart J.; van der Zant, Herre; Occhipinti, Luigi; Di Matteo, Andrea; Kinloch, Ian A.; Seyller, Thomas; Quesnel, Etienne; Feng, Xinliang; Teo, Ken; Rupesinghe, Nalin; Hakonen, Pertti; Neil, Simon R. T.; Tannock, Quentin; Löfwander, Tomas; Kinaret, Jari
2015-03-01
We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.
Cooperative expression of atomic chirality in inorganic nanostructures.
Wang, Peng-Peng; Yu, Shang-Jie; Govorov, Alexander O; Ouyang, Min
2017-02-02
Cooperative chirality phenomena extensively exist in biomolecular and organic systems via intra- and inter-molecular interactions, but study of inorganic materials has been lacking. Here we report, experimentally and theoretically, cooperative chirality in colloidal cinnabar mercury sulfide nanocrystals that originates from chirality interplay between the crystallographic lattice and geometric morphology at different length scales. A two-step synthetic scheme is developed to allow control of critical parameters of these two types of handedness, resulting in different chiral interplays expressed as observables through materials engineering. Furthermore, we adopt an electromagnetic model with the finite element method to elucidate cooperative chirality in inorganic systems, showing excellent agreement with experimental results. Our study enables an emerging class of nanostructures with tailored cooperative chirality that is vital for fundamental understanding of nanoscale chirality as well as technology applications based on new chiroptical building blocks.
NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs.
Morisset, Dany; Dobnik, David; Hamels, Sandrine; Zel, Jana; Gruden, Kristina
2008-10-01
We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on a microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In a first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. A second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on the transgenic contents in a range of 0.1-25%. The performance of NAIMA is comparable to that of singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min are sufficient to achieve full amplification.
NASA Technical Reports Server (NTRS)
Over, Ann P.
2001-01-01
The Combustion Module-1 (CM-1) was a large, state-of-the-art space shuttle Spacelab facility that was designed, built, and operated on STS-83 and STS-94 by a team from the NASA Glenn Research Center composed of civil servants and local support contractors (Analex and Zin Technologies). CM-1 accomplished the incredible task of providing a safe environment to support flammable and toxic gases while providing a suite of diagnostics for science measurements more extensive than any prior shuttle experiment (or anything since). Finally, CM-1 proved that multiple science investigations can be accommodated in one facility, a crucial step for Glenn's Fluids and Combustion Facility developed for the International Space Station. However, the story does not end with CM-1. In 1998, CM-2 was authorized to take the CM-1 accomplishments a big step further by completing three major steps: converting the entire experiment to operate in a SPACEHAB module; conducting an extensive hardware refurbishment and upgrading diagnostics (e.g., cameras, gas chromatograph, and numerous sensors); and adding a new, completely different combustion experiment.
Deformed shape invariance symmetry and potentials in curved space with two known eigenstates
NASA Astrophysics Data System (ADS)
Quesne, C.
2018-04-01
We consider two families of extensions of the oscillator in a d-dimensional constant-curvature space and analyze them in a deformed supersymmetric framework, wherein the starting oscillator is known to exhibit a deformed shape invariance property. We show that the first two members of each extension family are also endowed with such a property, provided some constraint conditions relating the potential parameters are satisfied, in other words they are conditionally deformed shape invariant. Since, in the second step of the construction of a partner potential hierarchy, the constraint conditions change, we impose compatibility conditions between the two sets to build potentials with known ground and first excited states. To extend such results to any members of the two families, we devise a general method wherein the first two superpotentials, the first two partner potentials, and the first two eigenstates of the starting potential are built from some generating function W+(r) [and its accompanying function W-(r)].
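For readers unfamiliar with the shape-invariance terminology used in this abstract, the standard undeformed supersymmetric quantum mechanics relations are sketched below as background only; the convention (units with hbar = 2m = 1) is ours, and the paper's actual construction uses a deformed derivative and a curved-space oscillator, so its conditions are modified accordingly.

```latex
% Standard (undeformed) shape-invariance relations, for orientation only;
% the paper's deformed, curved-space framework modifies these conditions.
\begin{align}
  V_{\mp}(r; a_0) &= W^{2}(r; a_0) \mp W'(r; a_0), \\
  V_{+}(r; a_0)   &= V_{-}(r; a_1) + R(a_1), \qquad
  \psi_{0}^{(-)}(r) \propto \exp\!\left(-\int^{r} W(r'; a_0)\,dr'\right).
\end{align}
```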
Micron Accuracy Deployment Experiment (MADE), phase A. Volume 1
NASA Technical Reports Server (NTRS)
Peterson, Lee D.; Lake, Mark S.
1995-01-01
This report documents a Phase A In-STEP flight experiment development effort. The objective of the experiment is to deploy a portion of a segmented reflector on the Shuttle and study its micron-level mechanics. Ground test data are presented which project that the on-orbit precision of the test article should be approximately 5 microns. Extensive hardware configuration development information is also provided.
Seidner, Douglas L; Fujioka, Ken; Boullata, Joseph I; Iyer, Kishore; Lee, Hak-Myung; Ziegler, Thomas R
2018-05-15
Patients with intestinal failure associated with short bowel syndrome (SBS-IF) require parenteral support (PS) to maintain fluid balance or nutrition. Teduglutide (TED) reduced PS requirements in patients with SBS-IF in the randomized, placebo (PBO)-controlled STEPS study (NCT00798967) and its 2-year, open-label extension, STEPS-2 (NCT00930644). STEPS-3 (NCT01560403), a 1-year, open-label extension study in patients with SBS-IF who completed STEPS-2, further monitored the safety and efficacy of TED (0.05 mg/kg/day). Baseline was the start of TED treatment, in either STEPS or STEPS-2. At the end of STEPS-3, patients treated with TED in both STEPS and STEPS-2 (TED-TED) received TED for ≤42 months, and patients treated with TED only in STEPS-2 (no TED treatment [NT]/PBO-TED) received TED for ≤36 months. Fourteen patients enrolled (TED-TED, n = 5; NT/PBO-TED, n = 9) and 13 completed STEPS-3. At the last dosing visit, mean (SD) PS was reduced from baseline by 9.8 (14.4 [50%]) and 3.9 (2.8 [48%]) L/week in TED-TED and NT/PBO-TED, respectively. Mean (SD) PS infusions decreased by 3.0 (4.6) and 2.1 (2.2) days per week from baseline in TED-TED and NT/PBO-TED, respectively. Two patients achieved PS independence; 2 additional patients who achieved independence in STEPS-2 maintained enteral autonomy throughout STEPS-3. All patients reported ≥1 treatment-emergent adverse event (TEAE); 3 patients had TEAEs that were reported as treatment related. No patient had a treatment-related treatment-emergent serious AE. Long-term TED treatment yielded a safety profile consistent with previous studies, sustained efficacy, and a further decline in PS requirements. © 2018 The Authors. Nutrition in Clinical Practice published by Wiley Periodicals, Inc. on behalf of American Society for Parenteral and Enteral Nutrition.
Mechanical Extension Implants for Short-Bowel Syndrome.
Luntz, Jonathan; Brei, Diann; Teitelbaum, Daniel; Spencer, Ariel
2006-01-01
Short-bowel syndrome (SBS) is a rare, potentially lethal medical condition in which the small intestine is far shorter than required for proper nutrient absorption. Current treatments, including nutritional, hormone-based, and surgical modification, have limited success, resulting in 30% to 50% mortality rates. Recent advances in mechanotransduction, stressing the bowel to induce growth, show great promise; but for successful clinical use, more sophisticated devices that can be implanted are required. This paper presents two novel devices capable of long-term gentle stressing. A prototype of each device was designed to fit inside a short section of bowel and slowly extend, allowing the bowel section to grow to approximately double its initial length. The first device achieves this through a dual concentric hydraulic piston that generated almost 2-fold growth of a pig small intestine. For a fully implantable extender, a second device was developed based upon a shape memory alloy actuated linear ratchet. The proof-of-concept prototype demonstrated significant force generation and almost twofold extension when tested on the benchtop and inside an ex-vivo section of pig bowel. This work provides the first steps in the development of an implantable extender for treatment of SBS. PMID:17369875
Assessment of automatic ligand building in ARP/wARP.
Evrard, Guillaume X; Langer, Gerrit G; Perrakis, Anastassis; Lamzin, Victor S
2007-01-01
The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein-ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
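The abstract combines an SVD solve with a partial-step limit on the correction. A minimal numpy sketch of that general idea follows; the function name, the scalar bound, and the calling convention are illustrative assumptions, not the JPL implementation, which additionally folds in the a priori covariance.

```python
import numpy as np

def svd_partial_step(A, residual, bound):
    """Solve the linearized estimation problem A @ dx ~ residual with an SVD,
    then shrink the correction so its norm does not exceed a user bound.

    A        : (m, n) information (design) matrix
    residual : (m,)   observed-minus-computed residuals
    bound    : float  maximum allowed correction norm (illustrative stand-in
                      for the a priori covariance / bounds parameter)
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Minimum-norm least-squares correction (reduces to the full-rank solution
    # in the linear case).
    dx = Vt.T @ ((U.T @ residual) / s)
    # Partial step: limit the correction size so the iteration stays inside
    # the region where the linearization remains valid.
    norm = np.linalg.norm(dx)
    if norm > bound:
        dx *= bound / norm
    return dx

# Illustrative use inside a batch iteration:
# state += svd_partial_step(H, observations - model(state), bound=1e-2)
```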
Proximal Interphalangeal Joint Extension Block Splint
Abboudi, Jack; Jones, Christopher M.
2016-01-01
Background: Extension block splinting of the proximal interphalangeal (PIP) joint is a simple and useful treatment option although the practical application of this technique has remained undefined in the literature. Methods: This article provides a step-by-step technique for the construction of a reliable PIP extension block splint and also reviews basic indications for treatment with a PIP extension block splint as well as other PIP extension block splint designs. Results: The proposed splint design outlined in this article is reliable, easy to reproduce and easy for patients to manage when treated with a PIP extension block splint. Conclusions: PIP extension block splinting has a role for certain injuries and certain post-operative protocols. A reliable splint design that is easy to manage makes this treatment choice more attractive to the surgeon and the patient. PMID:27390555
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data, constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for the complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
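A schematic numpy sketch of the two-step idea follows, with entirely made-up numbers; it only illustrates how an overall hazard from vital statistics and census data can be split across exposure levels using weighted survey proportions so that the level-specific hazards average back to the overall rate. A real implementation would use the linked mortality follow-up times and the survey design (jackknife or Taylor linearization) for variances.

```python
import numpy as np

# Step 1: overall hazard for one gender/age group from vital statistics + census.
deaths = 1200.0          # vital-statistics death count (hypothetical)
person_years = 850000.0  # census-based person-years at risk (hypothetical)
overall_hazard = deaths / person_years

# Step 2: split the overall hazard by exposure level using survey data with
# linked mortality follow-up (weights w, exposure level x, death indicator d).
w = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # survey weights (hypothetical)
x = np.array([0,   1,   1,   0,   2  ])   # exposure level per respondent
d = np.array([0,   1,   0,   0,   1  ])   # died during follow-up?

levels = np.unique(x)
death_share = np.array([(w * d)[x == k].sum() for k in levels]) / (w * d).sum()
pop_share   = np.array([w[x == k].sum() for k in levels]) / w.sum()

# Level-specific hazards whose weighted average equals the overall hazard.
hazard_by_level = overall_hazard * death_share / pop_share
print(dict(zip(levels, hazard_by_level)))
```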
NASA Astrophysics Data System (ADS)
Mandour Eldeeb, Mohamed
The backward facing steps nozzle (BFSN) is a newly developed nozzle with a flow-adjustable exit area. It consists of two parts: a base nozzle with a small area ratio, and a nozzle extension whose surface consists of backward facing steps. The number and heights of the steps are carefully chosen to produce controlled flow separation at the step edges, which adjusts the nozzle exit area at all altitudes (pressure ratios). The BFSN performance parameters are assessed numerically in terms of thrust and side loads against the dual-bell nozzle (DBN) with the same pressure ratios and cross-sectional areas. Cold flow inside the planar BFSN and planar DBN is simulated using a three-dimensional turbulent Navier-Stokes equations solver at different pressure ratios. The pressure distributions over the upper and lower nozzle walls show a symmetrical flow separation location inside the BFSN and an asymmetrical flow separation location inside the DBN in the same vertical plane. The side loads are calculated by integrating the pressure over the nozzle walls at different pressure ratios for both nozzles. Time-dependent solutions for the DBN and the BFSN are obtained by solving the two-dimensional turbulent flow equations. The side loads over the upper and lower nozzle walls are plotted against the flow time. The BFSN side-load history shows small, weakly fluctuating side loads, whereas the DBN shows large values with strong fluctuations. Hot-flow 3-D numerical solutions inside the axisymmetric BFSN and DBN are obtained at different pressure ratios and compared to assess the BFSN performance against the DBN. Pressure distributions over the nozzle walls at different circumferential angles are plotted for both nozzles. The results show that the flow separation location is axisymmetric inside the BFSN, with symmetrical pressure distributions over the nozzle circumference at different pressure ratios, while the DBN results show asymmetrical flow separation locations over the nozzle circumference at all pressure ratios. The results also show that the side loads in the BFSN are 0.01%-0.6% of their value in the DBN for the same pressure ratio. For further confirmation of the axisymmetric nature of the flow in the BFSN, 2-D axisymmetric solutions are obtained at the same pressure ratios and boundary conditions. The flow parameters at the nozzle exit are calculated from the 3-D and 2-D solutions and compared to each other; the maximum difference between them is less than 1%. Parametric studies are carried out with the number of backward facing steps varied from two to forty. The results show that as the number of backward facing steps increases, the nozzle performance in terms of thrust approaches the DBN performance. The BFSN with two and six steps is simulated for pressure ratios ranging from 148 to 1500 and compared with the DBN and a conventional bell nozzle. An expandable-BFSN study is carried out on the BFSN with two steps, where the nozzle operation is divided into three modes related to the operating altitude (pressure ratio). Finally, the backward facing steps concept is applied to a full-scale conventional bell nozzle by adding two backward facing steps at the end of the nozzle to increase its expansion area, resulting in a 1.8% increase in its performance in terms of thrust coefficient at high altitudes.
Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M
2015-03-01
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
A modal analysis of lamellar diffraction gratings in conical mountings
NASA Technical Reports Server (NTRS)
Li, Lifeng
1992-01-01
A rigorous modal analysis of lamellar grating, i.e., gratings having rectangular grooves, in conical mountings is presented. It is an extension of the analysis of Botten et al. which considered non-conical mountings. A key step in the extension is a decomposition of the electromagnetic field in the grating region into two orthogonal components. A computer program implementing this extended modal analysis is capable of dealing with plane wave diffraction by dielectric and metallic gratings with deep grooves, at arbitrary angles of incidence, and having arbitrary incident polarizations. Some numerical examples are included.
NASA Astrophysics Data System (ADS)
Gray, Harry B.; Winkler, Jay R.; Kozak, John J.
2011-03-01
A geometrical model has been developed to describe the early stages of unfolding of cytochromes c′ and c-b562. Calculations are based on a step-wise extension of the polypeptide chain subject to the constraint that the spatial relationship among the residues of each triplet is fixed by the native-state crystallographic data. The response of each protein to these structural perturbations allows the evolution of each of the four helices in these two proteins to be differentiated. It is found that the two external helices in c′ unfold before its two internal helices, whereas exactly the opposite behaviour is demonstrated by c-b562. Each of these cytochromes has an extended, internal, non-helical ('turning') region that initially lags behind the most labile helix but then, at a certain stage (identified for each cytochrome), unravels before any of the four helices present in the native structure. It is believed that these predictions will be useful in guiding future experimental studies on the unfolding of these two cytochromes.
Jana, Subrata; Samal, Prasanjit
2017-06-29
Semilocal density functionals for the exchange-correlation energy of electrons are extensively used as they produce realistic and accurate results for finite and extended systems. The choice of techniques plays a crucial role in constructing such functionals of improved accuracy and efficiency. An accurate and efficient semilocal exchange energy functional in two dimensions is constructed by making use of the corresponding hole which is derived based on the density matrix expansion. The exchange hole involved is localized under the generalized coordinate transformation and satisfies all the relevant constraints. Comprehensive testing and excellent performance of the functional is demonstrated versus exact exchange results. The accuracy of results obtained by using the newly constructed functional is quite remarkable as it substantially reduces the errors present in the local and nonempirical exchange functionals proposed so far for two-dimensional quantum systems. The underlying principles involved in the functional construction are physically appealing and hold promise for developing range separated and nonlocal exchange functionals in two dimensions.
St-Onge, N; Duval, N; Yahia, L'H; Feldman, A G
2004-05-01
Previous studies of movement kinematics in patients with a ruptured anterior cruciate ligament (ACL) have focused on changes in angular displacement in a single joint, usually flexion/extension of the knee. In the present study, we investigated the effect of an ACL injury on the overall limb interjoint coordination. We asked healthy and chronic ACL-deficient male subjects to perform eight types of movements: forward squats, backward squats, sideways squats, squats on one leg, going up a step, going down a step, walking three steps, and stepping in place. Depending on the movement concerned, we applied principal component (PC) analysis to 3 or 4 degrees of freedom (DFs): thigh flexion/extension, knee flexion/extension, ankle flexion/extension, thigh abduction/adduction. The first three DFs were investigated in all movements. PC analysis identifies linear combinations of DFs. Movements with a fixed ratio between DFs are thus described by only one PC or synergy. PCs were computed for the entire movement as well as for the period of time when the foot was in contact with the ground. For both the control and the injured groups, two synergies (PC vectors) usually accounted for more than 95% of the DFs' angular excursions. It was possible to describe 95-99% of some movements using only one synergy. Compared to control subjects, injured subjects employed different synergies for going up a step, walking three steps, squatting sideways, and squatting forward, both in the injured and uninjured legs. Those movements may thus be more indicative of injury than other movements. Although ACL-deficiency did not increase asymmetry (angle between the PCs of the same movement performed on the right and the left sides), this result is not conclusive because of the comparatively low number of subjects who participated in the study. However, the finding that synergies in both legs of patients were different from those in control subjects for going up a step and walking three steps suggests that interjoint coordination was affected for both legs, so that the asymmetry index might have been preserved despite the injury. There was also a relationship between the asymmetry index for squatting on one leg, squatting forward, walking three steps and some of the outcomes of the knee injury and osteoarthritis outcome score (pain, symptoms, activities of daily living, sport and recreation function, and knee-related quality of life). This suggests that significant differences in the asymmetry index could be obtained if more severely-injured patients participated in this study. It is possible that subjects compensated for their mechanical deficiencies by modifying muscle activation patterns. Synergies were not only modified in injured subjects, but also rearranged: the percentage of movement explained by the first PC was different for the injured and/or uninjured legs of patients, as compared to the legs of the control group, for going up a step, going down a step, walking three steps, and squatting forward. We concluded that the analysis of interjoint coordination may be efficient in characterizing motor deficits in people with knee injuries.
Novel knee joint mechanism of transfemoral prosthesis for stair ascent.
Inoue, Koh; Wada, Takahiro; Harada, Ryuchi; Tachiwana, Shinichi
2013-06-01
The stability of a transfemoral prosthesis when walking on flat ground has been established by recent advances in knee joint mechanisms and their control methods. It is, however, difficult for users of a transfemoral prosthesis to ascend stairs. This difficulty is mainly due to insufficient generation of extension moment around the knee joint of the prosthesis to lift the body to the next step on the staircase and prevent any unexpected flexion of the knee joint in the stance phase. Only a prosthesis with an actuator has facilitated stair ascent using a step-over-step gait (1 foot is placed per step). However, its use has issues associated with the durability, cost, maintenance, and usage environment. Therefore, the purpose of this research is to develop a novel knee joint mechanism for a prosthesis that generates an extension moment around the knee joint in the stance phase during stair ascent, without the use of any actuators. The proposed mechanism is based on the knowledge that the ground reaction force increases during the stance phase when the knee flexion occurs. Stair ascent experiments with the prosthesis showed that the proposed prosthesis can realize stair ascent without any undesirable knee flexion. In addition, the prosthesis is able to generate a positive knee joint moment power in the stance phase even without any power source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benito, R.M.; Nozik, A.J.
1985-07-18
A kinetic model was developed to describe the effects of light intensity on the photocorrosion of n-type semiconductor electrodes. The model is an extension of previous work by Gomes and co-workers that includes the possibility of multiple steps for the oxidation reaction of the reducing agent in the electrolyte. Six cases are considered where the semiconductor decomposition reaction is multistep (each step involves a hole); the oxidation reaction of the reducing agent is multistep (each step after the first involves a hole or a chemical intermediate), and the first steps of the competing oxidation reactions are reversible or irreversible. It was found, contrary to previous results, that the photostability of semiconductor electrodes could increase with increased light intensity if the desired oxidation reaction of the reducing agent in the electrolyte was multistep with the first step being reversible. 14 references, 5 figures, 1 table.
Teaching Design in Television Production Technology: The Twelve Steps of Preproduction
ERIC Educational Resources Information Center
Harrison, Henry L. (Hal), III; Loveland, Thomas
2009-01-01
Extensive planning must be used to produce television programs. Students must develop sound design practices and understand these attributes of design in their production planning. Through the design and planning processes involved in television production, students learn that design is a creative process, and that there is no perfect design, but…
Have It Their Way: Creating Personalized Online Challenges to Motivate Learners
ERIC Educational Resources Information Center
O'Neill, Barbara; Ensle, Karen
2012-01-01
This article describes the development and evaluation of an online health and personal finance behavior change challenge that can be adapted for other Extension subject matter areas. The Small Steps to Health and Wealth™ Challenge is a behaviorally focused activity in which registered users self-report health and financial behaviors and receive…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... for developing information regarding the causes and prevention of occupational injuries, illnesses...' risk of death or serious injury by ensuring that manlifts are in safe operating condition. Periodic...; and any ``skip'' on the up or down run when mounting a step (indicating worn gears). A certification...
NASA Technical Reports Server (NTRS)
Beaton, Kara H.; Chappell, Steven P.; Bekdash, Omar S.; Gernhardt, Michael L.
2018-01-01
The NASA Next Space Technologies for Exploration Partnerships (NextSTEP) program is a public-private partnership model that seeks commercial development of deep space exploration capabilities to support extensive human spaceflight missions around and beyond cislunar space. NASA first issued the Phase 1 NextSTEP Broad Agency Announcement to U.S. industries in 2014, which called for innovative cislunar habitation concepts that leveraged commercialization plans for low Earth orbit. These habitats will be part of the Deep Space Gateway (DSG), the cislunar space station planned by NASA for construction in the 2020s. In 2016, Phase 2 of the NextSTEP program selected five commercial partners to develop ground prototypes. A team of NASA research engineers and subject matter experts have been tasked with developing the ground test protocol that will serve as the primary means by which these Phase 2 prototype habitats will be evaluated. Since 2008, this core test team has successfully conducted multiple spaceflight analog mission evaluations utilizing a consistent set of operational products, tools, methods, and metrics to enable the iterative development, testing, analysis, and validation of evolving exploration architectures, operations concepts, and vehicle designs. The purpose of implementing a similar evaluation process for the NextSTEP Phase 2 Habitation Concepts is to consistently evaluate the different commercial partner ground prototypes to provide data-driven, actionable recommendations for Phase 3.
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses shows that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influence of the choice of informative priors with zero mean and small variances are discussed. Extensions and limitations are also pointed out.
Biophotonics Master studies: teaching and training experience at University of Latvia
NASA Astrophysics Data System (ADS)
Spigulis, Janis
2007-06-01
A two-year program for Master's studies in Biophotonics (Biomedical Optics) was originally developed and has been carried out at the University of Latvia since 1995. The curriculum contains basic subjects such as Fundamentals of Biomedical Optics, Medical Lightguides, Anatomy and Physiology, Lasers and Non-coherent Light Sources, Basic Physics, etc. Student laboratories, special English Terminology, and Laboratory-Clinical Praxis are also included as training components, and a Master's project is the final step for the degree award. Life-long learning is supported by several e-courses and an extensive short course for medical laser users, "Lasers and Bio-optics in Medicine". Recently, a new inter-university European Social Fund project was started to adapt the program to the Bologna Declaration guidelines.
An Evaluation of a Management Wargame and the Factors Affecting Game Performance.
1987-09-01
in residence. This is not a criticism of the author, but rather a systematic flaw in game development in general. Therefore, TEMPO-AI is an excellent...establish the test procedure used in this thesis. This stage of game development is absolutely vital, if the game is intended for serious academic use...Unfortunately, this important step is sadly neglected in nearly all military game development. While TEMPO-AI was extensively debugged as a computer
1988-12-14
situation in the world healthier, particularly for the program to liquidate nuclear arms and other types of weapons of mass destruction. During the...make preparations for extensive discussions with the aim of radically reducing tactical nuclear weapons, armed forces, and conventional weapons...liquidating two classes of nuclear arms as a historic step which will create preconditions for limiting the feverish arms race and for better
NASA Technical Reports Server (NTRS)
Moitra, A.
1982-01-01
An implicit finite-difference algorithm is developed for the numerical solution of the incompressible three dimensional Navier-Stokes equations in the non-conservative primitive-variable formulation. The flow field about an airfoil spanning a wind-tunnel is computed. The coordinate system is generated by an extension of the two dimensional body-fitted coordinate generation techniques of Thompson, as well as that of Sorenson, into three dimensions. Two dimensional grids are stacked along a spanwise coordinate defined by a simple analytical function. A Poisson pressure equation for advancing the pressure in time is arrived at by performing a divergence operation on the momentum equations. The pressure at each time-step is calculated on the assumption that continuity be unconditionally satisfied. An eddy viscosity coefficient, computed according to the algebraic turbulence formulation of Baldwin and Lomax, simulates the effects of turbulence.
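The pressure equation mentioned here is, in schematic form, the standard incompressible pressure Poisson equation obtained by taking the divergence of the momentum equations; the notation below is generic (not copied from the report) and omits the body-fitted coordinate transformation.

```latex
% Schematic form of the incompressible equations and the derived pressure
% Poisson equation (generic notation; nu_t is the algebraic eddy viscosity):
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t}
    + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\nabla p + \nabla\cdot\!\left[(\nu+\nu_t)\nabla\mathbf{u}\right], \\
  \nabla^{2} p
    &= \nabla\cdot\!\left[-(\mathbf{u}\cdot\nabla)\mathbf{u}
       + \nabla\cdot\!\left((\nu+\nu_t)\nabla\mathbf{u}\right)\right]
       + \frac{\nabla\cdot\mathbf{u}}{\Delta t},
\end{align}
% where the last (dilatation) term is retained so that the divergence of the
% advanced velocity field vanishes, i.e. continuity is enforced at each step.
```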
Extension materials for meat-borne parasitic diseases in developing countries.
Rimm, Mogens
2003-06-01
In support of a project on porcine cysticercosis in Tanzania, an educational video was prepared to inform the rural communities on the health risks and prevention of the parasitic disease. This paper describes the process involved in making the video, especially the importance of establishing a good understanding between veterinary public health officials and the video producer. Important steps in the process include determining the target audience, the film's core message, the construction of the "story", script development, the filming and editing activities, and, importantly, the development of strategies for production and use of the film as extension material. Suggestions on logistical and technical aspects of filming and viewing are also discussed. The experience gained in Tanzania will be of value to others planning similar projects elsewhere.
Riber-Hansen, Rikke; Hastrup, Nina; Clemmensen, Ole; Behrendt, Nille; Klausen, Siri; Ramsing, Mette; Spaun, Eva; Hamilton-Dutoit, Stephen Jacques; Steiniche, Torben
2012-02-01
Metastasis size in melanoma sentinel lymph nodes (SLNs) is an emerging prognostic factor. Two European melanoma treatment trials include SLN metastasis diameters as inclusion criteria. Whilst diameter estimates are sensitive to the number of sections examined, the level of this bias is largely unknown. We performed a prospective multicentre study to compare the European Organisation for Research and Treatment of Cancer (EORTC) recommended protocol with a protocol of complete step-sectioning. One hundred and thirty-three consecutive SLNs from seven SLN centres were analysed by five central sections 50μm apart (EORTC Protocol) followed by complete 250μm step-sectioning. Overall, 29 patients (21.8%) were SLN-positive. The EORTC Protocol missed eight of these metastases (28%), one metastasis measuring less than 0.1mm in diameter, seven measuring between 0.1 and 1mm. Complete step-sectioning at 250μm intervals (Extensive Protocol) missed one metastasis (3%) that measured less than 0.1mm. Thirteen treatment courses (34%) performed if inclusion was based on the Combined Protocol would not be performed if assessed by the EORTC Protocol. Thus, 10 patients would be without completion lymph node dissection (EORTC MINITUB study), whilst three patients would not be eligible for anti-CTLA4 trial (EORTC protocol 18071). The corresponding number with the Extensive Protocol would be three; one patient for the MINITUB registration study and two patients for the anti-CTLA4 study. Examining SLNs by close central sectioning alone (EORTC Protocol) misses a substantial number of metastases and underestimates the maximum metastasis diameter, leading to important changes in patient eligibility for various treatment protocols. Copyright © 2011 Elsevier Ltd. All rights reserved.
Photovoltaic central station step and touch potential considerations in grounding system design
NASA Technical Reports Server (NTRS)
Engmann, G.
1983-01-01
The probability of hazardous step and touch potentials is an important consideration in central station grounding system design. Steam turbine generating station grounding system design is based on accepted industry practices and there is extensive in-service experience with these grounding systems. A photovoltaic (PV) central station is a relatively new concept and there is limited experience with PV station grounding systems. The operation and physical configuration of a PV central station is very different from a steam electric station. A PV station bears some similarity to a substation and the PV station step and touch potentials might be addressed as they are in substation design. However, the PV central station is a generating station and it is appropriate to examine the effect that the differences and similarities of the two types of generating stations have on step and touch potential considerations.
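For orientation, the tolerable step and touch voltages commonly used in substation grounding design (IEEE Std 80-style expressions for a 50 kg body weight) have the form shown below; they are quoted from memory for illustration only and should be verified against the standard before use in an actual grounding design.

```latex
% Commonly used IEEE Std 80-style tolerable-voltage expressions (50 kg body);
% C_s is the surface-layer derating factor, rho_s the surface-layer
% resistivity (ohm-m), and t_s the fault clearing time (s):
\begin{align}
  E_{\mathrm{touch},50} &= \left(1000 + 1.5\, C_s \rho_s\right)\frac{0.116}{\sqrt{t_s}}, \\
  E_{\mathrm{step},50}  &= \left(1000 + 6\, C_s \rho_s\right)\frac{0.116}{\sqrt{t_s}}.
\end{align}
```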
Vorstius, Christian; Radach, Ralph; Lang, Alan R
2012-02-01
Reflexive and voluntary levels of processing have been studied extensively with respect to possible impairments due to alcohol intoxication. This study examined alcohol effects at the 'automated' level of processing essential to many complex visual processing tasks (e.g., reading, visual search) that involve ongoing modifications or reprogramming of well-practiced routines. Data from 30 participants (16 male) were collected in two counterbalanced sessions (alcohol vs. no-alcohol control; mean breath alcohol concentration = 68 mg/dL vs. 0 mg/dL). Eye movements were recorded during a double-step task where 75% of trials involved two target stimuli in rapid succession (inter-stimulus interval [ISI]=40, 70, or 100 ms) so that they could elicit two distinct saccades or eye movements (double steps). On 25% of trials a single target appeared. Results indicated that saccade latencies were longer under alcohol. In addition, the proportion of single-step responses and the mean saccade amplitude (length) of primary saccades decreased significantly with increasing ISI. The key novel finding, however, was that the reprogramming time needed to cancel the first saccade and adjust saccade amplitude was extended significantly by alcohol. The additional time made available by prolonged latencies due to alcohol was not utilized by the saccade programming system to decrease the number of two-step responses. These results represent the first demonstration of specific alcohol-induced programming deficits at the automated level of oculomotor processing.
Gobezayehu, Abebe Gebremariam; Mohammed, Hajira; Dynes, Michelle M; Desta, Binyam Fekadu; Barry, Danika; Aklilu, Yeshiwork; Tessema, Hanna; Tadesse, Lelissie; Mikulich, Meridith; Buffington, Sandra Tebben; Sibley, Lynn M
2014-01-01
We examined the degree to which the skills and knowledge of health workers in Ethiopia were retained 18 months after initial maternal and newborn health training and sought to identify factors associated with 18-month skills assessment performance. A nonexperimental, descriptive design was employed to assess 18-month skills performance on the topics of Prevent Problems Before Baby Is Born and Prevent Problems After Baby Is Born. Assessment was conducted by project personnel who also received the maternal and newborn health training and additional training to reliably assess health worker performance. Among the 732 health workers who participated in maternal and newborn health training in 6 rural districts of the Amhara and Oromia regions of Ethiopia (including pretesting before training and a posttraining posttest), 75 health extension workers (78%) and 234 guide team members (37%) participated in 18-month posttest. Among health extension workers in both regions, strong knowledge retention was noted in 10 of 14 care steps for Prevent Problems Before Baby Is Born and in 14 of 16 care steps of Prevent Problems After Baby Is Born. Lower knowledge retention was observed among guide team members in the Amhara region. Across regions, health workers scored lowest on steps that involved nonaction (eg, do not give oxytocin). Educational attainment and age were among the few variables found to significantly predict test performance, although participants varied substantially by other sociodemographic characteristics. Results demonstrated an overall strong retention of knowledge and skills among health extension workers and highlighted the need for improvement among some guide team members. Refresher training and development of strategies to improve knowledge of retention of low-performing steps were recommended. © 2014 by the American College of Nurse-Midwives.
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
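A compact sketch of the two-stage selection logic is given below; the scoring functions, toy data, and subset sizes are illustrative assumptions, and the paper's main contribution (the inference model that rigorously derives the augmented subset size) is not reproduced here.

```python
import numpy as np

def two_stage_atlas_selection(target, atlases, cheap_score, full_score,
                              augmented_size, fusion_size):
    """Schematic two-stage atlas selection (not the authors' code).

    Stage 1: rank all atlases with a cheap registration/relevance score and
             keep an augmented subset large enough that the truly relevant
             atlases survive with high probability.
    Stage 2: run full-fledged registration only on that subset and keep the
             top `fusion_size` atlases for label fusion.
    """
    prelim = np.array([cheap_score(target, a) for a in atlases])
    augmented = np.argsort(prelim)[::-1][:augmented_size]

    refined = np.array([full_score(target, atlases[i]) for i in augmented])
    return augmented[np.argsort(refined)[::-1][:fusion_size]]

# Toy example with 1-D "images" and correlation-based scores (illustrative):
rng = np.random.default_rng(0)
atlases = [rng.normal(size=256) for _ in range(30)]
target = atlases[3] + 0.1 * rng.normal(size=256)
corr = lambda t, a: float(np.corrcoef(t, a)[0, 1])
print(two_stage_atlas_selection(target, atlases, corr, corr,
                                augmented_size=10, fusion_size=3))
```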
Extension in Rural Communities: A Manual for Agricultural and Home Extension Workers.
ERIC Educational Resources Information Center
Savile, A. H.
A practical guide is provided for trainers of advisory and extension workers and local leaders in agriculture and community development in developing nations. Basic principles of agricultural extension, community survey procedures, elements of program planning, and purposes and methods of program evaluation are described. Then follow two chapters…
Seismic vulnerability assessment to earthquake at urban scale: A case of Mostaganem city in Algeria
Benanane, Abdelkader; Boutaraa, Zohra
2018-01-01
The focus of this study was the seismic vulnerability assessment of the buildings constituting Mostaganem city in Algeria. Situated 320 km to the west of Algiers, Mostaganem city encompasses a valuable cultural and architectural built heritage. The city has suffered several moderate earthquakes in recent years; these have led to extensive structural damage to old structures, especially unreinforced historical buildings. The study was divided into two essential steps, the first step being to establish fragility curves based on a non-linear static pushover analysis for each typology and height of building. Twenty-seven pushover analyses were performed by means of SAP2000 software (three analyses for each type of building). The second step was to adopt the US HAZUS software and to modify it to suit the typical setting and parameters of the city of Mostaganem. A seismic vulnerability analysis of Mostaganem city was conducted using the HAZUS software after inputting the new parameters of the fragility curves established within the first step. The results indicated that the number of poor-quality buildings expected to be totally destroyed under a 5.5 Mw earthquake scenario could exceed 28. Three percent of unreinforced masonry (URM) buildings were completely damaged and 10% were extensively damaged. Of the concrete frame buildings, 6% were extensively damaged and 19% were moderately damaged. By year of construction, 6% of both concrete frame and URM buildings built before 1980 are estimated to collapse. Buildings constructed between 1980 and 1999 are more resistant; 8% of those structures were extensively damaged and 18% were moderately damaged. Only 10% of buildings constructed after 1999 were moderately damaged. The results also show that the main hospital of the city, built before 1960, would be extensively damaged during an earthquake of 5.5 Mw. The number of human casualties could reach several hundred – 10.5% of residents of URM buildings are injured or dead. Compared with the URM buildings, concrete frame buildings have lower casualty rates of 1.5% and 0.5% for those built before and after 1980, respectively. It was concluded that Mostaganem city belongs to the seismically vulnerable zones of Algeria; in this regard, an action plan is needed for the rehabilitation of old constructions. In addition, the effectiveness of establishing and introducing new and appropriate fragility curves was demonstrated.
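The fragility curves mentioned in the first step typically take the HAZUS lognormal form shown below; the specific median displacements and dispersions for Mostaganem's building typologies are those calibrated in the study and are not given here.

```latex
% HAZUS-style lognormal fragility curve: probability of reaching or exceeding
% damage state ds given spectral displacement S_d (schematic form):
\begin{equation}
  P\!\left[\, ds \mid S_d \,\right]
    = \Phi\!\left[\frac{1}{\beta_{ds}}
      \ln\!\left(\frac{S_d}{\bar{S}_{d,ds}}\right)\right],
\end{equation}
% where \bar{S}_{d,ds} is the median spectral displacement for damage state ds,
% \beta_{ds} its lognormal standard deviation, and \Phi the standard normal CDF.
```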
Numerical, analytical, experimental study of fluid dynamic forces in seals
NASA Technical Reports Server (NTRS)
Shapiro, William; Artiles, Antonio; Aggarwal, Bharat; Walowit, Jed; Athavale, Mahesh M.; Preskwas, Andrzej J.
1992-01-01
NASA/Lewis Research Center is sponsoring a program for providing computer codes for analyzing and designing turbomachinery seals for future aerospace and engine systems. The program is made up of three principal components: (1) the development of advanced three dimensional (3-D) computational fluid dynamics codes, (2) the production of simpler two dimensional (2-D) industrial codes, and (3) the development of a knowledge based system (KBS) that contains an expert system to assist in seal selection and design. The first task has been to concentrate on cylindrical geometries with straight, tapered, and stepped bores. Improvements have been made by adoption of a colocated grid formulation, incorporation of higher order, time accurate schemes for transient analysis and high order discretization schemes for spatial derivatives. This report describes the mathematical formulations and presents a variety of 2-D results, including labyrinth and brush seal flows. Extensions to 3-D are presently in progress.
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicle. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of the detection module and the tracking module. In this paper, we improve both steps. We improve the detection module by incorporating the temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
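As a concrete illustration of the correlation filter component, a minimal single-channel MOSSE-style filter in numpy is sketched below; it stands in for, and is far simpler than, the compressed deep CNN-feature correlation filter proposed in the paper, and the class and parameter names are illustrative.

```python
import numpy as np

class CorrelationFilterTracker:
    """Minimal single-channel MOSSE-style correlation filter (illustrative
    stand-in for the paper's compressed CNN-feature tracker)."""

    def __init__(self, patch, sigma=2.0, lam=1e-3):
        self.shape = patch.shape
        self.G = np.fft.fft2(self._gaussian_peak(sigma))  # desired response
        F = np.fft.fft2(patch)
        self.A = self.G * np.conj(F)                       # filter numerator
        self.B = F * np.conj(F) + lam                      # filter denominator

    def _gaussian_peak(self, sigma):
        h, w = self.shape
        y, x = np.mgrid[0:h, 0:w]
        g = np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma ** 2))
        return np.fft.fftshift(g)          # peak moved to (0, 0)

    def respond(self, patch):
        """Correlation response map; its argmax gives the new target offset."""
        Z = np.fft.fft2(patch)
        return np.real(np.fft.ifft2(Z * (self.A / self.B)))

# Schematic use inside a tracking-by-detection loop: detections are associated
# to existing tracks, and a lost track keeps its filter so the object can be
# re-identified (ReID) when it reappears.
# tracker = CorrelationFilterTracker(patch_64x64)
# response = tracker.respond(new_patch_64x64)
```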
Multiple intensity distributions from a single optical element
NASA Astrophysics Data System (ADS)
Berens, Michael; Bruneton, Adrien; Bäuerle, Axel; Traub, Martin; Wester, Rolf; Stollenwerk, Jochen; Loosen, Peter
2013-09-01
We report on an extension of the previously published two-step freeform optics tailoring algorithm using a Monge-Kantorovich mass transportation framework. The algorithm's ability to design multiple freeform surfaces allows for the inclusion of multiple distinct light paths and hence the implementation of multiple lighting functions in a single optical element. We demonstrate the procedure in the context of automotive lighting, in which a fog lamp and a daytime running lamp are integrated in a single optical element illuminated by two distinct groups of LEDs.
ERIC Educational Resources Information Center
Yates, Dan; Ward, Chris
2014-01-01
This study represents an extension of longitudinal studies regarding personal financial literacy. Graduating college students must have a financial plan in place as they enter the workforce along with a "game plan" on how to attack their college debt. A college personal finance course can help each student develop their personalized…
5 Steps to Food Preservation Program Meets the Needs of Idaho Families
ERIC Educational Resources Information Center
Dye, Lorie; Hoffman, Katie
2014-01-01
University of Idaho FCS Extension Educators in southeastern Idaho developed a five-lesson condensed version of safe food preservation classes, driven by participants' interest to meet the needs of everyday home preservers. A post-test survey revealed that participants took the course to be self-reliant, use their own produce, and be in control of…
Voevodin, A F; Lapin, B A; Agrba, V Z; Timanovskaya, V V
1978-01-01
A new technique (indirect double immunodiffusion) for detection of EBV-associated soluble antigen and corresponding antibodies has been developed. This technique includes three steps: 1) simple double immunodiffusion with extracts of Raji cells (or other EBV-genome positive cells) and human sera containing antibodies against EBV-associated soluble antigen; 2) extensive washing and treatment with anti-human globulin; 3) extensive washing and treatment with tannic acid. Using this test it was shown that the soluble antigen indistinguishable from EBV-associated soluble antigen was present in KMPG-1 cells producing HVP.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
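A small numpy/scipy sketch of the cross-relation (subspace-style) idea in one dimension follows: with two channels, conv(y1, h2) = conv(y2, h1), so the stacked convolution matrices have the filter pair in their null space. This is a simplified, noise-free 1-D illustration under assumed filter lengths; the paper's algorithms handle 2-D images, noise, and the very large matrices involved.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(y, L):
    """Full-convolution matrix C with C @ h == np.convolve(y, h) for len(h) == L."""
    col = np.concatenate([y, np.zeros(L - 1)])
    row = np.zeros(L)
    row[0] = y[0]
    return toeplitz(col, row)

def estimate_filters(y1, y2, L):
    """Two-channel cross-relation estimate of the unknown FIR blurs (up to scale).

    Since conv(y1, h2) - conv(y2, h1) = 0, the stacked matrix has [h2; h1] in
    its null space; it is recovered as the right singular vector associated
    with the smallest singular value.
    """
    M = np.hstack([conv_matrix(y1, L), -conv_matrix(y2, L)])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]
    return v[L:], v[:L]     # estimates of (h1, h2), up to a common scale

# Noise-free 1-D check with assumed filter length L = 3:
rng = np.random.default_rng(1)
x = rng.normal(size=200)
h1, h2 = np.array([1.0, 0.5, 0.2]), np.array([0.8, -0.3, 0.1])
y1, y2 = np.convolve(x, h1), np.convolve(x, h2)
h1_est, h2_est = estimate_filters(y1, y2, L=3)
print(np.round(h1_est / h1_est[0], 3), np.round(h2_est / h2_est[0], 3))
```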
Integration of living cells into nanostructures using non-conventional self-assembly
NASA Astrophysics Data System (ADS)
Carnes, Eric C.
Patternable cell immobilization is an essential feature of any solid-state device designed for interrogating or exploiting living cells. Immobilized cells must remain viable in a robust matrix that promotes fluidic connectivity between the cells and their environment while retaining the ability to establish and maintain necessary chemical gradients. A suitable inorganic matrix can be constructed via evaporation-induced self-assembly of nanostructured silica, in which phospholipids are used in place of traditional surfactant structure-directing agents in order to enhance cell viability and to create a coherent interface between the cell and the surrounding three-dimensional nanostructure. We have used this technique to develop two distinct cell encapsulation processes: cell-directed assembly and cell-directed integration. Cell-directed assembly is a one-step procedure that provides superior viability of immobilized cells by encouraging cells to interact with the developing host matrix. Limitations of this system include low viability for some cell types due to exposure to solvents and stresses, as well as a lack of control over the developing host nanostructure. Cell-directed integration addresses these shortcomings by introducing a two-step process in which cells become encapsulated in a pre-formed silica matrix. The validity of each encapsulation method has been demonstrated with Gram-positive and Gram-negative bacteria, yeast, and mammalian cells. The ability of the immobilized cells to establish relevant gradients of ions or signaling molecules, a key feature of these systems, has been characterized. Additionally, extension of cell encapsulation to address lingering questions in cell biology is addressed. We have also adapted these immobilization processes to be compatible with a variety of patterning strategies having tailorable properties. Widely available photolithography techniques, as well as direct aerosol deposition, have been adapted to provide methods for obtaining both positive and negative transfer of desired cell patterns. Multi-step lithography is also used to create a highly functional system allowing spatial control of not only cells but also media and other molecules of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duwel, A.E.; Watanabe, S.; Trias, E.
1997-11-01
New resonance steps are found in the experimental current-voltage characteristics of long, discrete, one-dimensional Josephson junction arrays with open boundaries and in an external magnetic field. The junctions are underdamped, connected in parallel, and dc biased. Numerical simulations based on the discrete sine-Gordon model are carried out, and show that the solutions on the steps are periodic trains of fluxons, phase locked by a finite amplitude radiation. Power spectra of the voltages consist of a small number of harmonic peaks, which may be exploited for possible oscillator applications. The steps form a family that can be numbered by the harmonic content of the radiation, the first member corresponding to the Eck step. Discreteness of the arrays is shown to be essential for appearance of the higher order steps. We use a multimode extension of the harmonic balance analysis, and estimate the resonance frequencies, the ac voltage amplitudes, and the theoretical limit on the output power on the first two steps. © 1997 American Institute of Physics.
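For reference, the discrete sine-Gordon model referred to in this abstract is commonly written in the normalized form below (our notation; the open-boundary conditions carry the applied-field, or frustration, terms and are only indicated schematically).

```latex
% Discrete sine-Gordon model for a parallel, underdamped, dc-biased array:
% phi_j is the gauge-invariant phase of junction j, Gamma the damping,
% Lambda^2 the discreteness parameter, and i_b the normalized bias current.
\begin{equation}
  \ddot{\phi}_j + \Gamma\,\dot{\phi}_j + \sin\phi_j
    = \frac{1}{\Lambda^{2}}\left(\phi_{j+1} - 2\phi_j + \phi_{j-1}\right) + i_b ,
  \qquad j = 1,\dots,N,
\end{equation}
% with open boundaries at j = 1 and j = N modified by the external magnetic
% field (frustration).
```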
Lange, Bernd Markus; Rios-Estepa, Rigoberto
2014-01-01
The integration of mathematical modeling with analytical experimentation in an iterative fashion is a powerful approach to advance our understanding of the architecture and regulation of metabolic networks. Ultimately, such knowledge is highly valuable to support efforts aimed at modulating flux through target pathways by molecular breeding and/or metabolic engineering. In this article we describe a kinetic mathematical model of peppermint essential oil biosynthesis, a pathway that has been studied extensively for more than two decades. Modeling assumptions and approximations are described in detail. We provide step-by-step instructions on how to run simulations of dynamic changes in pathway metabolites concentrations.
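As a toy illustration of running such a kinetic simulation, the snippet below integrates a linear three-enzyme Michaelis-Menten chain with scipy; the rate laws, parameter values, and time scale are hypothetical and far simpler than the peppermint essential oil model described in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pathway_odes(t, c, vmax, km):
    """Toy kinetic model: a linear three-enzyme pathway with Michaelis-Menten
    steps (illustrative only; the real model has many more reactions)."""
    s0, s1, s2, product = c
    v = [vmax[i] * s / (km[i] + s) for i, s in enumerate((s0, s1, s2))]
    return [-v[0], v[0] - v[1], v[1] - v[2], v[2]]

vmax = [1.0, 0.6, 0.9]      # hypothetical enzyme capacities
km   = [5.0, 2.0, 4.0]      # hypothetical Michaelis constants
c0   = [10.0, 0.0, 0.0, 0.0]  # initial metabolite pools

sol = solve_ivp(pathway_odes, (0.0, 48.0), c0, args=(vmax, km), dense_output=True)
print(sol.y[:, -1])         # metabolite levels at the end of the simulation
```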
Demura, Tomohiro; Demura, Shin-ichi; Uchiyama, Masanobu; Sugiura, Hiroki
2014-01-01
Gait properties change with age because of a decrease in lower limb strength and visual acuity or knee joint disorders. Gait changes commonly result from these combined factors. This study aimed to examine the effects of knee extension strength, visual acuity, and knee joint pain on the gait properties of 181 healthy female older adults (age: 76.1 (5.7) years). Walking speed, cadence, stance time, swing time, double support time, step length, step width, walking angle, and toe angle were selected as gait parameters. Knee extension strength was measured by isometric dynamometry, and decreased visual acuity and knee joint pain were evaluated by subjective judgment of whether or not such factors created a hindrance during walking. Among older adults without vision problems and knee joint pain that affected walking, those with superior knee extension strength had significantly greater walking speed and step length than those with inferior knee extension strength (P < .05). Persons with visual acuity problems had higher cadence and shorter stance time. In addition, persons with pain in both knees showed slower walking speed and longer stance and double support times. Decreased knee extension strength, decreased visual acuity, and knee joint pain are factors affecting gait in female older adults. Decreased knee extension strength and knee joint pain mainly affect the distance and time parameters of gait, respectively.
Kageyama, Hakuto; Tanaka, Yoshito; Takabe, Teruhiro
2018-06-01
Betaine (trimethylglycine) is an important compatible solute that accumulates in response to abiotic stresses such as drought and salinity. Biosynthetic pathways of betaine have been extensively studied, but they remain to be clarified in algae. The diatom Thalassiosira pseudonana CCMP1335 is an important component of marine ecosystems. Here we show that the genome sequence of Thalassiosira suggests the presence of two biosynthetic pathways for betaine, via three-step methylation of glycine and via two-step oxidation of choline. Choline oxidation via choline dehydrogenase was suggested, and its sequential characteristics were analyzed. A candidate gene TpORF1 for glycine methylation encodes a protein consisting of 574 amino acids with two putative tandem repeat methyltransferase domains. TpORF1 was expressed in E. coli, and the purified protein was shown to synthesize betaine via three-step methylation of glycine; it was designated TpGSDMT. Proteins containing the C-terminal or N-terminal half were expressed in E. coli and exhibited methyltransferase activities with different substrate specificities for glycine, sarcosine, and dimethylglycine. Upregulation of TpGSDMT transcription and betaine levels were observed at high salinity, suggesting the importance of TpGSDMT for salt tolerance in T. pseudonana cells. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Development of an e-Learning Program for Extensive Reading
ERIC Educational Resources Information Center
Okazaki, Hironobu; Hashimoto, Shinichi; Fukuda, Eri; Nitta, Haruhiko; Kido, Kazuhiko
2012-01-01
As extensive reading becomes more commonplace in the EFL/ESL classroom, there is a rise in the number of instructors and administrators who are looking for cost-effective and space-saving methods to carry out extensive reading activities. Two extensive reading systems to respond to such concerns were developed with the support of a Grant-in-Aid…
The Core Avionics System for the DLR Compact-Satellite Series
NASA Astrophysics Data System (ADS)
Montenegro, S.; Dittrich, L.
2008-08-01
The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture which was first used in the bispectral infrared detector mission (BIRD). The next step improves the dependability, flexibility, and simplicity of the whole core avionics system. Important aspects of this concept have already been implemented, simulated, and tested in other ESA and industrial projects, so the basic concept can be considered proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing core avionics system of BIRD to meet current and future requirements regarding the flexibility, availability, and reliability of small satellites and the continuously increasing demand for mass memory and computational power.
Regression Analysis of a Disease Onset Distribution Using Diagnosis Data
Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.
2008-01-01
Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant-coefficient model problems, the acceleration factor increases from 1 to 2 as the convergence rate of the one-step procedure decreases from +infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.
STEPS: Moving from Welfare to Work.
ERIC Educational Resources Information Center
Vail, Ann; Cummings, Merrilyn; Kratzer, Connie; Galindo, Vickie
Cooperative extension service faculty at New Mexico State University started the Steps to Employment and Personal Success (STEPS) program to help Temporary Assistance for Needy Families (TANF) clients qualify for and maintain full-time employment and strengthen their families for long-term success. Clients are referred to STEPS by New Mexico…
Two-Step Plasma Process for Cleaning Indium Bonding Bumps
NASA Technical Reports Server (NTRS)
Greer, Harold F.; Vasquez, Richard P.; Jones, Todd J.; Hoenk, Michael E.; Dickie, Matthew R.; Nikzad, Shouleh
2009-01-01
A two-step plasma process has been developed as a means of removing surface oxide layers from indium bumps used in flip-chip hybridization (bump bonding) of integrated circuits. The two-step plasma process makes it possible to remove surface indium oxide, without incurring the adverse effects of the acid etching process.
The Athena Astrophysical MHD Code in Cylindrical Geometry
NASA Astrophysics Data System (ADS)
Skinner, M. A.; Ostriker, E. C.
2011-10-01
We have developed a method for implementing cylindrical coordinates in the Athena MHD code (Skinner & Ostriker 2010). The extension has been designed to alter the existing Cartesian-coordinates code (Stone et al. 2008) as minimally and transparently as possible. The numerical equations in cylindrical coordinates are formulated to maintain consistency with constrained transport, a central feature of the Athena algorithm, while making use of previously implemented code modules such as the eigensystems and Riemann solvers. Angular-momentum transport, which is critical in astrophysical disk systems dominated by rotation, is treated carefully. We describe modifications for cylindrical coordinates of the higher-order spatial reconstruction and characteristic evolution steps as well as the finite-volume and constrained transport updates. Finally, we have developed a test suite of standard and novel problems in one, two, and three dimensions designed to validate our algorithms and implementation and to be of use to other code developers. The code is suitable for use in a wide variety of astrophysical applications and is freely available for download on the web.
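As an illustration of why curvilinear coordinates require extra care in a finite-volume scheme (this is a schematic textbook form, not an equation taken from the Athena papers, and it omits the magnetic terms), the radial momentum equation of ideal hydrodynamics in cylindrical coordinates acquires a geometric source term when the fluxes are written in conservative form:

\[
\frac{\partial (\rho v_R)}{\partial t}
 + \frac{1}{R}\frac{\partial}{\partial R}\!\Big[R\big(\rho v_R^{2} + P\big)\Big]
 + \frac{1}{R}\frac{\partial (\rho v_R v_\phi)}{\partial \phi}
 + \frac{\partial (\rho v_R v_z)}{\partial z}
 = \frac{\rho v_\phi^{2} + P}{R} .
\]

It is this centrifugal-plus-pressure term on the right-hand side that must be discretized consistently if angular momentum is to be transported accurately.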
Streamlining Collaborative Planning in Spacecraft Mission Architectures
NASA Technical Reports Server (NTRS)
Misra, Dhariti; Bopf, Michel; Fishman, Mark; Jones, Jeremy; Kerbel, Uri; Pell, Vince
2000-01-01
During the past two decades, the planning and scheduling community has substantially increased the capability and efficiency of individual planning and scheduling systems. Relatively recently, research work to streamline collaboration between planning systems is gaining attention. Spacecraft missions stand to benefit substantially from this work as they require the coordination of multiple planning organizations and planning systems. Up to the present time this coordination has demanded a great deal of human intervention and/or extensive custom software development efforts. This problem will become acute with increased requirements for cross-mission plan coordination and multi-spacecraft mission planning. The Advanced Architectures and Automation Branch of NASA's Goddard Space Flight Center is taking innovative steps to define collaborative planning architectures, and to identify coordinated planning tools for Cross-Mission Campaigns. Prototypes are being developed to validate these architectures and assess the usefulness of the coordination tools by the planning community. This presentation will focus on one such planning coordination tool, named Visual Observation Layout Tool (VOLT), which is currently being developed to streamline the coordination between astronomical missions.
Hosoya, Yumiko; Tay, Franklin R.; Miyakoshi, Shoichi; Pashley, David H.
2013-01-01
Purpose This study evaluated the quality of the interface of sound and carious primary tooth dentin bonded with two 4-META one-step self-etch adhesives. Methods Twelve sound and twelve carious primary molars were bonded with AQ Bond Plus (AQBP; Sun Medical) or Hybrid Bond (HB; Sun Medical) and restored with Clearfil Protect Liner F (Kuraray Medical Inc.). After 24 hours of water immersion, the teeth were sectioned and polished. Resin-dentin interfaces were measured with a nano-indentation tester and hardness and Young’s modulus were calculated. Data were analyzed using one-way or two-way ANOVA and Fisher’s PLSD test with α=0.05. Resin-dentin interfaces were also observed with SEM and TEM. Ammoniacal silver nitrate was used as a tracer for TEM observation. Results Hardness and Young’s modulus of the interfacial dentin were significantly lower than those of the underlying intact dentin except for the carious-AQBP group. However, there was no significant difference in hardness or Young's modulus of the interfacial dentin among the groups. TEM revealed extensive interfacial nanoleakage in sound dentin bonded with either AQBP or HB. For the carious teeth, nanoleakage was absent in the hybrid layers bonded with the two adhesives. However, extensive silver deposits were identified in the subsurface, porous caries-affected dentin. PMID:18795517
Feed forward and feedback control for over-ground locomotion in anaesthetized cats
NASA Astrophysics Data System (ADS)
Mazurek, K. A.; Holinski, B. J.; Everaert, D. G.; Stein, R. B.; Etienne-Cummings, R.; Mushahwar, V. K.
2012-04-01
The biological central pattern generator (CPG) integrates open and closed loop control to produce over-ground walking. The goal of this study was to develop a physiologically based algorithm capable of mimicking the biological system to control multiple joints in the lower extremities for producing over-ground walking. The algorithm used state-based models of the step cycle, each of which produced different stimulation patterns. Two configurations were implemented to restore over-ground walking in five adult anaesthetized cats using intramuscular stimulation (IMS) of the main hip, knee and ankle flexor and extensor muscles in the hind limbs. An open loop controller relied only on intrinsic timing while a hybrid-CPG controller added sensory feedback from force plates (representing limb loading), and accelerometers and gyroscopes (representing limb position). Stimulation applied to hind limb muscles caused extension or flexion in the hips, knees and ankles. A total of 113 walking trials were obtained across all experiments. Of these, 74 were successful in which the cats traversed 75% of the 3.5 m over-ground walkway. In these trials, the average peak step length decreased from 24.9 ± 8.4 to 21.8 ± 7.5 (normalized units) and the median number of steps per trial increased from 7 (Q1 = 6, Q3 = 9) to 9 (8, 11) with the hybrid-CPG controller. Moreover, within these trials, the hybrid-CPG controller produced more successful steps (step length ≤ 20 cm; ground reaction force ≥ 12.5% body weight) than the open loop controller: 372 of 544 steps (68%) versus 65 of 134 steps (49%), respectively. This supports our previous preliminary findings, and affirms that physiologically based hybrid-CPG approaches produce more successful stepping than open loop controllers. The algorithm provides the foundation for a neural prosthetic controller and a framework to implement more detailed control of locomotion in the future.
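The controller itself is not given in the abstract; the sketch below only illustrates the general structure of a state-based stimulation controller in which sensory feedback (limb load and limb angle) can trigger phase transitions early, with an intrinsic timer as an open-loop fallback. State names, thresholds, timings, and stimulation levels are invented placeholders, not values from the cat experiments.

# Minimal sketch of a state-based, hybrid-CPG-style stimulation controller.
# States, thresholds, timings, and stimulation levels are invented placeholders.
STATES = {
    "flexion":   {"stim": {"hip_flexor": 1.0, "knee_flexor": 1.0},     "max_s": 0.4},
    "extension": {"stim": {"hip_extensor": 1.0, "knee_extensor": 1.0}, "max_s": 0.6},
}

def next_state(state, t_in_state, limb_load, limb_angle):
    """Advance on sensory triggers (hybrid mode) or on intrinsic timing (fallback)."""
    if state == "extension":
        # Limb unloading near the end of stance triggers swing (flexion).
        if limb_load < 0.15 or t_in_state > STATES[state]["max_s"]:
            return "flexion"
    else:
        # A sufficiently forward limb angle triggers the stance (extension) phase.
        if limb_angle > 20.0 or t_in_state > STATES[state]["max_s"]:
            return "extension"
    return state

# Toy control loop with 10 ms ticks and constant (fake) sensor readings.
state, t_in_state = "extension", 0.0
for _ in range(100):
    new = next_state(state, t_in_state, limb_load=0.10, limb_angle=25.0)
    t_in_state = 0.0 if new != state else t_in_state + 0.01
    state = new
    # STATES[state]["stim"] would be sent to the intramuscular stimulator here.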
Barreira, Tiago V; Brouillette, Robert M; Foil, Heather C; Keller, Jeffrey N; Tudor-Locke, Catrine
2013-10-01
The purpose of this study was to compare the steps/d derived from the ActiGraph GT3X+ using the manufacturer's default filter (DF) and low-frequency-extension filter (LFX) with those from the NL-1000 pedometer in an older adult sample. Fifteen older adults (61-82 yr) wore a GT3X+ (24 hr/day) and an NL-1000 (waking hours) for 7 d. Day was the unit of analysis (n = 86 valid days) comparing (a) GT3X+ DF and NL-1000 steps/d and (b) GT3X+ LFX and NL-1000 steps/d. DF was highly correlated with NL-1000 (r = .80), but there was a significant mean difference (-769 steps/d). LFX and NL-1000 were highly correlated (r = .90), but there also was a significant mean difference (8,140 steps/d). Percent difference and absolute percent difference between DF and NL-1000 were -7.4% and 16.0%, respectively, and for LFX and NL-1000 both were 121.9%. Regardless of filter used, GT3X+ did not provide comparable pedometer estimates of steps/d in this older adult sample.
Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.
Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R
2018-06-01
The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step-counting methods estimated less than 100% of actual steps; Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05). Five methods estimated more than 100% of actual steps; AG (no-LFE) on both wrists recorded 109% to 122% of steps (P > 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.
An adaptation to life in acid through a novel mevalonate pathway
Vinokur, Jeffrey M.; Cummins, Matthew C.; Korman, Tyler P.; ...
2016-12-22
Here, extreme acidophiles are capable of growth at pH values near zero. Sustaining life in acidic environments requires extensive adaptations of membranes, proton pumps, and DNA repair mechanisms. Here we describe an adaptation of a core biochemical pathway, the mevalonate pathway, in extreme acidophiles. Two previously known mevalonate pathways involve ATP dependent decarboxylation of either mevalonate 5-phosphate or mevalonate 5-pyrophosphate, in which a single enzyme carries out two essential steps: (1) phosphorylation of the mevalonate moiety at the 3-OH position and (2) subsequent decarboxylation. We now demonstrate that in extreme acidophiles, decarboxylation is carried out by two separate steps: previously identified enzymes generate mevalonate 3,5-bisphosphate and a new decarboxylase we describe here, mevalonate 3,5-bisphosphate decarboxylase, produces isopentenyl phosphate. Why use two enzymes in acidophiles when one enzyme provides both functionalities in all other organisms examined to date? We find that at low pH, the dual function enzyme, mevalonate 5-phosphate decarboxylase is unable to carry out the first phosphorylation step, yet retains its ability to perform decarboxylation. We therefore propose that extreme acidophiles had to replace the dual-purpose enzyme with two specialized enzymes to efficiently produce isoprenoids in extremely acidic environments.
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conducted extensive experiments that verified the correctness and flexibility of the proposed algorithm. PMID:27508502
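The central computation described above, normalizing single-step rule counts into a transition matrix and raising it to the power n to obtain n-step transition probabilities, can be sketched as follows; the 3x3 count matrix and location labels are invented placeholders, not data from the paper.

# Sketch: n-step transition probabilities from single-step sequential rules.
# The 3x3 example count matrix and location labels are hypothetical.
import numpy as np

locations = ["A", "B", "C"]
counts = np.array([[2., 5., 3.],   # raw counts of single-step rules A->A, A->B, A->C
                   [1., 1., 8.],
                   [6., 2., 2.]])
P = counts / counts.sum(axis=1, keepdims=True)   # row-normalize to probabilities

n = 3
Pn = np.linalg.matrix_power(P, n)                # n-step transition matrix

i, j = locations.index("A"), locations.index("C")
print(f"P(at C after {n} steps | start at A) = {Pn[i, j]:.3f}")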
Algorithm for automatic forced spirometry quality assessment: technological developments.
Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere
2014-01-01
We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E
2014-05-01
To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than do previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Defining Audience Segments for Extension Programming Using Reported Water Conservation Practices
ERIC Educational Resources Information Center
Monaghan, Paul; Ott, Emily; Wilber, Wendy; Gouldthorpe, Jessica; Racevskis, Laila
2013-01-01
A tool from social marketing can help Extension agents understand distinct audience segments among their constituents. Defining targeted audiences for Extension programming is a first step to influencing behavior change among the public. An online survey was conducted using an Extension email list for urban households receiving a monthly lawn and…
Limited rotation of the mobile-bearing in a rotating platform total knee prosthesis.
Garling, E H; Kaptein, B L; Nelissen, R G H H; Valstar, E R
2007-01-01
The hypothesis of this study was that the polyethylene bearing in a rotating platform total knee prosthesis shows axial rotation during a step-up motion, thereby facilitating the theoretical advantages of mobile-bearing knee prostheses. We examined 10 patients with rheumatoid arthritis who had a rotating platform total knee arthroplasty (NexGen LPS mobile, Zimmer Inc. Warsaw, USA). Fluoroscopic data were collected during a step-up motion six months postoperatively. A 3D-2D model fitting technique was used to reconstruct the in vivo 3D kinematics. The femoral component showed more axial rotation than the polyethylene mobile-bearing insert compared to the tibia during extension. In eight knees, the femoral component rotated internally with respect to the tibia during extension. In the other two knees the femoral component rotated externally with respect to the tibia. In all 10 patients, the femur showed more axial rotation than the mobile-bearing insert, indicating the femoral component was sliding on the polyethylene of the rotating platform during the step-up motion. Possible explanations are insufficient conformity between the femoral component and the insert, the anteriorly located pivot of the investigated rotating platform design, polyethylene-on-metal impingement, and fibrous tissue formation between the mobile-bearing insert and the tibial plateau.
Ultrasound: a subexploited tool for sample preparation in metabolomics.
Luque de Castro, M D; Delgado-Povedano, M M
2014-01-02
Metabolomics, one of the most recently emerged "omics", has taken advantage of ultrasound (US) to improve sample preparation (SP) steps. The metabolomics-US assisted SP step binomial has developed unevenly, depending on the area (vegetal or animal) and the SP step. Thus, vegetal metabolomics and US-assisted leaching have received the greatest attention (encompassing subdisciplines such as metallomics, xenometabolomics and, mainly, lipidomics), but liquid-liquid extraction and (bio)chemical reactions in metabolomics have also taken advantage of US energy. Clinical and animal samples have also benefited from US-assisted SP in metabolomics studies, but to a lesser extent. The main effects of US have been to shorten the time required for a given step and/or to increase its efficiency or its availability for automation; nevertheless, attention paid to potential degradation caused by US has been scant or nil. Achievements and weak points of the metabolomics-US assisted SP step binomial are discussed, and possible solutions to the present shortcomings are proposed. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilbur, D. Scott
2011-12-23
This grant was a one-year extension of another grant with the same title (DE-FG03-98ER62572). The objective of the studies was to continue in vivo evaluation of reagents to determine which changes in structure were most favorable for in vivo use. The focus of our studies was development and optimization of reagents for pretargeting alpha-emitting radionuclides At-211 or Bi-213 to cancer cells. Testing of the reagents was conducted in vitro and in animal model systems. During the funding period, all three specific aims set out in the proposed studies were worked on, and some additional studies directed at development of a method for direct labeling of proteins with At-211 were investigated. We evaluated reagents in two different approaches in 'two step' pretargeting protocols. These approaches are: (1) delivery of the radionuclide on recombinant streptavidin to bind with pretargeted biotinylated monoclonal antibody (mAb), and alternatively, (2) delivery of the radionuclide on a biotin derivative to bind with pretargeted antibody-streptavidin conjugates. The two approaches were investigated as it was unclear which will be superior for the short half-lived alpha-emitting radionuclides.
An Ability-Based View of the Organization: Strategic-Resource and Contingency Domains
ERIC Educational Resources Information Center
Nobre, Farley Simon; Walker, David S.
2011-01-01
Purpose: This paper extends the corporation-based metaphor of the tree by proposing that cognition is the core ability that nourishes the development of core competencies. From such an extension, this paper aims to take a step forward to answer the question: what is the role of cognition in the organization that is in pursuit of core competencies…
Kohn, K W
1977-05-01
Bifunctional alkylating agents are known to cross-link DNA by simultaneously alkylating two guanine residues located on opposite strands. Despite this apparent requirement for bifunctionality, 1-(2-chloroethyl)-1-nitrosoureas bearing a single alkylating function were found to cross-link DNA in vitro. Cross-linking was demonstrated by showing inhibition of alkali-induced strand separation. Extensive cross-linking was observed in DNA treated with 1-(2-chloroethyl)-1-nitrosourea, 1,3-bis-(2-chloroethyl)-1-nitrosourea, and 1-(2-chloroethyl)-3-cyclohexyl-1-nitrosourea. The reaction occurs in two steps, an initial binding followed by a second step that can proceed after removal of unbound drug. It is suggested that the first step is chloroethylation of a nucleophilic site on one strand and that the second step involves displacement of Cl- by a nucleophilic site on the opposite strand, resulting in an ethyl bridge between the strands. Consistent with this possibility, 1-(2-fluoroethyl)-3-cyclohexyl-1-nitrosourea produced much less cross-linking, as expected from the known low activity of F-, compared with Cl-, as a leaving group. 1-Methyl-1-nitrosourea, which is known to depurinate DNA, produced no detectable cross-linking.
Dondzila, Christopher J; Swartz, Ann M; Keenan, Kevin G; Harley, Amy E; Azen, Razia; Strath, Scott J
2016-12-01
The purpose of this study is to investigate whether an in-home, individually tailored intervention is efficacious in promoting increases in physical activity (PA) and improvements in physical functioning (PF) in low-active older adults. Participants were randomized to two groups for the 8-week intervention. The enhanced physical activity (EPA) group received individualized exercise programming, including personalized step goals and a resistance band training program, and the standard of care (SoC) group received a general activity goal. Pre- and post-intervention PF measures included choice step reaction time, knee extension/flexion strength, hand grip strength, and 8 ft up and go test completion time. Thirty-nine subjects completed this study (74.6 ± 6.4 years). Significant increases in steps/day were observed for both the EPA and SoC groups, although the improvements in the EPA group were significantly higher when including only those who adhered to weekly step goals. Both groups experienced significant PF improvements, albeit greater in the EPA group for the 8 ft up and go test and knee extension strength. A low cost, in-home intervention elicited improvements in both PA and PF. Future research is warranted to expand upon the size and scope of this study, exploring dose thresholds (and time frames) for PA to improve PF and strategies to further bolster adherence rates to maximize intervention benefits.
Hosseini, Elham; Janghorbani, Mohsen; Aminorroaya, Ashraf
2018-06-01
To study the incidence, risk factors, and pregnancy outcomes associated with gestational diabetes mellitus (GDM) diagnosed with one-step and two-step screening approaches. 1000 pregnant women who were eligible and consented to participate underwent fasting plasma glucose testing at the first prenatal visit (6-14 weeks). The women free from GDM or overt diabetes were screened at 24-28 weeks using the 50-g glucose challenge test (GCT) followed by a 100-g, 3-h oral glucose tolerance test (OGTT) (two-step method). Regardless of the GCT result, all women underwent a 75-g, 2-h OGTT within a one-week interval (one-step method). GDM incidence using the one-step and two-step methods was 9.3% (95% CI: 7.4-11.2) and 4.2% (95% CI: 2.9-5.5). GDM significantly increased the risk of macrosomia, gestational hypertension, preeclampsia, and cesarean section, and older age and a family history of diabetes significantly increased the risk of developing GDM in both approaches. In the two-step method, higher pre-pregnancy body mass index and lower physical activity during pregnancy, along with a higher rate of previous cesarean section, also significantly increased the risk of developing GDM. Despite a higher incidence of GDM using the one-step approach, more risk factors for and a stronger effect of GDM on adverse pregnancy outcomes were found when using the two-step approach. Longer follow-up of women with and without GDM may change the results using both approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
Structural Controls of the Tuscarora Geothermal Field, Elko County, Nevada
NASA Astrophysics Data System (ADS)
Dering, Gregory M.
Detailed geologic mapping, structural analysis, and well data have been integrated to elucidate the stratigraphic framework and structural setting of the Tuscarora geothermal area. Tuscarora is an amagmatic geothermal system that lies in the northern part of the Basin and Range province, ˜15 km southeast of the Snake River Plain and ˜90 km northwest of Elko, Nevada. The Tuscarora area is dominated by late Eocene to middle Miocene volcanic and sedimentary rocks, all overlying Paleozoic metasedimentary rocks. A geothermal power plant was constructed in 2011 and currently produces 18 MWe from an ˜170°C reservoir in metasedimentary rocks at a depth of 1740 m. Analysis of drill core reveals that the subsurface geology is dominated to depths of ˜700-1000 m by intracaldera deposits of the Eocene Big Cottonwood Canyon caldera, including blocks of basement-derived megabreccia. Furthermore, the Tertiary-Paleozoic nonconformity within the geothermal field has been recognized as the margin of this Eocene caldera. Structural relations combined with geochronologic data from previous studies indicate that Tuscarora has undergone extension since the late Eocene, with significant extension in the late Miocene-Pliocene to early Pleistocene. Kinematic analysis of fault slip data reveal an east-west-trending least principal paleostress direction, which probably reflects an earlier episode of Miocene extension. Two distinct structural settings at different scales appear to control the geothermal field. The regional structural setting is a 10-km wide complexly faulted left step or relay ramp in the west-dipping range-bounding Independence-Bull Run Mountains normal fault system. Geothermal activity occurs within the step-over where sets of east- and west-dipping normal faults overlap in a northerly trending accommodation zone. The distribution of hot wells and hydrothermal surface features, including boiling springs, fumaroles, and siliceous sinter, indicate that the geothermal system is restricted to the narrow (< 1 km) axial part of the accommodation zone, where permeability is maintained at depth around complex fault intersections. Shallow up-flow appears to be focused along several closely spaced steeply west-dipping north-northeast-striking normal faults within the axial part of the accommodation zone. These faults are favorably oriented for extension and fluid flow under the present-day northwest-trending regional extension direction indicated by previous studies of GPS geodetic data, earthquake focal mechanisms, and kinematic data from late Quaternary faults. The recognition of the axial part of an accommodation zone as a favorable structural setting for geothermal activity may be a useful exploration tool for development of drilling targets in extensional terranes, as well as for developing geologic models of known geothermal fields. Preliminary analysis of broad step-overs similar to Tuscarora reveals that geothermal activity occurs in a variety of subsidiary structural settings within these regions. In addition, the presence of several high-temperature systems in northeastern Nevada demonstrates the viability of electrical-grade geothermal activity in this region despite low present-day strain rates as indicated by GPS geodetic data. Geothermal exploration potential in northeastern Nevada may therefore be higher than previously recognized.
Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A.; Rusyn, Ivan; Tropsha, Alexander
2009-01-01
Background Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by the means of high-throughput screening. Objective A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. Methods and results A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: One group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD50 values from chemical descriptors. All models were extensively validated using special protocols. Conclusions The novelty of this modeling approach is that it uses the relationships between in vivo and in vitro data only to inform the initial construction of the hierarchical two-step QSAR models. Models resulting from this approach employ chemical descriptors only for external prediction of acute rodent toxicity. PMID:19672406
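A minimal sketch of the hierarchical two-step scheme described above: a classifier first predicts, from chemical descriptors only, whether a compound belongs to the "linear IC50-versus-LD50" group, and a group-specific k-nearest-neighbor regressor then predicts LD50. Descriptors, labels, and model settings are synthetic placeholders, not the ZEBET data or the authors' validated models.

# Sketch of a hierarchical two-step QSAR scheme: (1) classify compounds into
# the "linear IC50-vs-LD50" group vs. the rest from descriptors only, then
# (2) predict LD50 with a per-group k-nearest-neighbor regressor.
# Data, descriptors, and settings are placeholders, not the ZEBET protocol.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))            # placeholder chemical descriptors
group = (X[:, 0] > 0).astype(int)        # placeholder group labels (linear vs. not)
ld50 = X @ rng.normal(size=8) + group    # placeholder continuous LD50 values

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, group)
regressors = {g: KNeighborsRegressor(n_neighbors=5).fit(X[group == g], ld50[group == g])
              for g in (0, 1)}

def predict_ld50(x_new):
    g = clf.predict(x_new.reshape(1, -1))[0]                 # step 1: group affiliation
    return regressors[g].predict(x_new.reshape(1, -1))[0]    # step 2: group-specific kNN

print(predict_ld50(rng.normal(size=8)))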
Extension in Planned Social Change, the Indian Experience.
ERIC Educational Resources Information Center
Rudramoorthy, B.
Extension, the process of extending the knowledge of recent advances in science and technology to the people who need it, has been emphasized in India since the introduction of the Community Development Programme in 1952. Community development involves two distinct processes--extension education and community organization--and has had four…
Izewska, Joanna; Wesolowska, Paulina; Azangwe, Godfrey; Followill, David S.; Thwaites, David I.; Arib, Mehenna; Stefanic, Amalia; Viegas, Claudio; Suming, Luo; Ekendahl, Daniela; Bulski, Wojciech; Georg, Dietmar
2016-01-01
Abstract The International Atomic Energy Agency (IAEA) has a long tradition of supporting development of methodologies for national networks providing quality audits in radiotherapy. A series of co-ordinated research projects (CRPs) has been conducted by the IAEA since 1995 assisting national external audit groups developing national audit programs. The CRP ‘Development of Quality Audits for Radiotherapy Dosimetry for Complex Treatment Techniques’ was conducted in 2009–2012 as an extension of previously developed audit programs. Material and methods. The CRP work described in this paper focused on developing and testing two steps of dosimetry audit: verification of heterogeneity corrections, and treatment planning system (TPS) modeling of small MLC fields, which are important for the initial stages of complex radiation treatments, such as IMRT. The project involved development of a new solid slab phantom with heterogeneities containing special measurement inserts for thermoluminescent dosimeters (TLD) and radiochromic films. The phantom and the audit methodology have been developed at the IAEA and tested in multi-center studies involving the CRP participants. Results. The results of multi-center testing of methodology for two steps of dosimetry audit show that the design of audit procedures is adequate and the methodology is feasible for meeting the audit objectives. A total of 97% of TLD results in heterogeneity situations obtained in the study were within 3%, and all results were within 5% agreement with the TPS predicted doses. In contrast, only 64% of small beam profiles were within 3 mm agreement between the TPS calculated and film measured doses. Film dosimetry results have highlighted some limitations in TPS modeling of small beam profiles in the direction of MLC leaf movements. Discussion. Through multi-center testing, any challenges or difficulties in the proposed audit methodology were identified, and the methodology was improved. Using the experience of these studies, the participants could incorporate the auditing procedures in their national programs. PMID:26934916
NASA Astrophysics Data System (ADS)
Berry Bertram, Kathryn
2011-12-01
The Geophysical Institute (GI) Framework for Professional Development was designed to prepare culturally responsive teachers of science, technology, engineering, and math (STEM). Professional development programs based on the framework are created for rural Alaskan teachers who instruct diverse classrooms that include indigenous students. This dissertation was written in response to the question, "Under what circumstances is the GI Framework for Professional Development effective in preparing culturally responsive teachers of science, technology, engineering, and math?" Research was conducted on two professional development programs based on the GI Framework: the Arctic Climate Modeling Program (ACMP) and the Science Teacher Education Program (STEP). Both programs were created by backward design to student learning goals aligned with Alaska standards and rooted in principles of indigenous ideology. Both were created with input from Alaska Native cultural knowledge bearers, Arctic scientists, education researchers, school administrators, and master teachers with extensive instructional experience. Both provide integrated instruction reflective of authentic Arctic research practices, and training in diverse methods shown to increase indigenous student STEM engagement. While based on the same framework, these programs were chosen for research because they offer distinctly different training venues for K-12 teachers. STEP offered two-week summer institutes on the UAF campus for more than 175 teachers from 33 Alaska school districts. By contrast, ACMP served 165 teachers from one rural Alaska school district along the Bering Strait. Due to challenges in making professional development opportunities accessible to all teachers in this geographically isolated district, ACMP offered a year-round mix of in-person, long-distance, online, and local training. Discussion centers on a comparison of the strategies used by each program to address GI Framework cornerstones, on methodologies used to conduct program research, and on findings obtained. Research indicates that in both situations the GI Framework for Professional Development was effective in preparing culturally responsive STEM teachers. Implications of these findings and recommendations for future research are discussed in the conclusion.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
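For reference (these are the standard textbook forms of the two estimators named above, not the paper's model-switch adaptation), both approaches work with power posteriors \(p_\beta(\theta \mid y) \propto p(y \mid \theta)^{\beta}\, p(\theta)\) along a sequence \(0 = \beta_0 < \beta_1 < \dots < \beta_K = 1\). Path sampling (thermodynamic integration) evaluates

\[
\log p(y) = \int_{0}^{1} \mathbb{E}_{\beta}\!\left[\log p(y \mid \theta)\right] \mathrm{d}\beta ,
\]

approximated by quadrature over the \(\beta_k\), while stepping-stone sampling estimates the marginal likelihood as a product of ratios of normalizing constants,

\[
\hat{p}(y) = \prod_{k=1}^{K} \hat{r}_k ,
\qquad
\hat{r}_k = \frac{1}{n}\sum_{i=1}^{n} p\big(y \mid \theta_{k-1,i}\big)^{\beta_k - \beta_{k-1}} ,
\quad \theta_{k-1,i} \sim p_{\beta_{k-1}}(\theta \mid y) .
\]

A (log) Bayes factor is then the difference of two such log marginal likelihood estimates, or, in the model-switch variants assessed above, is estimated directly along a path connecting the two competing models.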
Mitchell, Kimberly J; Finkelhor, David; Becker-Blease, Kathryn A
2007-06-01
This article utilizes data from clinical reports of 929 adults to examine whether various problematic Internet experiences are distinctly different from, or extensions of, conventional problems. A TwoStep Cluster Analysis identified three mutually exclusive groups of adults: those with (1) online relationship problems and victimization; (2) online and offline problems; and (3) marital discord. Results suggest some initial support for the idea that problematic Internet experiences are often extensions of experiences and behaviors that pre-date the Internet. However, the Internet may be introducing some qualitatively new dimensions, such as increased severity, increased frequency, or unique dynamics, that require new responses or interventions.
Predicting the Extension of Biomedical Ontologies
Pesquita, Catia; Couto, Francisco M.
2012-01-01
Developing and extending a biomedical ontology is a very demanding task that can never be considered complete given our ever-evolving understanding of the life sciences. Extension in particular can benefit from the automation of some of its steps, thus releasing experts to focus on harder tasks. Here we present a strategy to support the automation of change capturing within ontology extension where the need for new concepts or relations is identified. Our strategy is based on predicting areas of an ontology that will undergo extension in a future version by applying supervised learning over features of previous ontology versions. We used the Gene Ontology as our test bed and obtained encouraging results with average f-measure reaching 0.79 for a subset of biological process terms. Our strategy was also able to outperform state of the art change capturing methods. In addition we have identified several issues concerning prediction of ontology evolution, and have delineated a general framework for ontology extension prediction. Our strategy can be applied to any biomedical ontology with versioning, to help focus either manual or semi-automated extension methods on areas of the ontology that need extension. PMID:23028267
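To make the supervised-learning step concrete, a minimal sketch (Python with scikit-learn) of the kind of per-term prediction described above follows; the features, labels, and classifier choice are hypothetical placeholders, not the authors' actual feature set or model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical per-term features extracted from previous ontology versions
# (e.g. number of children, depth, recent edit count, annotation count);
# label = 1 if the term was extended (gained descendants) in the next version.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (X[:, 1] + 0.3 * rng.random(1000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("f-measure:", f1_score(y_te, clf.predict(X_te)))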
Gao, Yong; Liao, Xin-Hong; Ma, Yan; Lu, Lu; Wei, Li-Yan; Yan, Xue
2017-12-01
This study aims to investigate the feasibility and performance of a two-step scoring system of ultrasound imaging in the diagnosis of prostate cancer. 75 patients with 888 consecutive histopathologically verified lesions were included in this study. In step 1, an initial 5-point scoring system was developed based on conventional transrectal ultrasound (TRUS). In step 2, a final scoring system was evaluated according to contrast-enhanced transrectal ultrasound (CE-TRUS). Each lesion was evaluated using the two-step scoring system (step 1 + step 2) and compared with using conventional TRUS alone (step 1). 888 lesions were histologically verified: 315 of them were prostate cancer from 46 patients and 573 were benign prostatic hypertrophy (BPH) from 29 patients. According to the two-step scoring system, 284 lesions were upgraded and 130 lesions were downgraded from step 1 to step 2 (i.e., the step 1 scores were reassessed in step 2). However, 96 cases were improperly upgraded after step 2, and 48 malignant lesions were still missed (scored 1) after step 2. For the two-step scoring system, the sensitivity, specificity, and accuracy were 84.7%, 83.2%, and 83.7%, respectively, versus 22.8%, 96.6%, and 70.4%, respectively, for conventional TRUS. The area under the ROC curve (AUC) for lesion diagnosis was 0.799-0.952 for the two-step scoring system, versus 0.479-0.712 for conventional TRUS. The difference in the diagnostic accuracy of the two-step scoring system and conventional TRUS was statistically significant (P<0.0001). The two-step scoring system was straightforward to use and achieved considerably better diagnostic performance for prostate cancer than conventional TRUS alone. The application of the two-step scoring system for prostate cancer is promising.
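As a consistency check derived from the counts reported above (315 malignant lesions with 48 still missed, and 573 benign lesions with 96 improperly upgraded), the stated figures follow directly from the standard definitions; a minimal Python illustration:

def diagnostic_metrics(tp, fn, tn, fp):
    """Standard definitions of sensitivity, specificity, and accuracy."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 267/315 = 0.847, 477/573 = 0.832, 744/888 = 0.838, closely matching the
# reported 84.7%, 83.2%, and 83.7% for the two-step scoring system
print(diagnostic_metrics(tp=315 - 48, fn=48, tn=573 - 96, fp=96))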
Protection of the Space Environment: The First Small Steps
NASA Astrophysics Data System (ADS)
Williamson, M.
The exploration of the space environment - by robotic and manned missions - is a natural extension of mankind's desire to explore his own planet. Likewise, the development of the space environment - for industry, commerce and tourism - is a natural extension of our current business and domestic environment. Unfortunately, it appears that our ability to pollute, degrade and even destroy aspects of the space environment is also an extension of an ability we have developed and practised here on Earth. This paper reviews the evidence of mankind's pollution of the space environment - which includes the planetary bodies - in the first 45 years of the Space Age, and extrapolates the potential for further degradation into its second half-century. It considers the future development of both scientific exploration and commercial exploitation - in orbit and on the surface of the planetary bodies - and the possible detrimental effects. In presenting the case for protection of the space environment, the paper makes recommendations concerning the first steps towards a solution to the problem. Among other things, it calls for the formation of an international consultative body, to consider the issues relevant to `Protection of the Space Environment' and to raise awareness of the subject among the growing body of space professionals and practitioners. It also recommends consideration of a `set of guidelines' or `code of practice' as a precursor to more formal policies or legislation. In doing so, however, it is careful to recognise the need to strike a balance between unbridled exploration and development, and a stifling regime of rules and regulations. The discussion of this subject requires a good deal more collective knowledge, understanding and maturity than has been evident in similar discussions regarding the Earth's environment. At present, that knowledge resides largely within the professional space community. Thus there is also a need for promulgation, both within and beyond that community. As the space frontier becomes accessible to a wider variety of individuals, corporations and other bodies, the requirement for protection of the space environment grows. If the space environment is to remain available for the study of and use by successive generations of explorers and developers, we must make the first steps towards protection now. In another twenty years or so - when the second generation of lunar explorers is making footprints on the surface - it may be too late.
Extreme events as foundation of Lévy walks with varying velocity
NASA Astrophysics Data System (ADS)
Kutner, Ryszard
2002-11-01
In this work we study the role of extreme events [E.W. Montroll, B.J. West, in: J.L. Lebowitz, E.W. Montroll (Eds.), Fluctuation Phenomena, SSM, vol. VII, North-Holland, Amsterdam, 1979, p. 63; J.-P. Bouchaud, M. Potters, Theory of Financial Risks: From Statistical Physics to Risk Management, Cambridge University Press, Cambridge, 2001; D. Sornette, Critical Phenomena in Natural Sciences. Chaos, Fractals, Selforganization and Disorder: Concepts and Tools, Springer, Berlin, 2000] in determining the scaling properties of Lévy walks with varying velocity. This model is an extension of the well-known Lévy walk model [J. Klafter, G. Zumofen, M.F. Shlesinger, in: M.F. Shlesinger, G.M. Zaslavsky, U. Frisch (Eds.), Lévy Flights and Related Topics in Physics, Lecture Notes in Physics, vol. 450, Springer, Berlin, 1995, p. 196; G. Zumofen, J. Klafter, M.F. Shlesinger, in: R. Kutner, A. Pȩkalski, K. Sznajd-Weron (Eds.), Anomalous Diffusion. From Basics to Applications, Lecture Notes in Physics, vol. 519, Springer, Berlin, 1999, p. 15] introduced in the context of chaotic dynamics, where a fixed value of the walker velocity is assumed for simplicity. Such an extension seems to be necessary when open and/or complex systems are studied. The model of Lévy walks with varying velocity is spanned on two coupled velocity-temporal hierarchies: the first consisting of velocities and the second of the corresponding time intervals which the walker spends between successive turning points. Both hierarchical structures are characterized by their own self-similar dimensions. The extreme event, which can appear within a given time interval, is defined as the single random step of the walker having the largest length. By finding power laws which describe the time dependence of this displacement and its statistics, we obtained two independent diffusion exponents, which are related to the above-mentioned dimensions and which characterize the extreme event kinetics. In this work we show the principal influence of extreme events on the basic quantities (one-step distributions and moments as well as two-step correlation functions) of the continuous-time random walk formalism. Besides, we construct both the waiting-time distribution and the sojourn probability density directly in real space and time in the scaling form by a proper component analysis which takes into account all possible fluctuations of the walker steps, in contrast to the extreme event analysis. In this work we focus on the basic quantities, since the summarized multi-step ones were already discussed earlier [Physica A 264 (1999) 107; Comp. Phys. Commun. 147 (2002) 565]. Moreover, we study not only the scaling phenomena but also, assuming a finite number of hierarchy levels, the breaking of scaling and its dependence on control parameters. This seems to be important for studying empirical systems, all the more so as there are still no closed formulae describing this phenomenon except the one for truncated Lévy flights [Phys. Rev. Lett. 73 (1994) 2946]. Our formulation of the model made it possible to develop an efficient Monte Carlo algorithm [Physica A 264 (1999) 107; Comp. Phys. Commun. 147 (2002) 565] in which no MC step is lost.
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding the raw data generation and conversion steps from the user. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework. This allows developers to integrate additional third-party programs into MolIDE. http://dunbrack.fccc.edu/molide/molide.php rl_dunbrack@fccc.edu.
Finite cohesion due to chain entanglement in polymer melts.
Cheng, Shiwang; Lu, Yuyuan; Liu, Gengxin; Wang, Shi-Qing
2016-04-14
Three different types of experiments, quiescent stress relaxation, delayed rate-switching during stress relaxation, and elastic recovery after step strain, are carried out in this work to elucidate the existence of a finite cohesion barrier against free chain retraction in entangled polymers. Our experiments show that there is little hastened stress relaxation from step-wise shear up to γ = 0.7 and step-wise extension up to the stretching ratio λ = 1.5 at any time before or after the Rouse time. In contrast, a noticeable stress drop stemming from the built-in barrier-free chain retraction is predicted using the GLaMM model. In other words, the experiment reveals a threshold magnitude of step-wise deformation below which the stress relaxation follows identical dynamics whereas the GLaMM or Doi-Edwards model indicates a monotonic acceleration of the stress relaxation dynamics as a function of the magnitude of the step-wise deformation. Furthermore, a sudden application of startup extension during different stages of stress relaxation after a step-wise extension, i.e. the delayed rate-switching experiment, shows that the geometric condensation of entanglement strands in the cross-sectional area survives beyond the reptation time τd that is over 100 times the Rouse time τR. Our results point to the existence of a cohesion barrier that can prevent free chain retraction upon moderate deformation in well-entangled polymer melts.
Numerical Simulation with Experimental Validation of the Draping Behavior of Woven Fabrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, William; Pasupuleti, Praveen; Zhao, Selina
Woven fabric composites are extensively used in molding complex geometrical shapes due to their high conformability compared to other fabrics. Preforming is an important step in the overall process. In this step, the two-dimensional fabric is draped to become the three-dimensional shape of the part prior to resin injection. During preforming, the orientation of the tows may change significantly compared to the initial orientations. Accurate prediction of the tow orientations after molding is important for evaluating the structural performance of the final part. This paper investigates the fiber angle changes for carbon fiber woven fabrics during draping over a truncated pyramid tool designed and fabricated at the General Motors Research Labs. This study is a subset of the broader study conducted under the purview of a Department of Energy project awarded to GM to develop state-of-the-art computational tools for integrated manufacturing and structural performance prediction of carbon fiber composites. Fabric bending, picture frame testing, and bias-extension evaluations were carried out to determine the material parameters for these fabrics. The PAM-FORM computer program was used to model the draping behavior of these fabrics. Following deformation, fiber angle changes at different locations on the truncated pyramid were measured experimentally. The predicted angles matched the experimental results well as measured along the centerline and at several different locations on the deformed fabric. Details of the test methods used as well as the numerical results with various simulation parameters will be provided.
Wide step width reduces knee abduction moment of obese adults during stair negotiation.
Yocum, Derek; Weinhandl, Joshua T; Fairbrother, Jeffrey T; Zhang, Songning
2018-05-15
An increased likelihood of developing obesity-related knee osteoarthritis may be associated with increased peak internal knee abduction moments (KAbM). Increases in step width (SW) may act to reduce this moment. The purpose of this study was to determine the effects of increased SW on knee biomechanics during stair negotiation of healthy-weight and obese participants. Participants (24: 10 obese and 14 healthy-weight) used stairs and walked over level ground while walking at their preferred speed in two different SW conditions - preferred and wide (200% preferred). A 2 × 2 (group × condition) mixed model analysis of variance was performed to analyze differences between groups and conditions (p < 0.05). Increased SW increased the loading-response peak knee extension moment during descent and level gait, decreased loading-response KAbMs, knee extension and abduction range of motion (ROM) during ascent, and knee adduction ROM during descent. Increased SW increased loading-response peak mediolateral ground reaction force (GRF), increased peak knee abduction angle during ascent, and decreased peak knee adduction angle during descent and level gait. Obese participants experienced disproportionate changes in loading-response mediolateral GRF, KAbM and peak adduction angle during level walking, and peak knee abduction angle and ROM during ascent. Increased SW successfully decreased loading-response peak KAbM. Implications of this finding are that increased SW may decrease medial compartment knee joint loading, decreasing pain and reducing joint deterioration. Increased SW influenced obese and healthy-weight participants differently and should be investigated further. Copyright © 2018. Published by Elsevier Ltd.
Creating patient safety capacity in a nation's health system: A comparison between Israel and Canada
2012-01-01
Injuries to patients by the healthcare system (i.e., adverse events) are common and their impact on individuals and systems is considerable. Over the last decade, extensive efforts have been made worldwide to improve patient safety. Given the complexity and extent of the activities required to address the issue, coordinating and organizing them at a national level is likely beneficial. Whereas some capacity and expertise already exist in Israel, there is a considerable gap that needs to be filled. In this paper two countries, Canada and Israel, are examined and some of the essential steps for any country are considered. Possible immediate next steps for Israel are suggested. PMID:22913865
Image encryption using a synchronous permutation-diffusion technique
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi; Altameem, Ayman; Lee, Malrey
2017-03-01
In the past decade, interest in digital image security has increased among scientists. A synchronous permutation and diffusion technique is designed in order to protect gray-level image content while sending it over the Internet. To implement the proposed method, the two-dimensional plain image is converted to one dimension. Afterward, in order to reduce the sending process time, the permutation and diffusion steps for each pixel are performed at the same time. The permutation step uses a chaotic map and deoxyribonucleic acid (DNA) to permute a pixel, while diffusion employs a DNA sequence and DNA operators to encrypt the pixel. Experimental results and extensive security analyses have been conducted to demonstrate the feasibility and validity of this proposed image encryption method.
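A minimal sketch of the synchronous permutation-diffusion idea is given below (Python). It is illustrative only: a logistic map stands in for the paper's chaotic map, and a simple XOR keystream replaces the DNA encoding and DNA operators, so the per-pixel combination of permutation and diffusion in a single pass is the only aspect it is meant to show.

import numpy as np

def logistic_sequence(x0, r, n):
    # iterate the logistic map x <- r*x*(1-x) to obtain n chaotic values
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(image, x0=0.3141, r=3.99):
    flat = image.flatten().astype(np.uint8)            # 2-D plain image -> 1-D
    n = flat.size
    chaos = logistic_sequence(x0, r, 2 * n)
    perm = np.argsort(chaos[:n])                       # permutation order from the chaotic sequence
    keystream = (chaos[n:] * 256).astype(np.uint8)
    cipher = np.empty_like(flat)
    prev = np.uint8(0)
    for i in range(n):                                 # permutation and diffusion in the same pass
        p = flat[perm[i]]                              # permute: pick the next pixel out of order
        cipher[i] = np.uint8(p ^ keystream[i] ^ prev)  # diffuse: chain each pixel to the previous ciphertext
        prev = cipher[i]
    return cipher.reshape(image.shape), perm

cipher, perm = encrypt(np.random.randint(0, 256, (4, 4), dtype=np.uint8))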
The Influence of Task Complexity on Knee Joint Kinetics Following ACL Reconstruction
Schroeder, Megan J.; Krishnan, Chandramouli; Dhaher, Yasin Y.
2015-01-01
Background Previous research indicates that subjects with anterior cruciate ligament reconstruction exhibit abnormal knee joint movement patterns during functional activities like walking. While the sagittal plane mechanics have been studied extensively, less is known about the secondary planes, specifically with regard to more demanding tasks. This study explored the influence of task complexity on functional joint mechanics in the context of graft-specific surgeries. Methods In 25 participants (10 hamstring tendon graft, 6 patellar tendon graft, 9 matched controls), three-dimensional joint torques were calculated using a standard inverse dynamics approach during level walking and stair descent. The stair descent task was separated into two functionally different sub-tasks—step-to-floor and step-to-step. The differences in external knee moment profiles were compared between groups; paired differences between the reconstructed and non-reconstructed knees were also assessed. Findings The reconstructed knees, irrespective of graft type, typically exhibited significantly lower peak knee flexion moments compared to control knees during stair descent, with the differences more pronounced in the step-to-step task. Frontal plane adduction torque deficits were graft-specific and limited to the hamstring tendon knees during the step-to-step task. Internal rotation torque deficits were also primarily limited to the hamstring tendon graft group during stair descent. Collectively, these results suggest that task complexity was a primary driver of differences in joint mechanics between anterior cruciate ligament reconstructed individuals and controls, and such differences were more pronounced in individuals with hamstring tendon grafts. Interpretation The mechanical environment experienced in the cartilage during repetitive, cyclical tasks such as walking and other activities of daily living has been argued to contribute to the development of degenerative changes to the joint and ultimately osteoarthritis. Given the task-specific and graft-specific differences in joint mechanics detected in this study, care should be taken during the rehabilitation process to mitigate these changes. PMID:26101055
Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi
2017-07-21
Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step Golden Gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon-based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports simple assembly of combinatorial libraries and hierarchical assembly for production of larger multigenetic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.
Knee Joint Kinematics and Kinetics During a Lateral False-Step Maneuver
Golden, Grace M.; Pavol, Michael J.; Hoffman, Mark A.
2009-01-01
Abstract Context: Cutting maneuvers have been implicated as a mechanism of noncontact anterior cruciate ligament (ACL) injuries in collegiate female basketball players. Objective: To investigate knee kinematics and kinetics during running when the width of a single step, relative to the path of travel, was manipulated, a lateral false-step maneuver. Design: Crossover design. Setting: University biomechanics laboratory. Patients or Other Participants: Thirteen female collegiate basketball athletes (age = 19.7 ± 1.1 years, height = 172.3 ± 8.3 cm, mass = 71.8 ± 8.7 kg). Intervention(s): Three conditions: normal straight-ahead running, lateral false step of width 20% of body height, and lateral false step of width 35% of body height. Main Outcome Measure(s): Peak angles and internal moments for knee flexion, extension, abduction, adduction, internal rotation, and external rotation. Results: Differences were noted among conditions in peak knee angles (flexion [P < .01], extension [P = .02], abduction [P < .01], and internal rotation [P < .01]) and peak internal knee moments (abduction [P < .01], adduction [P < .01], and internal rotation [P = .03]). The lateral false step of width 35% of body height was associated with larger peak flexion, abduction, and internal rotation angles and larger peak abduction, adduction, and internal rotation moments than normal running. Peak flexion and internal rotation angles were also larger for the lateral false step of width 20% of body height than for normal running, whereas peak extension angle was smaller. Peak internal rotation angle increased progressively with increasing step width. Conclusions: Performing a lateral false-step maneuver resulted in changes in knee kinematics and kinetics compared with normal running. The differences observed for lateral false steps were consistent with proposed mechanisms of ACL loading, suggesting that lateral false steps represent a hitherto neglected mechanism of noncontact ACL injury. PMID:19771289
Posse, Viktor; Hoberg, Emily; Dierckx, Anke; Shahzad, Saba; Koolmeister, Camilla; Larsson, Nils-Göran; Wilhelmsson, L. Marcus; Hällberg, B. Martin; Gustafsson, Claes M.
2014-01-01
Mammalian mitochondrial transcription is executed by a single subunit mitochondrial RNA polymerase (Polrmt) and its two accessory factors, mitochondrial transcription factors A and B2 (Tfam and Tfb2m). Polrmt is structurally related to single-subunit phage RNA polymerases, but it also contains a unique N-terminal extension (NTE) of unknown function. We here demonstrate that the NTE functions together with Tfam to ensure promoter-specific transcription. When the NTE is deleted, Polrmt can initiate transcription in the absence of Tfam, both from promoters and non-specific DNA sequences. Additionally, in the presence of Tfam and a mitochondrial promoter, the NTE-deleted mutant has an even higher transcription activity than wild-type polymerase, indicating that the NTE functions as an inhibitory domain. Our studies lead to a model according to which Tfam specifically recruits wild-type Polrmt to promoter sequences, relieving the inhibitory effect of the NTE, as a first step in transcription initiation. In the second step, Tfb2m is recruited into the complex and transcription is initiated. PMID:24445803
Further Progress Applying the Generalized Wigner Distribution to Analysis of Vicinal Surfaces
NASA Astrophysics Data System (ADS)
Einstein, T. L.; Richards, Howard L.; Cohen, S. D.
2001-03-01
Terrace width distributions (TWDs) can be well fit by the generalized Wigner distribution (GWD), generally better than by conventional Gaussians, which thus offers a convenient way to estimate the dimensionless elastic repulsion strength Ã from σ^2, the TWD variance [T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)]. The GWD σ^2 accurately reproduces values for the two exactly soluble cases at small Ã and in the asymptotic limit. Taxing numerical simulations show that the GWD σ^2 interpolates well between these limits. Extensive applications have been made to experimental data, especially on Cu [M. Giesen and T.L. Einstein, Surface Sci. 449, 191 (2000)]. Recommended analysis procedures are catalogued [H.L. Richards, S.D. Cohen, T.L. Einstein, and M. Giesen, Surf. Sci. 453, 59 (2000)]. Extensions of the GWD for multistep distributions are tested, with good agreement for second-neighbor distributions, less good for third [T.L. Einstein, H.L. Richards, S.D. Cohen, and O. Pierre-Louis, Proc. ISSI-PDSC2000, cond-mat/0012xxxxx]. Alternatively, step-step correlation functions, about which there is more theoretical information, should be measured.
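For reference (this is the standard form used in the cited papers, not new material from this abstract), the GWD for the normalized terrace width s = \ell/\langle\ell\rangle is

P_\varrho(s) = a_\varrho\, s^{\varrho} \exp\!\left(-b_\varrho s^{2}\right),

where a_\varrho and b_\varrho are fixed by normalization and by \langle s\rangle = 1, and the exponent is related to the dimensionless elastic repulsion strength through Ã = \varrho(\varrho-2)/4, equivalently \varrho = 1 + \sqrt{1+4Ã}, so that the variance σ^2 of the fitted GWD yields an estimate of Ã.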
[Extensive treatment of teacher's voice disorders in health spa].
Niebudek-Bogusz, Ewa; Marszałek, Sławomir; Woźnicka, Ewelina; Minkiewicz, Zofia; Hima, Joanna; Sliwińska-Kowalska, Mariola
2010-01-01
Treatment in a health spa with proper infrastructure and professional medical care can provide optimal conditions for intensive voice rehabilitation, especially for people with occupational voice disorders. Teachers are the most numerous group of people with voice disorders. In Poland, they have an opportunity to take care of, or regain, their health during a one-year paid leave. The authors describe a multi-specialist model of extensive treatment of voice disorders in a health spa, including holistic and interdisciplinary procedures in occupational dysphonia. Apart from balneotherapy, the spa treatment includes vocal training exercises, relaxation exercises, elements of physiotherapy including manual therapy of the larynx, and psychological workshops. The voice rehabilitation program, already organized for two groups of teachers, has been received with great satisfaction by this occupational group. The implementation of a model program of extensive treatment of voice disorders in a health spa should become one of the steps aimed at preventing occupational voice diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feygelman, Vladimir; Department of Physics, University of Manitoba, Winnipeg, MB; Mandelzweig, Yuri
2015-01-15
Matching electron beams without secondary collimators (applicators) were used for treatment of extensive, recurrent chest-wall carcinoma. Due to the wide penumbra of such beams, the homogeneity of the dose distribution at and around the junction point is clinically acceptable and relatively insensitive to positional errors. Specifically, the dose around the junction point is homogeneous to within ±4% as calculated from beam profiles, while a positional error of 1 cm leaves this number essentially unchanged. The experimental isodose distribution in an anthropomorphic phantom supports this conclusion. Two electron beams with wide penumbra were used to cover the desired treatment area with satisfactory dose homogeneity. The technique is relatively simple yet clinically useful and can be considered a viable alternative for treatment of extensive chest-wall disease. Steps are suggested to make this technique more universal.
Production and Beyond: A Defining Moment for Public Sector Extension
ERIC Educational Resources Information Center
Rivera, William M.
2009-01-01
Two imperatives form the basis of the present paper. The first is the market-driven imperative, vital to production and value-chain development. The second is the knowledge imperative, central to the advancement of human capacity and institutional development. In view of these two imperatives, this paper argues for overhaul in extension toward a…
Regulation of cerebral cortex development by Rho GTPases: insights from in vivo studies
Azzarelli, Roberta; Kerloch, Thomas; Pacary, Emilie
2015-01-01
The cerebral cortex is the site of higher human cognitive and motor functions. Histologically, it is organized into six horizontal layers, each containing unique populations of molecularly and functionally distinct excitatory projection neurons and inhibitory interneurons. The stereotyped cellular distribution of cortical neurons is crucial for the formation of functional neural circuits and it is predominantly established during embryonic development. Cortical neuron development is a multiphasic process characterized by sequential steps of neural progenitor proliferation, cell cycle exit, neuroblast migration and neuronal differentiation. This series of events requires an extensive and dynamic remodeling of the cell cytoskeleton at each step of the process. As major regulators of the cytoskeleton, the family of small Rho GTPases has been shown to play essential functions in cerebral cortex development. Here we review in vivo findings that support the contribution of Rho GTPases to cortical projection neuron development and we address their involvement in the etiology of cerebral cortex malformations. PMID:25610373
Policy Implications of Air Quality Research
NASA Astrophysics Data System (ADS)
Sheinbaum, C.
2004-12-01
While an integrated assessment approach will be required to achieve and sustain improvements in the air quality of the Mexico City Metropolitan Area (MCMA), policy strategies must be based on a solid understanding of the pollutant emissions and atmospheric processes that lead to unacceptable levels of air pollution. The required level of understanding can only be achieved by comprehensive atmospheric measurements followed by a coordinated atmospheric modeling program. The innovative, two-phase atmospheric measurement program, a collaborative effort between the Massachusetts Institute of Technology and the Mexican Metropolitan Environmental Commission, with exploratory measurements in February 2002 and extensive measurements from late March through early May of 2003, was an important step towards meeting these requirements. Although the extensive data sets from the two measurement programs are still being analyzed by the investigators, preliminary analysis efforts have yielded important insights into the nature and extent of the air pollution problem in the MCMA, which in turn will have important policy implications.
2015-05-16
synthesis of iron magnetic nanoparticles is being investigated (Appendix A; Scheme IV). In the first step, precursor iron(III) chloride nanoparticles...and other methods. Currently, we are developing a two-step scheme for the synthesis of esters that will require distillation and/or column...recognize the link between them. We are developing for the above purpose, the microwave-assisted, two-step synthesis of high boiling point esters. The
A Coding Method for Efficient Subgraph Querying on Vertex- and Edge-Labeled Graphs
Zhu, Lei; Song, Qinbao; Guo, Yuchen; Du, Lei; Zhu, Xiaoyan; Wang, Guangtao
2014-01-01
Labeled graphs are widely used to model complex data in many domains, so subgraph querying has been attracting more and more attention from researchers around the world. Unfortunately, subgraph querying is very time consuming since it involves subgraph isomorphism testing that is known to be an NP-complete problem. In this paper, we propose a novel coding method for subgraph querying that is based on Laplacian spectrum and the number of walks. Our method follows the filtering-and-verification framework and works well on graph databases with frequent updates. We also propose novel two-step filtering conditions that can filter out most false positives and prove that the two-step filtering conditions satisfy the no-false-negative requirement (no dismissal in answers). Extensive experiments on both real and synthetic graphs show that, compared with six existing counterpart methods, our method can effectively improve the efficiency of subgraph querying. PMID:24853266
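A minimal sketch of the filtering-and-verification idea (Python with networkx) follows. The filter here uses only a simple walk-count condition and is purely illustrative; the paper's actual coding additionally uses the Laplacian spectrum, and its two-step filtering conditions are the ones proven to admit no false negatives.

import networkx as nx
import numpy as np

def walk_counts(g, k=3):
    # total number of walks of length 1..k, used as a cheap graph signature
    a = nx.to_numpy_array(g)
    counts, p = [], np.eye(a.shape[0])
    for _ in range(k):
        p = p @ a
        counts.append(p.sum())
    return np.array(counts)

def may_contain(db_graph, query_graph, k=3):
    # filtering step: necessary conditions only, so no true match is dismissed
    if query_graph.number_of_nodes() > db_graph.number_of_nodes():
        return False
    # a subgraph cannot contain more walks of any length than its host graph
    return bool(np.all(walk_counts(query_graph, k) <= walk_counts(db_graph, k)))

def subgraph_query(database, query_graph):
    # verification step: VF2 subgraph isomorphism on the surviving candidates
    GM = nx.algorithms.isomorphism.GraphMatcher
    return [g for g in database
            if may_contain(g, query_graph)
            and GM(g, query_graph).subgraph_is_isomorphic()]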
Design improvement of a pump wear ring labyrinth seal
NASA Technical Reports Server (NTRS)
Rhode, David L.; Morrison, G. L.; Ko, S. H.; Waughtal, S. P.
1987-01-01
The investigation was successful in obtaining two improved designs for the impeller wear ring seal of the liquid hydrogen turbopump of interest. A finite difference computer code was used extensively in a parametric computational study to determine a cavity configuration with high flow resistance due to turbulence dissipation. These two designs, along with the one currently used, were fabricated and tested. The improved designs were denoted Type O and Type S. The measurements showed that Type O and Type S give 67 and 30 percent reductions in leakage over the current design, respectively. It was found that the number of cavities, the step height, and the presence of a small stator groove are quite important design features. Also, the tooth thickness is of some significance. Finally, the tooth height and an additional large cavity cut out from the stator (upstream of the step) are of negligible importance.
Medeiros, Michelle; Wanderlind, Eduardo H; Mora, José R; Moreira, Raphaell; Kirby, Anthony J; Nome, Faruk
2013-10-07
Hydroxylamine reacts as an oxygen nucleophile, most likely via its ammonia oxide tautomer, towards both phosphate di- and triesters of 2-hydroxypyridine. But the reactions are very different. The product of the two-step reaction with the triester TPP is trapped by the NH2OH present in solution to generate diimide, identified from its expected disproportionation and trapping products. The reaction with H3N(+)-O(-) shows general base catalysis, which calculations show is involved in the breakdown of the phosphorane addition-intermediate of a two-step reaction. The reactivity of the diester anion DPP(-) is controlled by its more basic pyridyl N. Hydroxylamine reacts preferentially with the substrate zwitterion DPP(±) to displace first one then a second 2-pyridone, in concerted S(N)2(P) reactions, forming O-phosphorylated products which are readily hydrolysed to inorganic phosphate. The suggested mechanisms are tested and supported by extensive theoretical calculations.
Collaborative Distributed Scheduling Approaches for Wireless Sensor Network
Niu, Jianjun; Deng, Zhidong
2009-01-01
Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. The approaches enable nodes to learn the behavior information of their environment collaboratively and integrate sleep scheduling with transmission scheduling to reduce energy consumption. We analyze the adaptability and practicality features of the CDSAs. The simulation results show that the two proposed approaches can effectively reduce nodes' energy consumption. Some other characteristics of the CDSAs, like buffer occupation and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed. The test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491
Sequence dependency of canonical base pair opening in the DNA double helix
Villa, Alessandra
2017-01-01
The flipping-out of a DNA base from the double helical structure is a key step of many cellular processes, such as DNA replication, modification and repair. Base pair opening is the first step of base flipping and the exact mechanism is still not well understood. We investigate sequence effects on base pair opening using extensive classical molecular dynamics simulations targeting the opening of 11 different canonical base pairs in two DNA sequences. Two popular biomolecular force fields are applied. To enhance sampling and calculate free energies, we bias the simulation along a simple distance coordinate using a newly developed adaptive sampling algorithm. The simulation is guided back and forth along the coordinate, allowing for multiple opening pathways. We compare the calculated free energies with those from an NMR study and check assumptions of the model used for interpreting the NMR data. Our results further show that the neighboring sequence is an important factor for the opening free energy, but also indicates that other sequence effects may play a role. All base pairs are observed to have a propensity for opening toward the major groove. The preferred opening base is cytosine for GC base pairs, while for AT there is sequence dependent competition between the two bases. For AT opening, we identify two non-canonical base pair interactions contributing to a local minimum in the free energy profile. For both AT and CG we observe long-lived interactions with water and with sodium ions at specific sites on the open base pair. PMID:28369121
NASA Technical Reports Server (NTRS)
1984-01-01
The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.
Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots
NASA Astrophysics Data System (ADS)
Chellal, R.; Cuvillon, L.; Laroche, E.
2017-04-01
This paper presents methodologies for the identification and control of 6-degrees of freedom (6-DoF) cable-driven parallel robots (CDPRs). First a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently and prior to the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear-coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper makes an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated by simulations and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
Szatmari, I; Tókés, S; Dunn, C B; Bardos, T J; Aradi, J
2000-06-15
A polymerase chain reaction (PCR)-based radioactive telomerase assay was developed in our laboratory which is quantitative and does not require electrophoretic evaluation (designated as TP-TRAP; it utilizes two reverse primers). The main steps of the assay include (1) extension of a 20-mer oligonucleotide substrate (MTS) by telomerase, (2) amplification of the telomerase products in the presence of [(3)H]dTTP using the substrate oligonucleotide and two reverse primers (RPC3, 38 mer; RP, 20 mer), (3) isolation of the amplified radioactive dsDNA by precipitation and filtration, (4) determination of the radioactivity of the acid-insoluble DNA. The length of the telomerase products does not increase on amplification. This valuable feature of the assay is achieved by utilization of the two reverse primers and a highly specific PCR protocol. The assay is linear, accurate, and suitable for cell-biological studies where slight quantitative differences in telomerase activity must be detected. The assay is also suitable for screening and characterization of telomerase inhibitors, as shown with a chemically modified oligonucleotide reverse transcriptase inhibitor [(s(4)dU)(35)]. Copyright 2000 Academic Press.
Purdue Extension: Employee Engagement and Leadership Style
ERIC Educational Resources Information Center
Abbott, Angela R.
2017-01-01
The purpose of this quantitative study was to assess the Purdue Extension county directors' level of engagement and leadership style and to examine the relationship between these two variables. The study aimed to inform a professional development training program for all Purdue Extension county extension directors. Survey data were collected from…
Hu, Nvdan; Gong, Yulong; Wang, Xinchao; Lu, Yao; Peng, Guangyue; Yang, Long; Zhang, Shengtao; Luo, Ziping; Li, Hongru; Gao, Fang
2015-11-01
A series of new asymmetric chromophores containing aromatic substituents and possessing excellent π-extension in space were prepared through multi-step routes. The one-photon and two-photon spectral properties of these new chromophores could be finely and simultaneously tuned by these substituents. A linear correlation of the wave numbers of the one-photon absorption and emission maxima with the Hammett parameters of these substituents was presented. The near-infrared two-photon absorption-emission integrated areas of the target chromophores correlated linearly with the Hammett constants of the substituted groups.
Physical modeling of Tibetan bowls
NASA Astrophysics Data System (ADS)
Antunes, Jose; Inacio, Octavio
2004-05-01
Tibetan bowls produce rich penetrating sounds, used in musical contexts and to induce a state of relaxation for meditation or therapy purposes. To understand the dynamics of these instruments under impact and rubbing excitation, we developed a simulation method based on the modal approach, following our previous papers on physical modeling of plucked/bowed strings and impacted/bowed bars. This technique is based on a compact representation of the system dynamics, in terms of the unconstrained bowl modes. Nonlinear contact/friction interaction forces, between the exciter (puja) and the bowl, are computed at each time step and projected on the bowl modal basis, followed by step integration of the modal equations. We explore the behavior of two different-sized bowls, for extensive ranges of excitation conditions (contact/friction parameters, normal force, and tangential puja velocity). Numerical results and experiments show that various self-excited motions may arise depending on the playing conditions and, mainly, on the contact/friction interaction parameters. Indeed, triggering of a given bowl modal frequency mainly depends on the puja material. Computed animations and experiments demonstrate that self-excited modes spin, following the puja motion. Accordingly, the sensed pressure field pulsates, with frequency controlled by the puja spinning velocity and the spatial pattern of the singing mode.
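A minimal sketch of the modal time-stepping idea is given below (Python). Everything numerical here is a placeholder: the two modes, damping values, friction law, and puja parameters are invented for illustration and do not reproduce the authors' measured bowl data or their exact contact/friction model.

import numpy as np

def rubbed_bowl(modes, dt=1.0 / 44100, steps=44100, F_N=2.0, mu=0.4, v_puja=0.3):
    # explicit time-stepping of the modal equations q'' + 2*z*w*q' + w^2*q = phi*f
    w = 2 * np.pi * np.array([m[0] for m in modes])   # modal angular frequencies
    z = np.array([m[1] for m in modes])               # modal damping ratios
    phi = np.array([m[2] for m in modes])             # mode-shape values at the contact point
    q, qd = np.zeros(len(modes)), np.zeros(len(modes))
    out = np.zeros(steps)
    for n in range(steps):
        xd = phi @ qd                                  # bowl surface velocity at the contact point
        v_rel = v_puja - xd                            # relative puja/bowl sliding velocity
        f = mu * F_N * np.tanh(50.0 * v_rel)           # regularized Coulomb friction force
        qdd = phi * f - 2 * z * w * qd - w ** 2 * q    # friction force projected on the modal basis
        qd += dt * qdd                                 # semi-implicit Euler update of the modal states
        q += dt * qd
        out[n] = phi @ q
    return out

# e.g. two modes given as (frequency in Hz, damping ratio, mode shape at the contact point)
signal = rubbed_bowl([(440.0, 1e-4, 1.0), (1280.0, 2e-4, 0.8)])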
Event time analysis of longitudinal neuroimage data.
Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce
2014-08-15
This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline. Copyright © 2014 Elsevier Inc. All rights reserved.
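A minimal Python sketch of the two-step analysis follows, assuming a long-format table with hypothetical column names (subject, years, thickness, start, stop, event); it uses statsmodels for the LME step and lifelines for the extended Cox step, and is only an illustrative analogue of the univariate case, not the authors' FreeSurfer-based or spatially extended implementation.

import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxTimeVaryingFitter

df = pd.read_csv("longitudinal_measures.csv")    # placeholder input file

# Step 1: linear mixed effects model for the serial imaging measurements
lme = smf.mixedlm("thickness ~ years", df, groups=df["subject"],
                  re_formula="~years").fit()
df["thickness_hat"] = lme.fittedvalues           # subject-specific fitted trajectories

# Step 2: extended (time-varying) Cox regression relating the time-dependent
# imaging measurement to the timing of the clinical event of interest
ctv = CoxTimeVaryingFitter()
ctv.fit(df[["subject", "start", "stop", "event", "thickness_hat"]],
        id_col="subject", start_col="start", stop_col="stop", event_col="event")
ctv.print_summary()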
Results of step-cut medial malleolar osteotomy.
Thordarson, David B; Kaku, Shawn K
2006-12-01
Treatment of certain complex ankle pathology, such as a talar body fracture or an osteochondral lesion requiring grafting, can necessitate medial malleolar osteotomy for adequate operative exposure. This paper evaluates the step-cut medial malleolar osteotomy for exposure of the ankle joint. Fourteen patients with intra-articular pathology, including talar body fractures or osteochondral lesions necessitating extensive intra-articular exposure, had step-cut malleolar osteotomy. The average age of the patients was 37 (range 20-90) years, and the average follow-up was 8 months. All 14 patients had an uncomplicated intraoperative course, with excellent exposure of the ankle joint. All patients had prompt healing of the osteotomy by 6 weeks after surgery without loss of reduction. None of the patients had pain at the osteotomy site. Step-cut medial malleolar osteotomy is an excellent, reproducible method for extensive exposure of the talar dome.
X: a case study of a Swedish neo-Nazi and his reintegration into Swedish society.
Stern, Jessica Eve
2014-01-01
This article provides a case study of a Swedish neo-Nazi and the reintegration program being provided to him. During an extensive interview that took place over two days, he told a researcher that he was interested in having a violent adventure, and that he was drawn to Nazi symbols and history more than their creed. In comparison with ordinary crime, terrorist crime is quite rare, and access to detailed case studies is rarer still, making the development of a prospective risk-assessment instrument extremely difficult. Researchers' "thick descriptions" of their encounters with terrorists can help us to develop putative risk factors which can then be tested against controls. The article concludes by arguing that just as there is no single pathway into or out of terrorism, there can be no single reintegration program. A series of thick descriptions is a first step toward understanding what leads individuals into and out of terrorism. Copyright © 2014 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Peterson, Mark
This extension education publication contains insights and tools to help community members develop a strategic vision and action plan for their community. Presented first are an executive summary and an introduction that includes 10 reasons for a strategic visioning process. The first section, which deals with harnessing the power of vision,…
Zhang, Junxiang; Kang, Lauren J; Parker, Timothy C; Blakey, Simon B; Luscombe, Christine K; Marder, Seth R
2018-04-16
Organic electronics is a rapidly growing field driven in large part by the synthesis of π-conjugated molecules and polymers. Traditional aryl cross-coupling reactions such as the Stille and Suzuki have been used extensively in the synthesis of π-conjugated molecules and polymers, but the synthesis of intermediates necessary for traditional cross-couplings can include multiple steps with toxic and hazardous reagents. Direct arylation through C-H bond activation has the potential to reduce the number of steps and hazards while being more atom-economical. Within the Center for Selective C-H Functionalization (CCHF), we have been developing C-H activation methodology for the synthesis of π-conjugated materials of interest, including direct arylation of difficult-to-functionalize electron acceptor intermediates and living polymerization of π-conjugated polymers through C-H activation.
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1995-01-01
A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes, and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving upper plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results will be shown for both cubic and curved cavities.
Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.
Cvitaš, Marko T
2018-03-13
The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.
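For orientation (standard definitions rather than details specific to this paper), the MAP is the discretized path of N beads x_1, …, x_N connecting the two wells that minimizes the Euclidean action

S(x_1,\dots,x_N) \;=\; \sum_{k=1}^{N-1} \frac{m\,(x_{k+1}-x_k)^2}{2\,\Delta\tau} \;+\; \sum_{k=1}^{N} \Delta\tau\, V(x_k),

and the instanton approximation to the tunneling splitting scales as \Delta \propto e^{-S_{\mathrm{inst}}/\hbar} times a fluctuation prefactor. The quadratic string method described above replaces V around each bead by a local quadratic model, V(x) \approx V(x_k) + g_k^{\mathsf T}(x-x_k) + \tfrac{1}{2}(x-x_k)^{\mathsf T} B_k\,(x-x_k), with quasi-Newton updated Hessians B_k, so that many action-minimization steps can be taken per potential/gradient evaluation.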
Fuglsang, Anders
2012-08-01
This is a review of the current regulatory requirements associated with development and submission of abridged dossiers for locally acting inhalation drugs intended for the treatment of asthma and chronic obstructive pulmonary disease. The current EU law does not provide for submission of such products as generics due to the definition of bioequivalence and bioavailability; instead they must be submitted as hybrids. A guideline from 2009 is available that suggests a stepwise approach toward approval. An applicant should first consider the degree of in vitro match with the reference product; provided that the match is extensive, approval may be granted. If the in vitro match cannot be proven, the next step is comparison of lung deposition and systemic exposure. If this match is proven, approval may be granted; otherwise, the final step is pharmacodynamic evaluation. In the United States, submission as a generic is possible, but only a single specific guidance document from 1989 is in force. It describes in vitro requirements for comparison of albuterol and metaproterenol pressurized metered dose inhalers. Applicants are encouraged to seek dialogue with regulators prior to and during development. Although parallel scientific advice procedures have been established between the US Food and Drug Administration and the European Medicines Agency, the two authorities give independent and individual advice.
Generalized Models for Rock Joint Surface Shapes
Du, Shigui; Hu, Yunjin; Hu, Xiaofei
2014-01-01
Generalized models of joint surface shapes are the foundation for mechanism studies on the mechanical effects of rock joint surface shapes. Based on extensive field investigations of rock joint surface shapes, generalized models for three level shapes named macroscopic outline, surface undulating shape, and microcosmic roughness were established through statistical analyses of 20,078 rock joint surface profiles. The relative amplitude of profile curves was used as a borderline for the division of different level shapes. The study results show that the macroscopic outline has three basic features such as planar, arc-shaped, and stepped; the surface undulating shape has three basic features such as planar, undulating, and stepped; and the microcosmic roughness has two basic features such as smooth and rough. PMID:25152901
Peute, L W; Knijnenburg, S L; Kremer, L C; Jaspers, M W M
2015-01-01
The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). To assess whether the implementation of a limited subset of the WDMHC-framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website by TA. The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Based on the subset of the WDMHC framework we were able to deliver a website that closely matched the expectancy of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations.
Arima, Hideyuki; Yamato, Yu; Hasegawa, Tomohiko; Kobayashi, Sho; Yoshida, Go; Yasuda, Tatsuya; Banno, Tomohiro; Oe, Shin; Mihara, Yuki; Togawa, Daisuke; Matsuyama, Yukihiro
2017-10-01
Longitudinal cohort. The present study aimed to document changes in posture and lower extremity kinematics during gait in patients with adult spinal deformity (ASD) after extensive corrective surgery. Standing radiographic parameters are typically used to evaluate patients with ASD. A discrepancy between preoperative walking and standing posture has previously been reported in patients with ASD, but comparisons between before and after surgery were not included. We therefore considered that pre- and postoperative evaluation of patients with ASD should include gait analysis. Thirty-nine patients with ASD (5 men, 34 women; mean age, 71.0 ± 6.1 years) who underwent posterior corrective fixation surgeries from the thoracic spine to the pelvis were included. A 4-m walk was recorded and analyzed. Sagittal balance while walking was calculated as the angle between the plumb line on the side and the line connecting the greater trochanter and the pinna while walking (i.e., the gait-trunk tilt angle [GTA]). We measured maximum knee extension angle during one gait cycle, step length (cm), and walking speed (m/min). Radiographic parameters were also measured. The mean GTA and the mean maximum knee extension angle significantly improved from 13.4° to 6.4° and from -13.3° to -9.4° (P < 0.001 and P = 0.006), respectively. The mean step length improved from 40.4 to 43.1 cm (P = 0.049), but there was no significant change in walking speed (38.4 to 41.5 m/min, P = 0.105). Postoperative GTA, maximum knee extension angle and step length correlated with postoperative pelvic incidence minus lumbar lordosis (r = 0.324, P = 0.044; r = -0.317, P = 0.049; r = -0.416, P = 0.008, respectively). Our results suggest that postoperative posture, maximum knee extension angle, and step length during gait in patients with ASD improved in proportion to how much correction of the sagittal spinal deformity was achieved. Level of Evidence: 3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunsperger, Heather M.; Randhawa, Tejinder; Cattolico, Rose Ann
2015-02-10
Two non-homologous, isofunctional enzymes catalyze the penultimate step of chlorophyll a synthesis in oxygenic photosynthetic organisms such as cyanobacteria, eukaryotic algae and land plants: the light independent (LIPOR) and light-dependent (POR) protochlorophyllide oxidoreductases. Whereas the distribution of these enzymes in cyanobacteria and land plants is well understood, the presence, loss, duplication, and replacement of these genes have not been surveyed in the polyphyletic and remarkably diverse eukaryotic algal lineages.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
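As a point of reference for how the formal order of accuracy of such spatial differencing is verified in practice, the sketch below measures the observed convergence rate of generic central-difference first-derivative stencils on a periodic grid. These stencils and the test function are illustrative only and are not the algorithms of the paper.

```python
# Illustrative only: verify the formal order of accuracy of central-difference
# first-derivative stencils on a periodic grid. These generic stencils are not
# the schemes of the paper; they merely show how spatial order is measured.
import numpy as np

# Central-difference coefficients for d/dx (offsets are symmetric about 0).
STENCILS = {
    2: {-1: -0.5, 1: 0.5},
    4: {-2: 1/12, -1: -2/3, 1: 2/3, 2: -1/12},
    6: {-3: -1/60, -2: 3/20, -1: -3/4, 1: 3/4, 2: -3/20, 3: 1/60},
}

def derivative(u, dx, order):
    """Apply a periodic central-difference stencil of the given order."""
    du = np.zeros_like(u)
    for offset, coeff in STENCILS[order].items():
        du += coeff * np.roll(u, -offset)
    return du / dx

for order in STENCILS:
    errors = []
    for n in (32, 64):
        x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        dx = x[1] - x[0]
        err = np.max(np.abs(derivative(np.sin(x), dx, order) - np.cos(x)))
        errors.append(err)
    observed = np.log2(errors[0] / errors[1])  # halving dx: error drops by 2**order
    print(f"nominal order {order}: observed order {observed:.2f}")
```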
NASA Spinoff Article: Automated Procedures To Improve Safety on Oil Rigs
NASA Technical Reports Server (NTRS)
Garud, Sumedha
2013-01-01
On May 11th, 2013, two astronauts emerged from the interior of the International Space Station (ISS) and worked their way toward the far end of the spacecraft. Over the next five and a half hours, the two replaced an ammonia pump that had developed a significant leak a few days before. On the ISS, ammonia serves the vital role of cooling components; in this case, one of the station's eight solar arrays. Throughout the extravehicular activity (EVA), the astronauts stayed in constant contact with mission control: every movement, every action strictly followed a carefully planned set of procedures to maximize crew safety and the chances of success. Though the leak had come as a surprise, NASA was prepared to handle it swiftly thanks in part to the thousands of procedures that have been written to cover every aspect of the ISS's operations. The ISS is not unique in this regard: every NASA mission requires well-written procedures, or detailed lists of step-by-step instructions, that cover how to operate equipment in any scenario, from normal operations to the challenges created by malfunctioning hardware or software. Astronauts and mission control train and drill extensively in procedures to ensure they know what the proper procedures are and when they should be used. These procedures used to be exclusively written on paper, but over the past decade, NASA has transitioned to digital formats. Electronic-based documentation simplifies storage and use, allowing astronauts and flight controllers to find instructions more quickly and display them through a variety of media. Electronic procedures are also a crucial step toward automation: once instructions are digital, procedure display software can be designed to assist in authoring, reviewing, and even executing them.
Real payoffs and virtual trading in agent based market models
NASA Astrophysics Data System (ADS)
Ferreira, Fernando F.; Marsili, Matteo
2005-01-01
The $-Game was recently introduced as an extension of the Minority Game. In this paper we compare this model with the well-known Minority Game and Majority Game models. Due to the inter-temporal nature of the market payoff, we introduce a two-step transaction with single and mixed groups of interacting traders. When the population is composed of two different groups of $-traders, they show an anti-imitative behavior. However, when they interact with minority or majority players the $-population imitates the usual behavior of these players. Finally, we discuss how these models contribute to clarify the market mechanism.
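For orientation, the sketch below plays a few rounds of the standard Minority Game, which is the baseline these market models extend. The $-Game's inter-temporal payoff is not reproduced here, and the number of agents, memory length and strategies per agent are illustrative choices, not values from the paper.

```python
# A minimal Minority Game round, for orientation only; the $-Game studied in the
# paper differs in its inter-temporal payoff, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 101, 3, 2                      # odd number of agents, memory, strategies per agent
n_hist = 2 ** M                          # number of possible history strings
strategies = rng.choice([-1, 1], size=(N, S, n_hist))   # fixed lookup tables
scores = np.zeros((N, S))                # virtual scores of each strategy
history = rng.integers(n_hist)           # encoded recent history of winning sides

for t in range(200):
    best = scores.argmax(axis=1)                     # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    attendance = actions.sum()
    minority_side = -np.sign(attendance)             # the less crowded side wins
    # Reward every strategy that would have predicted the minority side.
    scores += (strategies[:, :, history] == minority_side)
    # Update the public history bit string with the winning side (1 if +1 won).
    history = ((history << 1) | (minority_side > 0)) % n_hist

print("final attendance:", attendance)
```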
Cooperative Extension as a Framework for Health Extension: The Michigan State University Model
Dwyer, Jeffrey W.; Contreras, Dawn; Tiret, Holly; Newkirk, Cathy; Carter, Erin; Cronk, Linda
2017-01-01
Problem: The Affordable Care Act charged the Agency for Healthcare Research and Quality to create the Primary Care Extension Program, but did not fund this effort. The idea to work through health extension agents to support health care delivery systems was based on the nationally known Cooperative Extension System (CES). Instead of creating new infrastructure in health care, the CES is an ideal vehicle for increasing health-related research and primary care delivery. Approach: The CES, a long-standing component of the land-grant university system, features a sustained infrastructure for providing education to communities. The Michigan State University (MSU) Model of Health Extension offers another means of developing a National Primary Care Extension Program that is replicable in part because of the presence of the CES throughout the United States. A partnership between the MSU College of Human Medicine and MSU Extension formed in 2014, emphasizing the promotion and support of human health research. The MSU Model of Health Extension includes the following strategies: building partnerships, preparing MSU Extension educators for participation in research, increasing primary care patient referrals and enrollment in health programs, and exploring innovative funding. Outcomes: Since the formation of the MSU Model of Health Extension, researchers and extension professionals have made 200+ connections, and grants have afforded savings in salary costs. Next Steps: The MSU College of Human Medicine and MSU Extension partnership can serve as a model to promote health partnerships nationwide between CES services within land-grant universities and academic health centers or community-based medical schools. PMID:28353501
Development of a low-cost soil moisture sensor for in-situ data collection by citizen scientists
NASA Astrophysics Data System (ADS)
Rajasekaran, E.; Jeyaram, R.; Lohrli, C.; Das, N.; Podest, E.; Hovhannesian, H.; Fairbanks, G.
2017-12-01
Soil moisture (SM) is identified as an Essential Climate Variable, and it exerts a strong influence on agriculture, hydrology and land-atmosphere interaction. The aim of this project is to develop an affordable (low-cost), durable, and user-friendly sensor and an associated mobile app that allow citizen scientists or K-12 students to measure in-situ soil moisture. The sensor essentially measures the electrical resistance between two metallic rods, and the resistance is converted into SM based on soil-specific calibration equations. The sensor is controlled by a micro-controller (Arduino), and a mobile app (available both for iOS and Android) reads the resistance from the micro-controller and converts it into SM for the soil type selected by the user. Extensive laboratory tests are currently being carried out to standardize the sensor and to calibrate it for various soil types. The sensor will also be tested during field campaigns and recalibrated for field conditions. In addition to the development of the sensor and the mobile app, supporting documentation and videos are also being developed that show the step-by-step process of building the sensor from scratch and the measurement protocols. Initial laboratory calibration and validation of the prototype suggested that the sensor is able to satisfactorily measure SM for sand, loam, sandy loam, and sandy clay loam soil types. The affordable and simple sensor will help citizen scientists to understand the dynamics of SM at their site, and the in-situ data will further be utilized for validation of satellite observations from the SMAP mission.
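A minimal sketch of the app-side conversion from a resistance reading to volumetric soil moisture is shown below. The power-law form and every coefficient are placeholders chosen only to illustrate the soil-type lookup described above; the project's actual soil-specific calibration equations are derived in the laboratory and are not reproduced here.

```python
# Sketch of the app-side conversion from sensed resistance to volumetric soil
# moisture. The power-law model and all coefficients below are placeholders for
# illustration; the project derives soil-specific calibrations in the laboratory.
CALIBRATION = {
    # soil type: (a, b) in the assumed model  theta = a * R**(-b),
    # with R in kilo-ohms and theta as volumetric water content (m^3/m^3)
    "sand":            (0.60, 0.55),
    "loam":            (0.80, 0.50),
    "sandy loam":      (0.70, 0.52),
    "sandy clay loam": (0.90, 0.48),
}

def resistance_to_moisture(resistance_kohm: float, soil_type: str) -> float:
    """Convert a resistance reading to an estimated volumetric soil moisture."""
    if resistance_kohm <= 0:
        raise ValueError("resistance must be positive")
    a, b = CALIBRATION[soil_type]
    theta = a * resistance_kohm ** (-b)
    return min(theta, 0.6)   # clamp to a physically plausible upper bound

print(resistance_to_moisture(15.0, "loam"))   # example reading of 15 kOhm
```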
Xi-cam: a versatile interface for data visualization and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke
2018-05-31
Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
Kierkegaard, Signe; Jørgensen, Peter Bo; Dalgas, Ulrik; Søballe, Kjeld; Mechlenburg, Inger
2015-09-01
During movement tasks, patients with medial compartment knee osteoarthritis use compensatory strategies to minimise the joint load of the affected leg. Movement strategies of the knees and trunk have been investigated, but less is known about movement strategies of the pelvis during advancing functional tasks, and how these strategies are associated with leg extension power. The aim of the study was to investigate pelvic movement strategies and leg extension power in patients with end-stage medial compartment knee osteoarthritis compared with controls. 57 patients (mean age 65.6 years) scheduled for medial uni-compartmental knee arthroplasty, and 29 age and gender matched controls were included in this cross-sectional study. Leg extension power was tested with the Nottingham Leg Extension Power-Rig. Pelvic range of motion was derived from an inertia-based measurement unit placed over the sacrum bone during walking, stair climbing and stepping. Patients had lower leg extension power than controls (20-39%, P < 0.01) and used greater pelvic range of motion during stair and step ascending and descending (P ≤ 0.03, except for pelvic range of motion in the frontal plane during ascending, P > 0.06). Furthermore, an inverse association (coefficient: -0.03 to -0.04; R² = 13-22%) between leg extension power and pelvic range of motion during stair and step descending was found in the patients. Compared to controls, patients with medial compartment knee osteoarthritis use greater pelvic movements during advanced functional performance tests, particularly when these involve descending tasks. Further studies should investigate if it is possible to alter these movement strategies by an intervention aimed at increasing strength and power for the patients.
1990-08-01
the guidance in this report. 1-4. Scope This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design...conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ...carefully examines experience and past performance in the design of similar projects and building types. Quality of
Van Calster, B; Bobdiwala, S; Guha, S; Van Hoorde, K; Al-Memar, M; Harvey, R; Farren, J; Kirk, E; Condous, G; Sur, S; Stalder, C; Timmerman, D; Bourne, T
2016-11-01
A uniform rationalized management protocol for pregnancies of unknown location (PUL) is lacking. We developed a two-step triage protocol to select PUL at high risk of ectopic pregnancy (EP), based on serum progesterone level at presentation (step 1) and the serum human chorionic gonadotropin (hCG) ratio, defined as the ratio of hCG at 48 h to hCG at presentation (step 2). This was a cohort study of 2753 PUL (301 EP), involving a secondary analysis of prospectively and consecutively collected PUL data from two London-based university teaching hospitals. Using a chronological split, we used 1449 PUL for development and 1304 for validation. We aimed to assign PUL as low risk with high confidence (high negative predictive value (NPV)) while classifying most EP as high risk (high sensitivity). The first triage step assigned PUL as low risk using a threshold of serum progesterone at presentation. The remaining PUL were triaged using a novel logistic regression risk model based on hCG ratio and initial serum progesterone (second step), defining low risk as an estimated EP risk of < 5%. On validation, initial serum progesterone ≤ 2 nmol/L (step 1) classified 16.1% of PUL as low risk. Second-step classification with the risk model selected an additional 46.0% of all PUL as low risk. Overall, the two-step protocol classified 62.1% of PUL as low risk, with an NPV of 98.6% and a sensitivity of 92.0%. When the risk model was used in isolation (i.e. without the first step), 60.5% of PUL were classified as low risk with 99.1% NPV and 94.9% sensitivity. PUL can be classified efficiently into being either high or low risk for complications using a two-step protocol involving initial progesterone and hCG levels and the hCG ratio. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
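The structure of this two-step triage is easy to express in code. The sketch below follows only the published structure (progesterone cutoff of 2 nmol/L, then an hCG-ratio risk model with a 5% threshold); the logistic regression coefficients are invented placeholders, not the fitted model from the study.

```python
# Structure of the two-step PUL triage described above. The progesterone cutoff
# (2 nmol/L) and the 5% risk threshold follow the abstract; the logistic
# regression coefficients below are invented placeholders, not the fitted model.
import math

B0, B_RATIO, B_PROG = 1.0, -3.0, -0.2   # placeholder coefficients

def ep_risk(hcg_ratio: float, progesterone: float) -> float:
    """Placeholder logistic model for the probability of ectopic pregnancy."""
    z = B0 + B_RATIO * hcg_ratio + B_PROG * progesterone
    return 1.0 / (1.0 + math.exp(-z))

def triage(progesterone_nmol_l: float, hcg_0h: float, hcg_48h: float) -> str:
    # Step 1: very low initial progesterone -> low risk, no further testing.
    if progesterone_nmol_l <= 2.0:
        return "low risk"
    # Step 2: hCG ratio (48 h / presentation) plus progesterone in a risk model.
    ratio = hcg_48h / hcg_0h
    return "low risk" if ep_risk(ratio, progesterone_nmol_l) < 0.05 else "high risk"

print(triage(progesterone_nmol_l=25.0, hcg_0h=500.0, hcg_48h=1100.0))
```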
Stair ascent with an innovative microprocessor-controlled exoprosthetic knee joint.
Bellmann, Malte; Schmalz, Thomas; Ludwigs, Eva; Blumentritt, Siegmar
2012-12-01
Climbing stairs can pose a major challenge for above-knee amputees as a result of compromised motor performance and limitations to prosthetic design. A new, innovative microprocessor-controlled prosthetic knee joint, the Genium, incorporates a function that allows an above-knee amputee to climb stairs step over step. To execute this function, a number of different sensors and complex switching algorithms were integrated into the prosthetic knee joint. The function is intuitive for the user. A biomechanical study was conducted to assess objective gait measurements and calculate joint kinematics and kinetics as subjects ascended stairs. Results demonstrated that climbing stairs step over step is more biomechanically efficient for an amputee using the Genium prosthetic knee than the previously possible conventional method where the extended prosthesis is trailed as the amputee executes one or two steps at a time. There is a natural amount of stress on the residual musculoskeletal system, and it has been shown that the healthy contralateral side supports the movements of the amputated side. The mechanical power that the healthy contralateral knee joint needs to generate during the extension phase is also reduced. Similarly, there is near normal loading of the hip joint on the amputated side.
Discrimination of Chinese Sauce liquor using FT-IR and two-dimensional correlation IR spectroscopy
NASA Astrophysics Data System (ADS)
Sun, Su-Qin; Li, Chang-Wen; Wei, Ji-Ping; Zhou, Qun; Noda, Isao
2006-11-01
We applied the three-step IR macro-fingerprint identification method to obtain the IR characteristic fingerprints of so-called Chinese Sauce liquor (Moutai liquor and Kinsly liquor) and a counterfeit Moutai. These fingerprints can be used for the identification and discrimination of similar liquor products. The comparison of their conventional IR spectra, as the first step of identification, shows that the primary difference in Sauce liquor is the intensity of the characteristic peaks at 1592 and 1225 cm⁻¹. The comparison of the second-derivative IR spectra, as the second step of identification, shows that the characteristic absorption in 1400-1800 cm⁻¹ is substantially different. The comparison of 2D-IR correlation spectra, as the third and final step of identification, can discriminate the liquors from another direction. Furthermore, the method was successfully applied to the discrimination of a counterfeit Moutai from the genuine Sauce liquor. The success of the three-step IR macro-fingerprint identification in providing a rapid and effective method for the identification of Chinese liquor suggests the potential extension of this technique to the identification and discrimination of other wines and spirits as well.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
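Only the first step of the proposed decomposition is sketched below: extracting a shortest-distance tree of a looped network with Dijkstra's algorithm, so that tree pipes can later be sized by the NLP solver while the remaining chords receive minimum diameters. The toy network and edge lengths are hypothetical, and the multisource extension, NLP sizing and DE seeding described in the paper are not reproduced.

```python
# Step 1 only: extract a shortest-distance tree of a looped network with
# Dijkstra's algorithm. The toy network below is illustrative; the paper's
# multisource extension and the NLP/DE sizing steps are not reproduced.
import heapq

def shortest_distance_tree(edges, source):
    """edges: dict {(u, v): length} for an undirected network.
    Returns {node: parent} describing the tree rooted at `source`."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent

pipes = {("R", "A"): 100, ("A", "B"): 120, ("B", "C"): 90, ("R", "C"): 250, ("A", "C"): 60}
tree = shortest_distance_tree(pipes, source="R")
chords = [e for e in pipes if tree.get(e[1]) != e[0] and tree.get(e[0]) != e[1]]
print(tree)    # tree pipes keep optimized diameters; chords get minimum sizes
print(chords)
```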
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. J. Galyean; A. M. Whaley; D. L. Kelly
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence; and Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
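A rough sketch of the quantification core (Steps 1 to 3) is given below: a nominal HEP multiplied by the product of PSF multipliers, with the commonly cited adjustment factor applied when three or more PSFs are negative. The dependence step and the minimum-value cutoff are omitted, and the nominal values and multipliers in the example are illustrative; actual values must be taken from the NUREG/CR-6883 worksheets.

```python
# Sketch of the SPAR-H quantification structure only (Steps 1-3). Nominal HEPs
# and PSF multipliers must be taken from the NUREG/CR-6883 worksheets; the
# numbers in the example call below are illustrative, not prescribed values.
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}   # commonly cited nominal values

def spar_h_hep(task_type: str, psf_multipliers: list[float]) -> float:
    """Nominal HEP times the product of PSF multipliers, with the adjustment
    factor applied when three or more PSFs are negative (multiplier > 1)."""
    nhep = NOMINAL_HEP[task_type]
    product = 1.0
    for m in psf_multipliers:
        product *= m
    negative = sum(1 for m in psf_multipliers if m > 1.0)
    if negative >= 3:
        hep = nhep * product / (nhep * (product - 1.0) + 1.0)
    else:
        hep = nhep * product
    return min(hep, 1.0)

# Example: a diagnosis HFE with degraded available time, stress and complexity.
print(spar_h_hep("diagnosis", [10.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```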
Advancing the Public Value Movement: Sustaining Extension during Tough Times
ERIC Educational Resources Information Center
Franz, Nancy K.
2011-01-01
Extension must more fully and adeptly embrace the public value movement to be sustainable as a publicly funded organization, or our demise as an organization will continue. The public value steps outlined here and piloted with several Extension systems and national work groups can be informative for others interested in capturing and sharing the…
Gupta, Nimisha; Tripathi, Abhay Mani; Saha, Sonali; Dhinsa, Kavita; Garg, Aarti
2015-07-01
Newer developments in bonding agents have been accompanied by a better understanding of the factors affecting adhesion at the composite-dentin interface, with the aim of improving the longevity of restorations. The present study evaluated the influence of salivary contamination on the tensile bond strength of different generations of adhesive systems (two-step etch-and-rinse, two-step self-etch and one-step self-etch) during different bonding stages to dentin where isolation is not maintained. Superficial dentin surfaces of 90 extracted human molars were randomly divided into three study groups (Group A: two-step etch-and-rinse adhesive system; Group B: two-step self-etch adhesive system; and Group C: one-step self-etch adhesive system) according to the generation of adhesive used. According to the treatment conditions in the different bonding steps, each group was further divided into three subgroups containing ten teeth each. After adhesive application, resin composite blocks were built on the dentin and subsequently light-cured. The teeth were then stored in water for 24 hours before tensile bond strength testing on a universal testing machine. The collected data were statistically analysed using one-way ANOVA and the Tukey HSD test. The one-step self-etch adhesive system showed the highest mean tensile bond strength, followed in descending order by the two-step self-etch and the two-step etch-and-rinse adhesive systems, in both uncontaminated and saliva-contaminated conditions. Unlike the one-step self-etch adhesive system, the two-step self-etch and two-step etch-and-rinse adhesive systems showed reduced tensile bond strength with saliva contamination. Furthermore, both the bonding stage at which contamination occurs and the type of adhesive appear to affect the bond strength of adhesives contaminated with saliva.
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework, Texas Two-Step, to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
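The two-step idea can be illustrated with generic tools: collapse the multiple observations into one effective SISO channel in the Fourier domain, then apply any off-the-shelf SISO deconvolver. The sketch below assumes circular convolution, white Gaussian noise of equal variance across channels and a simple Wiener-type regularized inverse; it is a generic reduction for illustration, not the wavelet/curvelet estimators of the paper.

```python
# Sketch of the two-step idea with generic tools: collapse several blurred,
# noisy observations into a single-channel problem in the Fourier domain, then
# apply an off-the-shelf SISO deconvolution (here a Wiener-type filter).
# Circular convolution and white Gaussian noise are assumed; this is not the
# wavelet/curvelet estimator of the paper.
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 256, 0.05
x = np.zeros(n); x[60:120] = 1.0; x[180] = 2.0          # toy target signal

# Two known blurs (a box car and a ramp), applied by circular convolution.
blurs = [np.ones(9) / 9.0, np.arange(1, 6, dtype=float) / 15.0]
H = [np.fft.fft(h, n) for h in blurs]
Y = [np.fft.fft(np.real(np.fft.ifft(Hi * np.fft.fft(x))) + sigma * rng.standard_normal(n))
     for Hi in H]

# Step 1: sufficient-statistic style combination -> one effective SISO channel.
Z = sum(np.conj(Hi) * Yi for Hi, Yi in zip(H, Y))        # combined observation
G = sum(np.abs(Hi) ** 2 for Hi in H)                     # effective blur (real)

# Step 2: any SISO deconvolver; here a Wiener-type regularized inverse.
nsr = 1e-2                                               # assumed noise-to-signal ratio
x_hat = np.real(np.fft.ifft(Z / (G + nsr)))
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```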
Brion, F; Rogerieux, F; Noury, P; Migeon, B; Flammarion, P; Thybaud, E; Porcher, J M
2000-01-14
A two-step purification protocol was developed to purify rainbow trout (Oncorhynchus mykiss) vitellogenin (Vtg) and was successfully applied to the Vtg of chub (Leuciscus cephalus) and gudgeon (Gobio gobio). Capture and intermediate purification were performed by anion-exchange chromatography on a Resource Q column, and a polishing step was performed by gel permeation chromatography on a Superdex 200 column. This rapid two-step purification procedure gave a pure solution of Vtg, as assessed by silver-staining electrophoresis and immunochemical characterisation.
2011-01-01
Background: Objectively assessed physical performance is a strong predictor of morbidity and premature death, and there is an increasing interest in the role of sarcopenia in many chronic diseases. There is a need for robust and valid functional tests in clinical practice. Therefore, the repeatability and validity of a newly developed maximal step-up test (MST) were assessed. Methods: The MST, assessing maximal step-up height (MSH) in 3-cm increments, was evaluated in 60 healthy middle-aged subjects, 30 women and 30 men. The repeatability of MSH and the correlations between MSH and isokinetic knee extension peak torque (IKEPT), self-reported physical function (SF-36, PF), patient demographics and self-reported physical activity were investigated. Results: The repeatability between occasions and between testers was 6 cm. MSH (range 12-45 cm) was significantly correlated with IKEPT (r = 0.68, P < 0.001), SF-36 PF score (r = 0.29, P = 0.03), sex, age, weight and BMI. The results also show that an MSH above 32 cm identified the subjects in our study with no limitation in self-reported physical function. Conclusions: The standardised MST is considered a reliable leg function test for clinical practice. MSH was related to knee extension strength and self-reported physical function. The precision of the MST for identification of limitations in physical function needs further investigation. PMID:21854575
Welch, Vivian; Jull, J; Petkovic, J; Armstrong, R; Boyer, Y; Cuervo, L G; Edwards, Sjl; Lydiatt, A; Gough, D; Grimshaw, J; Kristjansson, E; Mbuagbaw, L; McGowan, J; Moher, D; Pantoja, T; Petticrew, M; Pottie, K; Rader, T; Shea, B; Taljaard, M; Waters, E; Weijer, C; Wells, G A; White, H; Whitehead, M; Tugwell, P
2015-10-21
Health equity concerns the absence of avoidable and unfair differences in health. Randomized controlled trials (RCTs) can provide evidence about the impact of an intervention on health equity for specific disadvantaged populations or in general populations; this is important for equity-focused decision-making. Previous work has identified a lack of adequate reporting guidelines for assessing health equity in RCTs. The objective of this study is to develop guidelines to improve the reporting of health equity considerations in RCTs, as an extension of the Consolidated Standards of Reporting Trials (CONSORT). A six-phase study using integrated knowledge translation governed by a study executive and advisory board will assemble empirical evidence to inform the CONSORT-equity extension. To create the guideline, the following steps are proposed: (1) develop a conceptual framework for identifying "equity-relevant trials," (2) assess empirical evidence regarding reporting of equity-relevant trials, (3) consult with global methods and content experts on how to improve reporting of health equity in RCTs, (4) collect broad feedback and prioritize items needed to improve reporting of health equity in RCTs, (5) establish consensus on the CONSORT-equity extension: the guideline for equity-relevant trials, and (6) broadly disseminate and implement the CONSORT-equity extension. This work will be relevant to a broad range of RCTs addressing questions of effectiveness for strategies to improve practice and policy in the areas of social determinants of health, clinical care, health systems, public health, and international development, where health and/or access to health care is a primary outcome. The outcomes include a reporting guideline (CONSORT-equity extension) for equity-relevant RCTs and a knowledge translation strategy to broadly encourage its uptake and use by journal editors, authors, and funding agencies.
Corruption and Educational Outcomes: Two Steps Forward, One Step Back
ERIC Educational Resources Information Center
Huang, Francis Lim
2008-01-01
Corruption is a problem that continues to plague developed and developing countries worldwide. Previous studies have explored the negative implications of corruption on several aspects of human development, but, despite its serious and long-lasting consequences, the impact of corruption on educational outcomes has started to receive attention only…
NASA Astrophysics Data System (ADS)
Syamsuri, B. S.; Anwar, S.; Sumarna, O.
2017-09-01
This research aims to develop oxidation-reduction reaction (redox) teaching material using the Four Steps Teaching Material Development (4S TMD) method, which consists of four steps: selection, structuring, characterization and didactical reduction. This paper is the first part of the development of the teaching material and covers the selection and structuring steps. In the selection step, the development of the teaching material begins with the development of redox concepts based on curriculum demands, followed by the development of fundamental concepts sourced from international textbooks, and finally the development of values or skills that can be integrated with redox concepts. The result of this selection step is the subject matter of the redox concept and the values that can be integrated with it. In the structuring step, three products were developed: a concept map that shows the relationships between redox concepts; a macrostructure that guides the systematic writing of the teaching material; and multiple representations that connect the macroscopic, submicroscopic, and symbolic levels of representation. The two steps in this first part of the study produced a draft of the teaching material. The draft was evaluated by an expert lecturer in the field of chemical education to assess the feasibility of the teaching material.
Modelization of three-layered polymer coated steel-strip ironing process using a neural network
NASA Astrophysics Data System (ADS)
Sellés, M. A.; Schmid, S. R.; Sánchez-Caballero, S.; Seguí, V. J.; Reig, M. J.; Pla, R.
2012-04-01
An alternative to the traditional can manufacturing process is to use plastic-laminated rolled steels as base stocks. This material consists of pre-heated steel coils that are sandwiched between one or two sheets of polymer. The heated sheets are then immediately quenched, which yields a strong bond between the layers. Such polymer-coated steels were investigated by Jaworski [1,2] and Sellés [3], and found to be suitable for ironing under carefully controlled conditions. A novel multi-layer polymer-coated steel has been developed for container applications. This material presents an interesting extension to previous research on polymer-laminated steel in ironing, and offers several advantages over the previous material (Sellés [3]). This paper presents a model of the ironing process (the most critical step in can manufacturing) built using a neural network.
Focal Adhesion-Independent Cell Migration.
Paluch, Ewa K; Aspalter, Irene M; Sixt, Michael
2016-10-06
Cell migration is central to a multitude of physiological processes, including embryonic development, immune surveillance, and wound healing, and deregulated migration is key to cancer dissemination. Decades of investigations have uncovered many of the molecular and physical mechanisms underlying cell migration. Together with protrusion extension and cell body retraction, adhesion to the substrate via specific focal adhesion points has long been considered an essential step in cell migration. Although this is true for cells moving on two-dimensional substrates, recent studies have demonstrated that focal adhesions are not required for cells moving in three dimensions, in which confinement is sufficient to maintain a cell in contact with its substrate. Here, we review the investigations that have led to challenging the requirement of specific adhesions for migration, discuss the physical mechanisms proposed for cell body translocation during focal adhesion-independent migration, and highlight the remaining open questions for the future.
Task Assignment Heuristics for Parallel and Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, the smallest task first (STF) and the largest task first (LTF), are first presented. They are then systematically refined to further reduce communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
Technical Assessment of the National Full Scale Aerodynamic Complex Fan Blades Repair
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Dixon, Peter G.; St.Clair, Terry L.; Johns, William E.
1998-01-01
This report describes the principal activities of a technical review team formed to address National Full Scale Aerodynamic Complex (NFAC) blade repair problems. In particular, the problem of lack of good adhesive bonding of the composite overwrap to the Hyduliginum wood blade material was studied extensively. Description of action plans and technical elements of the plans are provided. Results of experiments designed to optimize the bonding process and bonding strengths obtained on a full scale blade using a two-step cure process with adhesive primers are presented. Consensus recommendations developed by the review team in conjunction with the NASA Ames Fan Blade Repair Project Team are provided along with lessons learned on this program. Implementation of recommendations resulted in achieving good adhesive bonds between the composite materials and wooden blades, thereby providing assurance that the repaired fan blades will meet or exceed operational life requirements.
Convergence experiments with a hydrodynamic model of Port Royal Sound, South Carolina
Lee, J.K.; Schaffranek, R.W.; Baltzer, R.A.
1989-01-01
A two-dimensional, depth-averaged, finite-difference flow/transport model, SIM2D, is being used to simulate tidal circulation and transport in the Port Royal Sound, South Carolina, estuarine system. Models of a subregion of the Port Royal Sound system have been derived from an earlier-developed model of the entire system having a grid size of 600 ft. The submodels were implemented with grid sizes of 600, 300, and 150 ft in order to determine the effects of changes in grid size on computed flows in the subregion, which is characterized by narrow channels and extensive tidal flats that flood and dewater with each rise and fall of the tide. Tidal amplitudes changed by less than 5 percent as the grid size was decreased. Simulations were performed with the 300-foot submodel for time steps of 60, 30, and 15 s. Study results are discussed.
Finite difference methods for transient signal propagation in stratified dispersive media
NASA Technical Reports Server (NTRS)
Lam, D. H.
1975-01-01
Explicit difference equations are presented for the solution of a signal of arbitrary waveform propagating in an ohmic dielectric, a cold plasma, a Debye model dielectric, and a Lorentz model dielectric. These difference equations are derived from the governing time-dependent integro-differential equations for the electric fields by a finite difference method. A special difference equation is derived for the grid point at the boundary of two different media. Employing this difference equation, transient signal propagation in an inhomogeneous media can be solved provided that the medium is approximated in a step-wise fashion. The solutions are generated simply by marching on in time. It is concluded that while the classical transform methods will remain useful in certain cases, with the development of the finite difference methods described, an extensive class of problems of transient signal propagating in stratified dispersive media can be effectively solved by numerical methods.
A bi-objective model for robust yard allocation scheduling for outbound containers
NASA Astrophysics Data System (ADS)
Liu, Changchun; Zhang, Canrong; Zheng, Li
2017-01-01
This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly including the arrival and operation time of calling vessels. Based on the time buffer inserting method, a bi-objective model is constructed to minimize the total operational cost and to maximize the robustness of fighting against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n-steps ahead with the uncertainty factors set as their respective expected values; in the second stage, based on the solutions obtained in the first stage and with consideration of uncertainty factors, a neighbourhood search heuristic is employed to generate robust solutions that can fight better against the fluctuation of uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.
A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods
NASA Astrophysics Data System (ADS)
Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.
2001-01-01
In the present work, we propose a frequency-domain interferometric imaging (FII) technique for a better knowledge of the vertical distribution of the atmospheric scatterers detected by MST radars. This is an extension of the dual frequency-domain interferometry (FDI) technique to multiple frequencies. Its objective is to reduce the ambiguity (resulting from the use of only two adjacent frequencies) inherent in the FDI technique. Different methods, commonly used in antenna array processing, are first described within the context of application to the FII technique. These methods are Fourier-based imaging, the Capon method, and the singular value decomposition method used with the MUSIC algorithm. Some preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems very promising.
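To make the high-resolution option concrete, the sketch below computes a Capon (minimum-variance) range profile from a synthetic multifrequency covariance matrix. The two-way steering-vector convention, the carrier frequencies and the synthetic layer positions are assumptions for illustration; radar calibration and the MUSIC variant are not reproduced.

```python
# Generic Capon (minimum-variance) range imaging from multifrequency covariance
# data, as a sketch of the kind of high-resolution estimator mentioned above.
# The two-way steering vector a_m(r) = exp(-2j k_m r) is an assumed convention.
import numpy as np

c = 3e8
freqs = 46.5e6 + np.arange(5) * 0.1e6          # five closely spaced carriers (Hz)
k = 2 * np.pi * freqs / c                      # wavenumbers

def capon_profile(R, ranges):
    """Capon power estimate at each candidate range, given the MxM covariance R."""
    m = R.shape[0]
    R_inv = np.linalg.inv(R + 1e-6 * np.trace(R).real / m * np.eye(m))  # diagonal loading
    power = []
    for r in ranges:
        a = np.exp(-2j * k * r)                # steering vector across frequencies
        power.append(1.0 / np.real(np.conj(a) @ R_inv @ a))
    return np.array(power)

# Synthetic covariance: two thin layers inside the range gate plus weak noise.
layers = [30.0, 110.0]                          # metres within the range gate
A = np.array([np.exp(-2j * k * r) for r in layers]).T
R = A @ A.conj().T + 0.01 * np.eye(len(freqs))

ranges = np.linspace(0.0, 150.0, 301)
profile = capon_profile(R, ranges)
inner = profile[1:-1]
mask = (inner > profile[:-2]) & (inner > profile[2:]) & (inner > 0.5 * profile.max())
print("estimated layer ranges (m):", ranges[1:-1][mask])
```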
Petrakis, Eleftherios A; Cagliani, Laura R; Polissiou, Moschos G; Consonni, Roberto
2015-04-15
In the present work, a preliminary study for the detection of adulterated saffron and the identification of the adulterant used, by means of ¹H NMR and chemometrics, is reported. Authentic Greek saffron and four typical plant-derived materials utilised as bulking agents in saffron, i.e., Crocus sativus stamens, safflower, turmeric, and gardenia, were investigated. A two-step approach, relying on the application of both OPLS-DA and O2PLS-DA models to the ¹H NMR data, was adopted to perform authentication and prediction of authentic and adulterated saffron. Taking into account the deficiency of established methodologies for detecting saffron adulteration with plant adulterants, the method developed proved reliable in assessing the type of adulteration and could be viable for dealing with extensive saffron fraud at a minimum adulteration level of 20% (w/w). Copyright © 2014 Elsevier Ltd. All rights reserved.
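The shape of the chemometric step can be sketched with a plain PLS-DA stand-in, since scikit-learn provides no OPLS-DA or O2PLS-DA implementation. The binned "spectra" below are random placeholders with an artificial adulterant peak, used only to show how a discriminant model is fitted and thresholded; none of this is measured saffron data.

```python
# Shape of the chemometric step only: a plain PLS-DA stand-in (scikit-learn has
# no OPLS-DA/O2PLS-DA), trained on binned 1H NMR spectra. The spectra below are
# random placeholders, not measured saffron data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_bins = 150
authentic = rng.normal(0.0, 1.0, size=(20, n_bins))
adulterated = rng.normal(0.0, 1.0, size=(20, n_bins))
adulterated[:, 40:45] += 1.5                 # pretend adulterant peaks

X = np.vstack([authentic, adulterated])
y = np.array([0] * 20 + [1] * 20)            # 0 = authentic, 1 = adulterated

pls = PLSRegression(n_components=2)
pls.fit(X, y)
scores = pls.predict(X).ravel()
predicted = (scores > 0.5).astype(int)       # threshold the continuous PLS output
print("training accuracy:", (predicted == y).mean())
```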
Concept definition study for an extremely large aerophysics range facility
NASA Technical Reports Server (NTRS)
Swift, H.; Witcofski, R.
1992-01-01
The development of a large aerophysical ballistic range facility is considered to study large-scale hypersonic flows at high Reynolds numbers for complex shapes. A two-stage light gas gun is considered for the hypervelocity launcher, and the extensive range tankage is discussed with respect to blast suppression, model disposition, and the sabot impact tank. A layout is given for the large aerophysics facility, and illustrations are provided for key elements such as the guide rail. The paper shows that such a facility could be used to launch models with diameters approaching 250 mm at velocities of 6.5 km/s with peak achievable accelerations of not more than 85.0 kgs. The envisioned range would provide gas-flow facilities capable of controlling the modeled quiescent atmospheric conditions. The facility is argued to be a feasible and important step in the investigation and experiment of such hypersonic vehicles as the National Aerospace Plane.
Miranda-Molina, Alfonso; Castillo, Edmundo; Lopez Munguia, Agustin
2017-07-15
Blastose, a natural disaccharide found in honey, is usually found as a byproduct of fructo-oligosaccharide synthesis from sucrose with fructosyltransferases. In this study, we describe a novel two-step biosynthetic route to obtain blastose, designed from a detailed observation of the acceptor structural requirements for fructosylation by B. subtilis levansucrase (SacB). The strategy consisted first of the synthesis of the trisaccharide O-β-d-Fruf-(2↔6)-O-α-d-Glcp-(1↔1)-α-d-Glcp through a regioselective β-d-transfructosylation of trehalose (Tre), which acts as acceptor in a reaction catalyzed by SacB using sucrose or levan as fructosyl donor. In this reaction, levansucrase (LS) regioselectively transfers a fructosyl residue to either C6-OH group of the glucose residues in Tre. The resulting trisaccharide, obtained in 23% molar yield based on trehalose, was purified and fully characterized by extensive NMR studies. In the second step, the trisaccharide is specifically hydrolyzed by trehalase to obtain blastose in 43.2% molar yield based on the trisaccharide. This is the first report describing the formation of blastose through a sequential transfructosylation-hydrolysis reaction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera
2018-01-01
To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75 gm GTT) and "two-step" (100 gm GTT) methods. A prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was prevalence of GDM, and secondary outcomes included birthweight, gestational age, rates of preterm birth, small/large-for-gestational age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group; 32.0% versus 10.3%. Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p =0.022). Likewise, the rate of large-for-date tended to be higher in the two-step group, but was not significant. The one-step approach is associated with very high prevalence of GDM among Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.
Surname distribution in France: a distance analysis by a distorted geographical map.
Mourrieras, B; Darlu, P; Hochez, J; Hazout, S
1995-01-01
The distribution of surnames in 90 distinct regions in France during two successive periods, 1889-1915 and 1916-1940, is analysed from the civil birth registers of the 36,500 administrative units in France. A new approach, called 'Mobile Site Method' (MSM), is developed to allow representation of a surname distance matrix by a distorted geographical map. A surname distance matrix between the various regions in France is first calculated, then a distorted geographical map called the 'surname similarity map' is built up from the surname distances between regions. To interpret this map we draw (a) successive map contours obtained during the step-by-step distortion process, revealing zones of high surname dissimilarity, and (b) maps in grey levels representing the displacement magnitude, and allowing the segmentation of the geographical and surname maps into 'homogeneous surname zones'. By integrating geography and surname information in the same analysis, and by comparing results obtained for the two successive periods, the MSM approach produces convenient maps showing: (a) 'regionalism' of some peripheral populations such as Pays Basque, Alsace, Corsica and Brittany; (b) the presence of preferential axes of communications (Rhodanian corridor, Garonne valley); (c) barriers such as the Central Massif, Vosges; (d) the weak modifications of the distorted maps associated with the two periods studied suggest an extension (but limited) of the tendency of surname uniformity in France. These results are interpreted, in the nineteenth- and twentieth century context, as the consequences of a slow process of local migrations occurring over a long period of time.
Risk as a Resource - A New Paradigm
NASA Technical Reports Server (NTRS)
Gindorf, Thomas E.
1996-01-01
NASA must change dramatically because of the current United States federal budget climate. The American people and their elected officials have mandated a smaller, more efficient and effective government. For the past decade, NASA's budget had grown at or slightly above the rate of inflation. In that era, taking all steps to avoid the risk of failure was the rule. Spacecraft development was characterized by extensive analyses, numerous reviews, and multiple conservative tests. This methodology was consistent with the long available schedules for developing hardware and software for very large, billion dollar spacecraft. Those days are over. The time when every identifiable step was taken to avoid risk is being replaced by a new paradigm which manages risk in much the same way as other resources (schedule, performance, or dollars) are managed. While success is paramount to survival, it can no longer be bought with a large growing NASA budget.
Hydrolysis of ferric chloride in solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lussiez, G.; Beckstead, L.
1996-11-01
The Detox™ process uses concentrated ferric chloride and small amounts of catalysts to oxidize organic compounds. It is under consideration for oxidizing transuranic organic wastes. Although the solution is reused extensively, at some point it will reach the acceptable limit of radioactivity or the maximum solubility of the radioisotopes. This solution could be cemented, but the volume would be increased substantially because of the poor compatibility of chlorides and cement. A process has been developed that recovers the chloride ions as HCl and either minimizes the volume of radioactive waste or permits recycling of the radioactive chlorides. The process involves a two-step hydrolysis at atmospheric pressure, or preferably under a slight vacuum, and relatively low temperature, about 200 °C. During the first step of the process, hydrolysis occurs according to the reaction below: FeCl3 (liquid) + H2O → FeOCl (solid) + 2 HCl (gas). During the second step, the hot, solid iron oxychloride is sprayed with water or placed in contact with steam, and hydrolysis proceeds to the iron oxide according to the following reaction: 2 FeOCl (solid) + H2O → Fe2O3 (solid) + 2 HCl (gas). The iron oxide, which contains radioisotopes, can then be disposed of by cementation or encapsulation. Alternately, these chlorides can be washed off of the solids and can then either be recycled or disposed of in some other way.
Redefining the lower statistical limit in x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Marschner, M.; Birnbacher, L.; Willner, M.; Chabior, M.; Fehringer, A.; Herzen, J.; Noël, P. B.; Pfeiffer, F.
2015-03-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated and developed as a potentially very interesting extension of conventional CT, because it promises to provide high soft-tissue contrast for weakly absorbing samples. For data acquisition several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for lower radiation dose, the photon counts in a single stepping position are very low. In this case, the currently used phase-retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. New measurement procedures rely on a linear approximation of the sinusoidal phase stepping curve around the zero crossings. In this case only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values. However, typically nearly all pixels are within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. That means that the phase signal can be retrieved even with very low photon counts and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
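A minimal form of such a linearized two-frame estimate is sketched below. It assumes two frames acquired half a grating period apart at the reference zero crossings, a visibility known from a reference scan, and the model I± = a(1 ± V sin φ), so that for small φ the phase follows from the intensity difference; this is a generic approximation for illustration, not the authors' exact estimator or noise model.

```python
# Illustrative two-frame linear phase estimate, assuming frames taken half a
# grating period apart at the reference zero crossings and a visibility V known
# from a reference scan: I± = a(1 ± V sin(phi)), so for small phi
# phi ≈ (I+ - I-) / (V * (I+ + I-)).
import numpy as np

def two_frame_phase(i_plus, i_minus, visibility):
    """Linearized differential-phase estimate from two stepping positions."""
    return (i_plus - i_minus) / (visibility * (i_plus + i_minus))

# Quick check against the full sinusoidal model at low photon counts.
rng = np.random.default_rng(0)
a, V, phi_true = 50.0, 0.3, 0.08          # mean counts per pixel, visibility, phase
i_plus = rng.poisson(a * (1 + V * np.sin(phi_true)), size=10000).astype(float)
i_minus = rng.poisson(a * (1 - V * np.sin(phi_true)), size=10000).astype(float)
phi_hat = two_frame_phase(i_plus, i_minus, V)
print("mean estimate:", phi_hat.mean(), "std:", phi_hat.std())
```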
Use of ultra-lightweight geofoam to reduce stresses in highway culvert extensions.
DOT National Transportation Integrated Search
2005-10-01
Culvert extension under highway embankment construction is a regular and important practice when roadway widening occurs. At some existing sites, concrete thickness and reinforcing steel of culvert tops and walls were stepped-down in sections of the ...
A Two-Step Approach for Producing an Ultrafine-Grain Structure in Cu-30Zn Brass (Postprint)
2015-08-13
A two-step approach involving cryogenic rolling and subsequent recrystallization annealing was developed to produce an ultrafine-grain structure in Cu-30Zn brass. The recrystallization anneal was performed at 400 °C (0.55 Tm, where Tm is the melting point) for times ranging from 1 min to 10 hours, followed by water quenching.
Friedrichs systems in a Hilbert space framework: Solvability and multiplicity
NASA Astrophysics Data System (ADS)
Antonić, N.; Erceg, M.; Michelangeli, A.
2017-12-01
The Friedrichs (1958) theory of positive symmetric systems of first order partial differential equations encompasses many standard equations of mathematical physics, irrespective of their type. This theory was recast in an abstract Hilbert space setting by Ern, Guermond and Caplain (2007), and by Antonić and Burazin (2010). In this work we make a further step, presenting a purely operator-theoretic description of abstract Friedrichs systems, and proving that any pair of abstract Friedrichs operators admits bijective extensions with a signed boundary map. Moreover, we provide sufficient and necessary conditions for existence of infinitely many such pairs of spaces, and by the universal operator extension theory (Grubb, 1968) we get a complete identification of all such pairs, which we illustrate on two concrete one-dimensional examples.
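For orientation, a standard concrete instance of such a system is written out below: the classical positive symmetric (Friedrichs) operator and its formal adjoint on an open set of R^d. This is the textbook formulation assumed here for context, not the abstract Hilbert-space operators constructed in the paper.

```latex
% Classical Friedrichs (positive symmetric) system on an open set
% $\Omega \subset \mathbb{R}^d$: the concrete prototype of the abstract
% operators discussed above (standard formulation, not taken from the paper).
\[
  T u \;=\; \sum_{k=1}^{d} A_k\,\partial_k u + C\,u = f ,
  \qquad
  \widetilde{T} u \;=\; -\sum_{k=1}^{d} \partial_k(A_k u) + C^{\mathsf T} u ,
\]
\[
  A_k = A_k^{\mathsf T}
  \quad\text{and}\quad
  C + C^{\mathsf T} - \sum_{k=1}^{d} \partial_k A_k \;\ge\; 2\mu_0 I
  \quad\text{for some } \mu_0 > 0 .
\]
```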
Zhang, Jian-Guo; Ohta, Toshiki; Ishikawa-Takata, Kazuko; Tabata, Izumi; Miyashita, Mitsumasa
2003-09-01
The relationships among walk steps, exercise habits and peak oxygen consumption (VO2peak), ventilatory threshold (VT) and leg extension power (LEP) were examined in 709 apparently healthy Japanese subjects (male 372, female 337) aged 30-69 years. Walk steps were evaluated using a pedometer. VO2peak and VT were assessed by a cycle ergometer test, while LEP was measured with an isokinetic leg extension system (Combi, Anaero Press 3500, Japan). Subjects who participated in exercise three times or more a week demonstrated significantly greater VO2peak and VT when compared with subjects without exercise habits. When a separate analysis was conducted on subjects who exercised fewer than three times per week, we found that the subgroup with the highest number of walk steps showed significantly greater VT in all male subjects and female subjects aged 30-49 years, but a significantly greater VO2peak only in females aged 30-49 years, when compared to the subgroup with the fewest walk steps. These results suggest that although some people exercise less than three times a week, if they are quite active in daily life, such activities might also confer benefits upon their fitness.
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collision. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other was developed by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
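The methodological contrast lies in how each scheme samples a scattering angle per time step, which the sketch below illustrates. The physical prefactors from the two papers are not reproduced: the dimensionless collision parameter s is supplied directly, and its conventions differ between the two methods, so equal numerical values are not directly comparable. SciPy is assumed for the root solve in the Nanbu branch; treat the whole block as an illustration of the sampling step only.

```python
# Sketch of the per-collision scattering-angle sampling that distinguishes the
# two schemes. The normalized collision parameter s (proportional to the time
# step and collision frequency) is supplied directly; physical prefactors and
# the velocity-rotation bookkeeping of the full methods are omitted.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def takizuka_abe_cos_theta(s, size):
    """Takizuka-Abe style: delta = tan(theta/2) drawn from a zero-mean Gaussian."""
    delta = rng.normal(0.0, np.sqrt(s), size)
    return (1.0 - delta ** 2) / (1.0 + delta ** 2)      # cos(theta) from tan(theta/2)

def nanbu_cos_chi(s, size):
    """Nanbu style: cumulative angle chi sampled via a parameter A(s)
    defined by coth(A) - 1/A = exp(-s)."""
    A = brentq(lambda a: 1.0 / np.tanh(a) - 1.0 / a - np.exp(-s), 1e-6, 1e6)
    u = rng.random(size)
    return np.log(np.exp(-A) + 2.0 * u * np.sinh(A)) / A

s = 0.05    # note: the parameter conventions of the two papers differ
for name, sample in (("Takizuka-Abe", takizuka_abe_cos_theta(s, 200000)),
                     ("Nanbu", nanbu_cos_chi(s, 200000))):
    print(f"{name}: <1 - cos(angle)> = {np.mean(1.0 - sample):.4f}")
```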
Purification of adenoviral vectors by combined anion exchange and gel filtration chromatography.
Eglon, Marc N; Duffy, Aoife M; O'Brien, Timothy; Strappe, Padraig M
2009-11-01
Adenoviral vectors are used extensively in human gene therapy trials and in vaccine development. Large-scale GMP production requires a downstream purification process, and liquid chromatography is emerging as the most powerful mode of purification, enabling the production of vectors at a clinically relevant scale and quality. The present study describes the development of a two-step high-performance liquid chromatography (HPLC) process combining anion exchange (AIEX) and gel filtration (GF) in comparison with the caesium chloride density gradient method. HEK-293 cells were cultured in ten-layer CellStacks and infected with 10 pfu/cell of adenoviral vector expressing green fluorescent protein (Ad5-GFP). Cell-bound virus was harvested and benzonase added to digest DNA; the crude lysate was clarified by centrifugation and filtration prior to HPLC. Chromatography fractions were added to HEK-293 cells and GFP expression measured using a fluorescent plate reader. Using AIEX then GF resulted in an adenoviral vector with purity comparable to Ad5-GFP purified by CsCl, whereas the reverse process (GF-AIEX) showed reduced purity by electrophoresis and required further buffer exchange of the product. The optimal process (AIEX-GF) resulted in a vector yield of 2.3 x 10^7 pfu/cm^2 of cell culture harvested, compared to 3.3 x 10^7 pfu/cm^2 for CsCl. The process recovery for the HPLC process was 36% compared to 27.5% for CsCl, and total virion to infectious particle ratios of 18 and 11, respectively, were measured. We present a simple two-step chromatography process that is capable of producing high-quality adenovirus at a titre suitable for scale-up and clinical translation.
First steps of processing VLBI data of space probes with VieVS
NASA Astrophysics Data System (ADS)
Plank, L.; Böhm, J.; Schuh, H.
2011-07-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics (IGG) of the Vienna University of Technology has developed the Vienna VLBI Software VieVS, which is capable of processing geodetic VLBI data in NGS format. We are constantly upgrading the software, e.g., by developing a scheduling tool or extending it from single-session solutions to a so-called global solution, allowing the joint analysis of many sessions covering several years. In this presentation we report on first steps to enable the processing of space VLBI data with the software. Driven by the recently increasing number of space VLBI applications, our goal is the geodetic usage of such data, primarily concerning frame ties between various reference frames, e.g., by connecting the dynamic reference frame of a space probe with the kinematically defined International Celestial Reference Frame (ICRF). The main parts of the software extension with respect to the existing VieVS are the treatment of fast-moving targets, the implementation of a delay model for radio emitters at finite distances, and the adequate mathematical model and adjustment of the particular unknowns. Actual work has been done for two mission scenarios so far: on the one hand, differential VLBI (D-VLBI) data from the two sub-satellites of the Japanese lunar mission Selene were processed; on the other hand, VLBI observations of GNSS satellites were modelled in VieVS. Besides some general aspects, we give details on the calculation of the theoretical delay (delay model for moving sources at finite distances) and its realization in VieVS. First results with real data and comparisons with best-fit mission orbit data are also presented.
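A minimal geometric sketch of the finite-distance delay mentioned above (purely illustrative; the relativistic, tropospheric and station-motion terms handled in real analysis software are omitted): for a target at finite distance the plane-wave approximation no longer holds, so the delay is simply the difference in light-travel time from the source position to the two stations.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def finite_distance_delay(source, station1, station2):
    """Purely geometric VLBI delay for a target at finite distance.

    All positions are 3-vectors in a common frame [m].  Relativistic,
    tropospheric and station-motion corrections are deliberately omitted.
    """
    source = np.asarray(source, dtype=float)
    r1 = np.linalg.norm(source - np.asarray(station1, dtype=float))
    r2 = np.linalg.norm(source - np.asarray(station2, dtype=float))
    return (r2 - r1) / C  # arrival-time difference, station 2 minus station 1
```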
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandewouw, Marlee M., E-mail: marleev@mie.utoronto
Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors' method comprises two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is novelly extended to explore a 3D, discrete environment, which is the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.
A SiQuENC for solving physics problems
NASA Astrophysics Data System (ADS)
Liao, David
2018-04-01
Students often struggle in AP Physics 1 because they have not been previously trained to develop qualitative arguments. Extensive literature on multiple representations and qualitative reasoning provides strategies to address this challenge. Table I presents three examples, including SiQuENC, which I adapted from a strategy promoted by Etkina et al. To remind students that they can use qualitative reasoning (e.g., arguing from proportionalities), rather than relying only on algebra, I replaced "Solve" with "Analyze." I added a "Communicate" step to guide planning of written responses to AP Physics 1 and 2 questions. To perform this step, draw a circled number around each key point identified in figures, equations, and sentence fragments. Then, convert numbered points into sentences.
Filling the gap: Developing health economics competencies for baccalaureate nursing programs.
Platt, Maia; Kwasky, Andrea; Spetz, Joanne
2016-01-01
The need for greater involvement of the nursing profession in cost containment efforts has been documented extensively. More thorough education of nurses in the subject of health economics (HE) is one of the factors that could contribute toward achievement of that goal. The project's main contribution is the development of the unique list of essential HE competencies for baccalaureate nursing students. The proposed competencies were developed and validated using the protocol by Lynn (1986) for two-stage content validation of psychometric instruments. An additional validation step that included a nationwide survey of nurse administrators was conducted to measure the value they place on the health economics-related skills and knowledge of their employees. A set of six HE competencies was developed. Their validity was unanimously approved by the panel of five experts and additionally supported by the survey results (with individual competencies' approval rates of 67% or higher). The incorporation of economic thinking into the nationwide standards of baccalaureate nursing education, and professional nursing competencies, will enhance the capacity of the nursing workforce to lead essential change in the delivery of high-value affordable health care nationwide. Copyright © 2016 Elsevier Inc. All rights reserved.
Relationship between Bruxism and Malocclusion among Preschool Children in Isfahan
Ghafournia, Maryam; Hajenourozali Tehrani, Maryam
2012-01-01
Background and aims Bruxism is defined as a habitual nonfunctional forceful contact between occlusal tooth surfaces. In younger children bruxism may be a consequence of the masticatory neuromuscular system immaturity. The aim of this study was to assess the prevalence of bruxism and investigate the relationship between occlusal factors and bruxism among preschool children. Materials and methods In this cross-sectional survey, 400 3-6-year-old children were selected randomly from different preschools in Isfahan, Iran. The subjects were divided into two groups of bruxers and non-bruxers as determined by the clinical examination and their parents' reports. The examiner recorded the primary canine (Class I, Class II, and Class III) and molar (mesial step, distal step, flush terminal plane) relationships, the existence of anterior and posterior crossbite, and open and deep bite. Also, rotated teeth, food impaction, sharp tooth edges, high restorations, extensive tooth caries, and painful teeth (categorized as irritating tooth conditions) were evaluated. The relationship between bruxism and occlusal factors and irritating tooth conditions was evaluated with the chi-square test. Results Bruxism was seen in 12.75% of the subjects. Statistically significant relationships existed between bruxism and some occlusal factors, such as flush terminal plane (P = 0.023) and mesial step (P = 0.001), and also between food impaction, extensive tooth caries, tooth pain, sharp tooth edges and bruxism. Conclusion The results showed a significant relationship of bruxism with primary molar relationships and irritating tooth conditions among preschool children. PMID:23277860
Multidimensional FEM-FCT schemes for arbitrary time stepping
NASA Astrophysics Data System (ADS)
Kuzmin, D.; Möller, M.; Turek, S.
2003-05-01
The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
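For readers unfamiliar with flux-corrected transport, the following minimal 1D finite-difference sketch (periodic boundaries, upwind as the low-order and Lax-Wendroff as the high-order scheme) shows how Zalesak's limiter scales back the antidiffusive fluxes just enough to keep the solution within local bounds; it illustrates the generic FCT idea, not the finite-element formulation of the paper.

```python
import numpy as np

def fct_advection_step(u, a, dx, dt):
    """One FCT step for u_t + a u_x = 0 (a > 0, periodic boundaries)."""
    up1 = np.roll(u, -1)                                # u[i+1]
    f_low = a * u                                       # upwind flux at i+1/2
    f_high = 0.5 * a * (u + up1) - 0.5 * a**2 * dt / dx * (up1 - u)
    A = f_high - f_low                                  # antidiffusive flux

    u_td = u - dt / dx * (f_low - np.roll(f_low, 1))    # low-order (transported-diffused) update

    # Local bounds built from old and low-order solutions
    stack = [np.roll(u_td, 1), u_td, np.roll(u_td, -1), np.roll(u, 1), u, np.roll(u, -1)]
    u_max, u_min = np.max(stack, axis=0), np.min(stack, axis=0)

    # Zalesak's limiter: sum of incoming / outgoing antidiffusive fluxes per cell
    P_plus = np.maximum(np.roll(A, 1), 0) - np.minimum(A, 0)
    P_minus = np.maximum(A, 0) - np.minimum(np.roll(A, 1), 0)
    Q_plus = (u_max - u_td) * dx / dt
    Q_minus = (u_td - u_min) * dx / dt
    R_plus = np.where(P_plus > 0, np.minimum(1.0, Q_plus / np.where(P_plus > 0, P_plus, 1.0)), 0.0)
    R_minus = np.where(P_minus > 0, np.minimum(1.0, Q_minus / np.where(P_minus > 0, P_minus, 1.0)), 0.0)

    # Limiting coefficient at interface i+1/2
    C = np.where(A >= 0,
                 np.minimum(np.roll(R_plus, -1), R_minus),
                 np.minimum(R_plus, np.roll(R_minus, -1)))

    return u_td - dt / dx * (C * A - np.roll(C * A, 1))
```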
Portus, Marc R; Lloyd, David G; Elliott, Bruce C; Trama, Neil L
2011-05-01
The measurement of lumbar spine motion is an important step for injury prevention research during complex and high impact activities, such as cricket fast bowling or javelin throwing. This study examined the performance of two designs of a lumbar rig, previously used in gait research, during a controlled high impact bench jump task. An 8-camera retro-reflective motion analysis system was used to track the lumbar rig. Eleven athletes completed the task wearing the two different lumbar rig designs. Flexion-extension data were analyzed using a fast Fourier transform to assess the signal power of these data during the impact phase of the jump. The lumbar rig featuring an increased and pliable base of support recorded moderately less signal power through the 0-60 Hz spectrum, with significantly lower magnitudes in the 0-5 Hz (p = .039), 5-10 Hz (p = .005) and 10-20 Hz (p = .006) frequency bins. A lumbar rig of this design would seem likely to provide less noisy lumbar motion data during high impact tasks.
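A small sketch of the kind of spectral-power comparison described above; the sampling rate and frequency-band edges below are illustrative choices, not values taken from the study.

```python
import numpy as np

def band_power(signal, fs, bands=((0, 5), (5, 10), (10, 20))):
    """Sum of FFT power of `signal` (sampled at fs Hz) inside each frequency band."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {f"{lo}-{hi} Hz": power[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in bands}

# Example: compare two flexion-extension traces sampled at 250 Hz (illustrative rate)
# p_rigid = band_power(angle_rigid, fs=250)
# p_pliable = band_power(angle_pliable, fs=250)
```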
Colony patterning and collective hyphal growth of filamentous fungi
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2002-11-01
Colony morphology of wild and mutant strains of Aspergillus nidulans at various nutrient and agar levels was investigated. Two types of colony patterning were found for these strains. One type produced uniform colonies at all nutrient and agar levels tested, and the other exhibited morphological change into disordered ramified colonies at low nutrient levels. Both types showed highly condensed compact colonies at high nutrient levels on low agar media that was highly diffusive. Disordered colonies were found to develop with low hyphal extension rates at low nutrient levels. To understand basic pattern selection rules, a colony model with three parameters, i.e., the initial nutrient level and the step length of nutrient random walk as the external parameters, and the frequency of nutrient uptake as an internal parameter, was constructed. At low nutrient levels, with decreasing nutrient uptake frequency under diffusive conditions, the model colony exhibited onsets of disordered ramification. Further, in the growth process of A. nidulans, reduction of hyphal extension rate due to a population effect of hyphae was found when hyphae form three-dimensional dense colonies, as compared to the case in which hyphal growth was restricted into two-dimensional space. A hyphal population effect was introduced in the colony model. Thickening of colony periphery due to the population effect became distinctive as the nutrient diffusion effect was raised at high nutrient levels with low hyphal growth rate. It was considered that colony patterning and onset of disorder were strongly governed by the combination of nutrient diffusion and hyphal growth rate.
Li, Yu; Zhang, Xiangru; Yang, Mengting; Liu, Jiaqi; Li, Wanxin; Graham, Nigel J D; Li, Xiaoyan; Yang, Bo
2017-02-01
Chlorination is extensively applied for disinfecting sewage effluents, but it unintentionally generates disinfection byproducts (DBPs). Using seawater for toilet flushing introduces a high level of bromide into domestic sewage. Chlorination of sewage effluent rich in bromide causes the formation of brominated DBPs. Achieving a disinfection goal while reducing disinfectant consumption and operational costs, as well as diminishing adverse effects on aquatic organisms in the receiving water body, remains a challenge in sewage treatment. In this study, we have demonstrated that, with the same total chlorine dosage, a three-step chlorination (dosing chlorine by splitting it into three equal portions with a 5-min time interval for each portion) was significantly more efficient in disinfecting a primary saline sewage effluent than a one-step chlorination (dosing chlorine at one time). Compared to one-step chlorination, three-step chlorination enhanced the disinfection efficiency by up to a 0.73-log reduction of Escherichia coli. The overall DBP formation resulting from one-step and three-step chlorination was quantified by total organic halogen measurement. Compared to one-step chlorination, the DBP formation in three-step chlorination was decreased by up to 23.4%. The comparative toxicity of one-step and three-step chlorination was evaluated in terms of the development of embryo-larvae of the marine polychaete Platynereis dumerilii. The results revealed that the primary sewage effluent with three-step chlorination was less toxic than that with one-step chlorination, indicating that three-step chlorination could reduce the potential adverse effects of disinfected sewage effluents on aquatic organisms in the receiving marine water. Copyright © 2016 Elsevier Ltd. All rights reserved.
Individual Colorimetric Observer Model
Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent
2016-01-01
This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
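A sketch of the Monte Carlo idea only: each simulated observer is a random draw of the eight physiological deviations, which would then be applied to the CIE 2006 cone fundamentals. The parameter names and standard deviations below are placeholders, not the variabilities identified and fitted in the study.

```python
import numpy as np

# Illustrative (not fitted) standard deviations: percent deviations for the density terms,
# nanometres for the lambda_max shifts of the three cone photopigments.
PARAM_SD = {
    "lens_density_pct": 20.0,
    "macular_density_pct": 30.0,
    "L_optical_density_pct": 15.0,
    "M_optical_density_pct": 15.0,
    "S_optical_density_pct": 15.0,
    "L_shift_nm": 2.0,
    "M_shift_nm": 1.5,
    "S_shift_nm": 1.0,
}

def sample_observers(n, rng=np.random.default_rng(0)):
    """Draw n individual observers as deviations from the mean CIE 2006 observer."""
    return [{name: rng.normal(0.0, sd) for name, sd in PARAM_SD.items()}
            for _ in range(n)]

observers = sample_observers(1000)  # a simulated population for a colour-matching study
```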
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
Single-particle stochastic heat engine.
Rana, Shubhashis; Pal, P S; Saha, Arnab; Jayannavar, A M
2014-10-01
We have performed an extensive analysis of a single-particle stochastic heat engine constructed by manipulating a Brownian particle in a time-dependent harmonic potential. The cycle consists of two isothermal steps at different temperatures and two adiabatic steps similar to that of a Carnot engine. The engine shows qualitative differences in inertial and overdamped regimes. All the thermodynamic quantities, including efficiency, exhibit strong fluctuations in a time periodic steady state. The fluctuations of stochastic efficiency dominate over the mean values even in the quasistatic regime. Interestingly, our system acts as an engine provided the temperature difference between the two reservoirs is greater than a finite critical value which in turn depends on the cycle time and other system parameters. This is supported by our analytical results carried out in the quasistatic regime. Our system works more reliably as an engine for large cycle times. By studying various model systems, we observe that the operational characteristics are model dependent. Our results clearly rule out any universal relation between efficiency at maximum power and temperature of the baths. We have also verified fluctuation relations for heat engines in time periodic steady state.
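A minimal overdamped sketch of such an engine, under stated simplifications: a two-isotherm (Stirling-like) protocol rather than the Carnot-like cycle analysed in the paper, Euler-Maruyama integration, and illustrative parameter values. The stochastic work is accumulated from the stiffness changes of the trap.

```python
import numpy as np

def stirling_cycle_work(k_hot=(2.0, 1.0), k_cold=(1.0, 2.0), T_hot=2.0, T_cold=1.0,
                        gamma=1.0, kB=1.0, dt=1e-3, n_steps=5000,
                        rng=np.random.default_rng()):
    """Simulate one stochastic cycle of a Brownian particle in a harmonic trap.

    Overdamped Langevin dynamics: dx = -(k/gamma) x dt + sqrt(2 kB T / gamma) dW.
    Work on the particle from the stiffness variation: dW = 0.5 * x**2 * dk.
    Returns total work done ON the particle (negative means work is extracted).
    """
    x, work = 0.0, 0.0
    for (k_start, k_end), T in ((k_hot, T_hot), (k_cold, T_cold)):
        ks = np.linspace(k_start, k_end, n_steps)
        noise_amp = np.sqrt(2.0 * kB * T / gamma * dt)
        for i in range(n_steps - 1):
            x += -(ks[i] / gamma) * x * dt + noise_amp * rng.standard_normal()
            work += 0.5 * x**2 * (ks[i + 1] - ks[i])
    return work

# Cycle-to-cycle fluctuations dominate; average over many cycles to see the mean work.
print(np.mean([stirling_cycle_work() for _ in range(100)]))
```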
ERIC Educational Resources Information Center
Srisinghasongkram, Pornchada; Pruksananonda, Chandhita; Chonchaiya, Weerasak
2016-01-01
This study aimed to validate the use of two-step Modified Checklist for Autism in Toddlers (M-CHAT) screening adapted for a Thai population. Our participants included both high-risk children with language delay (N = 109) and low-risk children with typical development (N = 732). Compared with the critical scoring criteria, the total scoring method…
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
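A sketch of the final classification stage only: the segmentation and feature extraction are assumed to have already produced the three morphological features, and the synthetic feature values below are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: [mean cell intensity, boundary-intensity variance, boundary length]
rng = np.random.default_rng(0)
X_normal = rng.normal([100.0, 5.0, 180.0], [10.0, 2.0, 25.0], size=(200, 3))
X_apopt = rng.normal([70.0, 12.0, 120.0], [10.0, 3.0, 25.0], size=(200, 3))
X = np.vstack([X_normal, X_apopt])
y = np.array([0] * 200 + [1] * 200)   # 0 = normal, 1 = apoptotic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```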
Weidinger, G; Wolke, U; Köprunner, M; Klinger, M; Raz, E
1999-12-01
In many organisms, the primordial germ cells have to migrate from the position where they are specified towards the developing gonad where they generate gametes. Extensive studies of the migration of primordial germ cells in Drosophila, mouse, chick and Xenopus have identified somatic tissues important for this process and demonstrated a role for specific molecules in directing the cells towards their target. In zebrafish, a unique situation is found in that the primordial germ cells, as marked by expression of vasa mRNA, are specified in random positions relative to the future embryonic axis. Hence, the migrating cells have to navigate towards their destination from various starting positions that differ among individual embryos. Here, we present a detailed description of the migration of the primordial germ cells during the first 24 hours of wild-type zebrafish embryonic development. We define six distinct steps of migration bringing the primordial germ cells from their random positions before gastrulation to form two cell clusters on either side of the midline by the end of the first day of development. To obtain information on the origin of the positional cues provided to the germ cells by somatic tissues during their migration, we analyzed the migration pattern in mutants, including spadetail, swirl, chordino, floating head, cloche, knypek and no isthmus. In mutants with defects in axial structures, paraxial mesoderm or dorsoventral patterning, we find that certain steps of the migration process are specifically affected. We show that the paraxial mesoderm is important for providing proper anteroposterior information to the migrating primordial germ cells and that these cells can respond to changes in the global dorsoventral coordinates. In certain mutants, we observe accumulation of ectopic cells in different regions of the embryo. These ectopic cells can retain both morphological and molecular characteristics of primordial germ cells, suggesting that, in zebrafish at the early stages tested, the vasa-expressing cells are committed to the germ cell lineage.
NASA Technical Reports Server (NTRS)
1979-01-01
A plan for the production of two PEP flight systems is defined. The task's milestones are described. Provisions for the development and assembly of new ground support equipment required for both testing and launch operations are included.
Growing Community Capacity in Energy Development through Extension Education
ERIC Educational Resources Information Center
Romich, Eric; Bowen-Elzey, Nancy
2013-01-01
New energy policy, industry regulation, and market investment are influencing the development of renewable energy technologies, setting the stage for rural America to provide the energy of tomorrow. This article describes how Extension's renewable energy programming was implemented in two Ohio communities to engage elected officials and residents…
Design of Miniaturized Dual-Band Microstrip Antenna for WLAN Application
Yang, Jiachen; Wang, Huanling; Lv, Zhihan; Wang, Huihui
2016-01-01
Wireless local area network (WLAN) is a technology that combines computer network with wireless communication technology. The 2.4 GHz and 5 GHz frequency bands in the Industrial Scientific Medical (ISM) band can be used in the WLAN environment. Because of the development of wireless communication technology and the use of the frequency bands without the need for authorization, the application of WLAN is becoming more and more extensive. As the key part of the WLAN system, the antenna must also be adapted to the development of WLAN communication technology. This paper designs two new dual-frequency microstrip antennas with the use of electromagnetic simulation software—High Frequency Structure Simulator (HFSS). The two antennas adopt ordinary FR4 material as a dielectric substrate, with the advantages of low cost and small size. The first antenna adopts microstrip line feeding, and the antenna radiation patch is composed of a folded T-shaped radiating dipole which reduces the antenna size, and two symmetrical rectangular patches located on both sides of the T-shaped radiating patch. The second antenna is a microstrip patch antenna fed by coaxial line, and the size of the antenna is diminished by opening a stepped groove on the two edges of the patch and a folded slot inside the patch. Simulation experiments prove that the two designed antennas have a higher gain and a favourable transmission characteristic in the working frequency range, which is in accordance with the requirements of WLAN communication. PMID:27355954
Spruyt, Karen; Gozal, David
2010-01-01
Questionnaires are a useful and extensively used tool in clinical sleep medicine and in sleep research. The number of sleep questionnaires targeting the pediatric age range has tremendously increased in recent years, and with such explosion in the number of instruments, their heterogeneity has become all the more apparent. Here, we explore the theoretical and pragmatic processes required for instrument design and development, i.e., how any questionnaire, inventory, log, or diary should be created and evaluated, and also provide illustrative examples to further underline the potential pitfalls that are inherently embedded in every step of tool development. PMID:20952230
A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.
Kang, Xiaomin; Huang, Baoqi; Qi, Guodong
2018-01-19
Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, which is better than both several well-known counterparts and commercial products.
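A simplified sketch of the frequency-domain idea: the dominant frequency of the gyroscope magnitude within a plausible walking band decides whether the user is walking and feeds the step count. The window length, band limits and threshold are illustrative, not the values tuned in the paper.

```python
import numpy as np

def detect_walking(gyro_xyz, fs, band=(0.5, 3.0), power_thresh=0.2):
    """Return (is_walking, dominant_frequency_hz) for one window of 3D angular velocity.

    gyro_xyz : (N, 3) angular velocities; using the vector magnitude makes the
               result largely independent of how the phone is carried.
    """
    mag = np.linalg.norm(np.asarray(gyro_xyz, dtype=float), axis=1)
    mag -= mag.mean()
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(mag)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not np.any(in_band) or power.sum() == 0:
        return False, 0.0
    band_ratio = power[in_band].sum() / power.sum()
    dom_freq = freqs[in_band][np.argmax(power[in_band])]
    return band_ratio > power_thresh, dom_freq

# Step counting idea: accumulate dominant_frequency * window_duration over consecutive
# windows classified as walking (the paper updates the step frequency recursively).
```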
Subgroup conflicts? Try the psychodramatic "double triad method".
Verhofstadt-Denève, Leni M F
2012-04-01
The present article suggests the application of a psychodramatic action method for tackling subgroup conflicts in which the direct dialogue between representatives of two opposing subgroups is prepared step by step through an indirect dialogue strategy within two triads, a strategy known as the Double Triad Method (DTM). In order to achieve integration in the group as a whole, it is important that all the members of both subgroups participate actively during the entire process. The first part of the article briefly explores the theoretical background, with a special emphasis on the Phenomenological-Dialectical Personality Model (Phe-Di PModel). In the second part, the DTM procedure is systematically described through its five action stages, each accompanied with 1) a spatial representation of the consecutive actions, 2) some illustrative statements for each stage, and 3) a theoretical interpretation of the dialectically involved personality dimensions in both protagonists. The article concludes with a discussion and suggestions for more extensive applications of the DTM method, including the question of its relationships to Agazarian's functional subgrouping, psychodrama, and sociodrama.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the powers of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene-sets. The proposed methods have been implemented in an R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
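A one-sample sketch of the general idea, a maximum-type statistic whose critical value comes from a Gaussian parametric bootstrap; the actual procedures and theory in the article are considerably more refined.

```python
import numpy as np

def max_test(X, n_boot=2000, rng=np.random.default_rng(0)):
    """Test H0: E[X] = 0 for an (n, p) data matrix via a max-type statistic
    whose null distribution is approximated by a Gaussian parametric bootstrap."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mean = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    T_obs = np.max(np.abs(np.sqrt(n) * mean / sd))       # studentized max statistic

    R = np.corrcoef(X, rowvar=False)                      # plug-in correlation matrix
    Z = rng.multivariate_normal(np.zeros(p), R, size=n_boot, check_valid="ignore")
    T_boot = np.max(np.abs(Z), axis=1)                    # bootstrap copies of the statistic
    return T_obs, np.mean(T_boot >= T_obs)                # statistic and bootstrap p-value

# Example: 50 observations of a 200-dimensional vector under the null
rng = np.random.default_rng(1)
T, pval = max_test(rng.standard_normal((50, 200)))
print(T, pval)
```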
Multidomain proteins under force
NASA Astrophysics Data System (ADS)
Valle-Orero, Jessica; Andrés Rivas-Pardo, Jaime; Popa, Ionel
2017-04-01
Advancements in single-molecule force spectroscopy techniques such as atomic force microscopy and magnetic tweezers allow investigation of how domain folding under force can play a physiological role. Combining these techniques with protein engineering and HaloTag covalent attachment, we investigate similarities and differences between four model proteins: I10 and I91—two immunoglobulin-like domains from the muscle protein titin, and two α + β fold proteins—ubiquitin and protein L. These proteins show a different mechanical response and have unique extensions under force. Remarkably, when normalized to their contour length, the size of the unfolding and refolding steps as a function of force reduces to a single master curve. This curve can be described using standard models of polymer elasticity, explaining the entropic nature of the measured steps. We further validate our measurements with a simple energy landscape model, which combines protein folding with polymer physics and accounts for the complex nature of tandem domains under force. This model can become a useful tool to help in deciphering the complexity of multidomain proteins operating under force.
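A sketch of the polymer-elasticity normalization mentioned above, using the widely quoted Marko-Siggia worm-like-chain interpolation (the persistence length and contour-length increment below are illustrative): once extension is expressed per unit contour length, the predicted step size at a given force no longer depends on a domain's absolute size.

```python
import numpy as np
from scipy.optimize import brentq

kB_T = 4.114  # pN*nm at room temperature

def wlc_force(x_frac, persistence_nm=0.8):
    """Marko-Siggia interpolation: force [pN] at fractional extension x_frac = x / Lc."""
    return (kB_T / persistence_nm) * (0.25 / (1.0 - x_frac) ** 2 - 0.25 + x_frac)

def fractional_extension(force_pn, persistence_nm=0.8):
    """Numerically invert the interpolation to get x/Lc at a given force."""
    return brentq(lambda x: wlc_force(x, persistence_nm) - force_pn, 0.0, 1.0 - 1e-9)

# Unfolding step size of a domain at 50 pN, given its contour-length gain dLc:
dLc = 28.0  # nm, illustrative contour-length increment of one unfolded domain
print(dLc * fractional_extension(50.0))
```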
Deciphering the influence of the thermal processes on the early passive margins formation
NASA Astrophysics Data System (ADS)
Bousquet, Romain; Nalpas, Thierry; Ballard, Jean-François; Ringenbach, Jean-Claude; Chelalou, Roman; Clerc, Camille
2015-04-01
Many large-scale dynamic processes, from continental rifting to plate subduction, are intimately linked to metamorphic reactions. This close relation between geodynamic processes and metamorphic reactions is, in spite of appearances, still poorly understood. For example, during extension, rocks are exposed to substantial changes in temperature, pressure and stress. Meanwhile, less attention has been paid to other important aspects of the metamorphic processes: when reacting rocks expand and contract, density and volume changes are set up in the surrounding material. While several tectonic models have been proposed to explain the formation of extensional basins and passive margins (simple shear, detachment, mantle exhumation, etc.), a single thermal model (McKenzie, 1978) is used, almost as a dogma, to understand and model the formation and evolution of sedimentary basins. This model is based on the assumption that extension occurs only by pure shear and is instantaneous. Under this approach, sedimentary deposition occurs in two stages: i) a short step, 1 to 10 Ma, controlled by tectonics; ii) a longer step, at least 50 Ma, resulting from the thermal evolution of the lithosphere. However, most stratigraphic data indicate that this thermal model alone can hardly account for the documented vertical movements. The thermal evolution coupled with other tectonic models, and its consequences, has never been studied in detail, although the differences may be significant, and it is clear that the petrological changes associated with changing temperature conditions influence the evolution of relief. In addition, the relationship between basin formation and thermal evolution does not seem to be always the same. Sometimes the temperature rise precedes the tectonic extension by 50 to 100 Ma: in the Alps, a significant Permo-Triassic rise in the geothermal gradient was followed by a "cold" extension, leading to the opening of the Ligurian-Piedmont ocean from the Middle Jurassic. Other examples show that temperature changes are synchronous with basin formation: the Cretaceous North Pyrenean extensional basins, for example, clearly indicate that the "cooking" is contemporaneous with sediment deposition. In the light of new models, we discuss the consequences of the formation of LP-granulites during rifting on deformation and the subsidence processes.
The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet
ERIC Educational Resources Information Center
Hill, Paul; Rader, Heidi B.; Hino, Jeff
2012-01-01
For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…
Automatic detection of red lesions in digital color fundus photographs.
Niemeijer, Meindert; van Ginneken, Bram; Staal, Joes; Suttorp-Schulten, Maria S A; Abràmoff, Michael D
2005-05-01
The robust detection of red lesions in digital color fundus photographs is a critical step in the development of automated screening systems for diabetic retinopathy. In this paper, a novel red lesion detection method is presented based on a hybrid approach, combining prior works by Spencer et al. (1996) and Frame et al. (1998) with two important new contributions. The first contribution is a new red lesion candidate detection system based on pixel classification. Using this technique, vasculature and red lesions are separated from the background of the image. After removal of the connected vasculature the remaining objects are considered possible red lesions. Second, an extensive number of new features are added to those proposed by Spencer-Frame. The detected candidate objects are classified using all features and a k-nearest neighbor classifier. An extensive evaluation was performed on a test set composed of images representative of those normally found in a screening set. When determining whether an image contains red lesions the system achieves a sensitivity of 100% at a specificity of 87%. The method is compared with several different automatic systems and is shown to outperform them all. Performance is close to that of a human expert examining the images for the presence of red lesions.
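A sketch of the candidate-classification stage only: candidate detection and the feature set are assumed to be computed upstream, and the feature values below are synthetic placeholders rather than the features proposed in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Placeholder feature vectors for candidate objects (e.g. shape, intensity and contrast features)
X_train = rng.normal(size=(500, 12))
y_train = rng.integers(0, 2, size=500)        # 1 = true red lesion, 0 = spurious candidate

knn = KNeighborsClassifier(n_neighbors=11)
knn.fit(X_train, y_train)

X_new = rng.normal(size=(20, 12))             # candidates extracted from a new fundus image
lesion_probability = knn.predict_proba(X_new)[:, 1]
```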
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10(6) sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A two-step method for developing a control rod program for boiling water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taner, M.S.; Levine, S.H.; Hsiao, M.Y.
1992-01-01
This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the method gained cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
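A small illustration of the reconstruction ingredient mentioned above: the median of three values can be written with a minmod function, which is what keeps a monotonicity-constrained slope computation compact. This is a generic sketch, not the paper's full scheme.

```python
import numpy as np

def minmod(a, b):
    """Zero if a and b differ in sign, otherwise the argument of smaller magnitude."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def median3(a, b, c):
    """Median of three values, expressed through minmod: median = a + minmod(b - a, c - a)."""
    return a + minmod(b - a, c - a)

def limited_slopes(u):
    """Monotonicity-limited cell slopes for a 1D array of cell averages (periodic grid)."""
    du_left = u - np.roll(u, 1)
    du_right = np.roll(u, -1) - u
    return minmod(du_left, du_right)
```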
Appelbaum, Liat; Sosna, Jacob; Pearson, Robert; Perez, Sarah; Nissenbaum, Yizhak; Mertyna, Pawel; Libson, Eugene; Goldberg, S Nahum
2010-02-01
To prospectively optimize multistep algorithms for largest available multitined radiofrequency (RF) electrode system in ex vivo and in vivo tissues, to determine best energy parameters to achieve large predictable target sizes of coagulation, and to compare these algorithms with manufacturer's recommended algorithms. Institutional animal care and use committee approval was obtained for the in vivo portion of this study. Ablation (n = 473) was performed in ex vivo bovine liver; final tine extension was 5-7 cm. Variables in stepped-deployment RF algorithm were interrogated and included initial current ramping to 105 degrees C (1 degrees C/0.5-5.0 sec), the number of sequential tine extensions (2-7 cm), and duration of application (4-12 minutes) for final two to three tine extensions. Optimal parameters to achieve 5-7 cm of coagulation were compared with recommended algorithms. Optimal settings for 5- and 6-cm final tine extensions were confirmed in in vivo perfused bovine liver (n = 14). Multivariate analysis of variance and/or paired t tests were used. Mean RF ablation zones of 5.1 cm +/- 0.2 (standard deviation), 6.3 cm +/- 0.4, and 7 cm +/- 0.3 were achieved with 5-, 6-, and 7-cm final tine extensions in a mean of 19.5 min +/- 0.5, 27.9 min +/- 6, and 37.1 min +/- 2.3, respectively, at optimal settings. With these algorithms, size of ablation at 6- and 7-cm tine extension significantly increased from mean of 5.4 cm +/- 0.4 and 6.1 cm +/- 0.6 (manufacturer's algorithms) (P <.05, both comparisons); two recommended tine extensions were eliminated. In vivo confirmation produced mean diameter in specified time: 5.5 cm +/- 0.4 in 18.5 min +/- 0.5 (5-cm extensions) and 5.7 cm +/- 0.2 in 21.2 min +/- 0.6 (6-cm extensions). Large zones of coagulation of 5-7 cm can be created with optimized RF algorithms that help reduce number of tine extensions compared with manufacturer's recommendations. Such algorithms are likely to facilitate the utility of these devices for RF ablation of focal tumors in clinical practice. (c) RSNA, 2010.
The STEP (Safety and Toxicity of Excipients for Paediatrics) database: part 2 - the pilot version.
Salunke, Smita; Brandys, Barbara; Giacoia, George; Tuleu, Catherine
2013-11-30
The screening and careful selection of excipients is a critical step in paediatric formulation development, as certain excipients acceptable in adult formulations may not be appropriate for paediatric use. While there is extensive toxicity data that could help in better understanding and highlighting the gaps in toxicity studies, the data are often scattered around the information sources and saddled with incompatible data types and formats. This paper is the second in a series that presents the update on the Safety and Toxicity of Excipients for Paediatrics ("STEP") database being developed by the EU-US PFIs, and describes the architecture, data fields and functions of the database. The STEP database is a user-designed resource that compiles the safety and toxicity data of excipients that are scattered over various sources and presents them in one freely accessible source. Currently, the pilot database holds data from over 2000 references covering 10 excipients, presenting preclinical, clinical and regulatory information and toxicological reviews, with references and source links. The STEP database allows searching "FOR" excipients and "BY" excipients. This dual nature of the STEP database, in which toxicity and safety information can be searched in both directions, makes it unique among existing sources. If the pilot is successful, the aim is to increase the number of excipients in the existing database so that a database large enough to be of practical research use will be available. It is anticipated that this source will prove to be a useful platform for data management and data exchange of excipient safety information. Copyright © 2013 Elsevier B.V. All rights reserved.
Jank, Louise; Martins, Magda Targa; Arsand, Juliana Bazzan; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara
2015-01-01
This study describes the development and validation procedures for the scope extension of a method for the determination of β-lactam antibiotic residues (ampicillin, amoxicillin, penicillin G, penicillin V, oxacillin, cloxacillin, dicloxacillin, nafcillin, ceftiofur, cefquinome, cefoperazone, cephapirine, cefalexin and cephalonium) in bovine milk. Sample preparation was performed by liquid-liquid extraction (LLE) followed by two clean-up steps, including low temperature purification (LTP) and a solid phase dispersion clean-up. Extracts were analysed using a liquid chromatography-electrospray-tandem mass spectrometry system (LC-ESI-MS/MS). Chromatographic separation was performed on a C18 column, using methanol and water (both with 0.1% of formic acid) as mobile phase. Method validation was performed according to the criteria of Commission Decision 2002/657/EC. The main validation parameters, such as linearity, limit of detection, decision limit (CCα), detection capability (CCβ), accuracy and repeatability, were determined and shown to be adequate. The method was applied to real samples (more than 250), and two milk samples had levels above the maximum residue limits (MRLs) for cloxacillin (CLX) and cefapirin (CFAP).
Inhibition of the archaeal beta-class (Cab) and gamma-class (Cam) carbonic anhydrases.
Zimmerman, Sabrina A; Ferry, James G; Supuran, Claudiu T
2007-01-01
Five independently evolved classes (alpha-, beta-, gamma-, delta-, zeta-) of carbonic anhydrases facilitate the reversible hydration of carbon dioxide to bicarbonate of which the alpha-class is the most extensively studied. Detailed inhibition studies of the alpha-class with the two main classes of inhibitors, sulfonamides and metal-complexing anions, revealed many inhibitors that are used as therapeutic agents to prevent and treat many diseases. Recent inhibitor studies of the archaeal beta-class (Cab) and the gamma-class (Cam) carbonic anhydrases show differences in inhibition response to sulfonamides and metal-complexing anions, when compared to the alpha-class carbonic anhydrases. In addition, inhibition between Cab and Cam differ. These inhibition patterns are consistent with the idea that although, alpha-, beta-, and gamma-class carbonic anhydrases participate in the same two-step isomechanism, diverse active site architecture among these classes predicts variations on the catalytic mechanism. These inhibitor studies of the archaeal beta- and gamma-class carbonic anhydrases give insight to new applications of current day carbonic anhydrase inhibitors, as well as direct research to develop new compounds that may be specific inhibitors of prokaryotic carbonic anhydrases.
Complex igneous processes and the formation of the primitive lunar crustal rocks
NASA Technical Reports Server (NTRS)
Longhi, J.; Boudreau, A. E.
1979-01-01
Crystallization of a magma ocean with initial chondritic Ca/Al and REE ratios, such as proposed by Taylor and Bence (TB, 1975), is capable of producing the suite of primitive crustal rocks if the magma ocean underwent locally extensive assimilation and mixing in its upper layers as preliminary steps in the formation of an anorthositic crust. Lunar anorthosites were the earliest permanent crustal rocks to form, the result of multiple cycles of suspension and assimilation of plagioclase in liquids fractionating olivine and pyroxene. There may be two series of Mg-rich cumulate rocks: one which developed as a result of the equilibration of anorthositic crust with the magma ocean; the other which formed in the later stages of the magma ocean during an epoch of magma mixing and ilmenite crystallization. This second series may be related to KREEP genesis. It is noted that crystallization of the magma ocean had two components: a low pressure component which produced a highly fractionated and heterogeneous crust growing downward, and a high pressure component which filled in the ocean from the bottom up, mostly with olivine and low-Ca pyroxene.
On the use and computation of the Jordan canonical form in system theory
NASA Technical Reports Server (NTRS)
Sridhar, B.; Jordan, D.
1974-01-01
This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.
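For exact (symbolic) matrices, the Jordan canonical form can be obtained directly with SymPy, which is a convenient way to check a hand computation based on generalized eigenvectors; this is a usage sketch, not the elimination-based algorithm developed in the paper.

```python
from sympy import Matrix, pprint

# A defective matrix: eigenvalue 2 with algebraic multiplicity 3 but only two eigenvectors
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

P, J = A.jordan_form()   # A = P * J * P**-1
pprint(J)                # block diagonal: a 2x2 Jordan block and a 1x1 block for eigenvalue 2
assert A == P * J * P.inv()
```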
Depicting the logic of three evaluation theories.
Hansen, Mark; Alkin, Marvin C; Wallace, Tanner Lebaron
2013-06-01
Here, we describe the development of logic models depicting three theories of evaluation practice: Practical Participatory (Cousins & Whitmore, 1998), Values-engaged (Greene, 2005a, 2005b), and Emergent Realist (Mark et al., 1998). We begin with a discussion of evaluation theory and the particular theories that were chosen for our analysis. We then outline the steps involved in constructing the models. The theoretical prescriptions and claims represented here follow a logic model template developed at the University of Wisconsin-Extension (Taylor-Powell & Henert, 2008), which also closely aligns with Mark's (2008) framework for research on evaluation. Copyright © 2012 Elsevier Ltd. All rights reserved.
A novel adaptive, real-time algorithm to detect gait events from wearable sensors.
Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona
2015-05-01
A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
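A simplified offline sketch of the event definitions described above; the real algorithm is adaptive and runs step by step in real time, and the minimum inter-event spacing used here is an illustrative choice.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(flex_ext_angle, angular_velocity, fs, min_step_s=0.4):
    """Detect gait-event sample indices from shank signals of one leg.

    IC (Initial Contact): minima of the flexion/extension angle.
    EC (End Contact)    : minima of the sagittal angular velocity.
    MS (Mid-Swing)      : maxima of the sagittal angular velocity.
    """
    dist = max(1, int(min_step_s * fs))   # enforce a minimum time between successive events
    ic, _ = find_peaks(-np.asarray(flex_ext_angle, dtype=float), distance=dist)
    ec, _ = find_peaks(-np.asarray(angular_velocity, dtype=float), distance=dist)
    ms, _ = find_peaks(np.asarray(angular_velocity, dtype=float), distance=dist)
    return {"IC": ic, "EC": ec, "MS": ms}
```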
Parents Who Care: A Step-by-Step Guide for Families with Teens. [Video Included].
ERIC Educational Resources Information Center
Hawkins, J. David; Catalano, Richard F.
The world can be a risky place for teenagers in the 1990s. This guide and videotape provide skills for parents who want to help teens move successfully through the steps from childhood to adulthood. Based on extensive research, each of the seven units in the guide includes advice, strategies, and activities for both parents and teens to improve…
Designing and Introducing Ethical Dilemmas into Computer-Based Business Simulations
ERIC Educational Resources Information Center
Schumann, Paul L.; Scott, Timothy W.; Anderson, Philip H.
2006-01-01
This article makes two contributions to the teaching of business ethics literature. First, it describes the steps involved in developing effective ethical dilemmas to incorporate into a computer-based business simulation. Second, it illustrates these steps by presenting two ethical dilemmas that an instructor can incorporate into any business…
ERIC Educational Resources Information Center
Petty, John T.
1996-01-01
Presents an extension of the change in oxidation number method that is used for balancing skeletal redox reactions in aqueous solutions. Retains most of the simplicity of the change in oxidation number method but provides the additional step-by-step process necessary for the beginner to balance an equation. (JRH)
Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang
2018-05-14
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error in the presence of gravity disturbance and simulates the results. Meanwhile, this paper proposes a combined gravity compensation method using a simplified gravity model and a gravity database. This new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on the measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it into the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions; one is in flat terrain with mild gravity variation and the other is in complex terrain with fierce gravity variation. During the 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
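A sketch of step 1 only (subtracting a normal-gravity model). The constants are commonly quoted WGS84 values and the height term is a simple free-air correction, so this is an illustration rather than the simplified gravity model actually used in the paper.

```python
import numpy as np

def normal_gravity(lat_deg, height_m=0.0):
    """Somigliana normal gravity on the WGS84 ellipsoid plus a free-air height correction [m/s^2]."""
    gamma_e = 9.7803253359          # equatorial normal gravity (WGS84)
    k = 1.931852652458e-3           # Somigliana constant (WGS84)
    e2 = 6.69437999014e-3           # first eccentricity squared (WGS84)
    s2 = np.sin(np.radians(lat_deg)) ** 2
    gamma0 = gamma_e * (1.0 + k * s2) / np.sqrt(1.0 - e2 * s2)
    return gamma0 - 3.086e-6 * height_m   # free-air gradient of about 0.3086 mGal/m

# Step 1 of the compensation: remove the model value from the measured gravity,
# leaving (approximately) the gravity disturbance to be handled by step 2.
# disturbance = measured_gravity - normal_gravity(lat_deg, height_m)
```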
Lavery, Richard; Zakrzewska, Krystyna; Beveridge, David; Bishop, Thomas C.; Case, David A.; Cheatham, Thomas; Dixit, Surjit; Jayaram, B.; Lankas, Filip; Laughton, Charles; Maddocks, John H.; Michon, Alexis; Osman, Roman; Orozco, Modesto; Perez, Alberto; Singh, Tanya; Spackova, Nada; Sponer, Jiri
2010-01-01
It is well recognized that base sequence exerts a significant influence on the properties of DNA and plays a significant role in protein–DNA interactions vital for cellular processes. Understanding and predicting base sequence effects requires an extensive structural and dynamic dataset which is currently unavailable from experiment. A consortium of laboratories was consequently formed to obtain this information using molecular simulations. This article describes results providing information not only on all 10 unique base pair steps, but also on all possible nearest-neighbor effects on these steps. These results are derived from simulations of 50–100 ns on 39 different DNA oligomers in explicit solvent and using a physiological salt concentration. We demonstrate that the simulations are converged in terms of helical and backbone parameters. The results show that nearest-neighbor effects on base pair steps are very significant, implying that dinucleotide models are insufficient for predicting sequence-dependent behavior. Flanking base sequences can notably lead to base pair step parameters in dynamic equilibrium between two conformational sub-states. Although this study only provides limited data on next-nearest-neighbor effects, we suggest that such effects should be analyzed before attempting to predict the sequence-dependent behavior of DNA. PMID:19850719
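As a hedged illustration of the kind of nearest-neighbor analysis described here, a helical step parameter can be grouped by its tetranucleotide context; the column names and values below are invented for the example and are not the consortium's data:

```python
# Hedged sketch: averaging a step parameter (e.g. twist) by tetranucleotide context
# to expose nearest-neighbor effects on the central base pair step.
import pandas as pd

steps = pd.DataFrame({
    "tetranuc": ["AGCT", "TGCA", "AGCT", "CGCG"],     # flanking + central GC step
    "twist":    [33.1,   35.6,   32.4,   30.8],       # degrees, illustrative values
})
# Same central step, different flanking bases -> different mean twist.
print(steps.groupby("tetranuc")["twist"].agg(["mean", "std", "count"]))
```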
The Development of the strain in dementia care scale (SDCS).
Edberg, Anna-Karin; Anderson, Katrina; Orrung Wallin, Anneli; Bird, Mike
2015-12-01
Though many staff gain satisfaction from working with people with dementia in residential facilities, they also experience significant stress. This is a serious issue because it in turn can affect the quality of care. There is, however, a lack of instruments to measure staff strain in the dementia-specific residential care environment, and the aim of this study, accordingly, was to develop the "Strain in Dementia Care Scale." The instrument was developed in three steps. In the first step, items were derived from six focus group discussions with 35 nurses in the United Kingdom, Australia, and Sweden concerning their experience of strain. In the second step, a preliminary 64-item scale was distributed to 927 dementia care staff in Australia and Sweden, which, based on exploratory factor analysis, resulted in a 29-item scale. In the final step, the 29-item scale was distributed to a new sample of 346 staff in Sweden, and the results were subjected to confirmatory factor analysis. The final scale comprised 27 items grouped into a five-factor solution: frustrated empathy; difficulties understanding and interpreting; balancing competing needs; balancing emotional involvement; and lack of recognition. The scale can be used (a) as an outcome measurement in residential care intervention studies; (b) to help residential facilities identify interventions needed to improve staff well-being, and, by extension, the well-being of those they care for; and (c) to generally make more salient the critical issue of staff strain and the importance of ameliorating it.
Molecular simulations of lipid systems: Edge stability and structure in pure and mixed bilayers
NASA Astrophysics Data System (ADS)
Jiang, Yong
2007-12-01
Understanding the structural, mechanical and dynamical properties of lipid self-assembled systems is fundamental to understanding the behavior of the cell membrane. This thesis investigates the equilibrium properties of lipid systems with edge defects through various molecular simulation techniques. The overall goal of this study is to understand the free energy terms of the edges and to develop efficient methods to sample equilibrium distributions of mixed-lipid systems. In the first main part of the thesis, an atomistic molecular model is used to study a lipid ribbon, which has a free edge on each of its two sides. Details of the edge structures, such as area per lipid and tail torsional statistics, are presented. The line tension, calculated from the pressure tensor in the MD simulations, agrees well with results from other sources. To further investigate edge properties on longer time and length scales, we applied a coarse-grained force field to mixed lipid systems and interpreted the edge fluctuations in terms of free energy parameters such as line tension and bending modulus. We identified two regimes with quite different edge behavior: a high line tension regime and a low line tension regime. The last part of this thesis focuses on a hybrid molecular dynamics and configurational-bias Monte Carlo (MCMD) simulation method in which molecules can change their type by growing and shrinking the terminal acyl united-atom carbons. A two-step extension of the MCMD method has been developed to allow for a larger difference in the components' tail lengths. Results agreed well with previous one-step mutation results for a mixture with a length difference of four carbons. The current method can efficiently sample mixtures with a length difference of eight carbons, with a small portion of lipids of intermediate tail length. Preliminary results are obtained for "bicelle"-type (DMPC/DHPC) ribbons.
78 FR 40485 - Lung Cancer Patient-Focused Drug Development; Extension of Comment Period
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-05
... patients' perspectives for the two main types of lung cancer (small-cell and non-small cell lung cancer) on..., because of lung cancer? (Examples may include sleeping through the night, climbing stairs, household...] Lung Cancer Patient-Focused Drug Development; Extension of Comment Period AGENCY: Food and Drug...
Navier-Stokes Computations With One-Equation Turbulence Model for Flows Along Concave Wall Surfaces
NASA Technical Reports Server (NTRS)
Wang, Chi R.
2005-01-01
This report presents the use of a time-marching three-dimensional compressible Navier-Stokes equation numerical solver with a one-equation turbulence model to simulate the flow fields developed along concave wall surfaces without and with a downstream extension flat wall surface. The 3-D Navier-Stokes numerical solver came from the NASA Glenn-HT code. The one-equation turbulence model was derived from the Spalart and Allmaras model. The computational approach was first calibrated with the computations of the velocity and Reynolds shear stress profiles of a steady flat plate boundary layer flow. The computational approach was then used to simulate developing boundary layer flows along concave wall surfaces without and with a downstream extension wall. The author investigated the computational results of surface friction factors, near surface velocity components, near wall temperatures, and a turbulent shear stress component in terms of turbulence modeling, computational mesh configurations, inlet turbulence level, and time iteration step. The computational results were compared with existing measurements of skin friction factors, velocity components, and shear stresses of the developing boundary layer flows. With a fine computational mesh and a one-equation model, the computational approach could accurately predict the skin friction factors, near surface velocity and temperature, and shear stress within the flows. The computed velocity components and shear stresses also showed the effect of vortices on the velocity variations over a concave wall. The computed eddy viscosities at the near wall locations were also compared with the results from a two-equation turbulence modeling technique. The inlet turbulence length scale was found to have little effect on the eddy viscosities at locations near the concave wall surface. The eddy viscosities, from the one-equation and two-equation modeling, were comparable at most stream-wise stations. The present one-equation turbulence model is an effective approach for turbulence modeling in the near solid wall surface region of flow over a concave wall.
A digital repository with an extensible data model for biobanking and genomic analysis management.
Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi
2014-01-01
Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multidisciplinary collaborations with increasing data sharing among institutions. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples from over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can considerably improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
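A hedged illustration of the process/event hierarchy described above, written as a Python dictionary serialized to JSON; the field names are invented for the example and are not the repository's actual schema:

```python
# Illustrative process/event structure: one process (study) grouping sequential
# events (analysis steps), each with user-defined metadata and associated files.
import json

process = {
    "type": "process",
    "description": "research study",
    "events": [
        {
            "type": "event",
            "description": "analysis step",
            "metadata": {"operator": "user-defined", "instrument": "user-defined"},
            "files": ["raw_data_001.cel"],
        },
    ],
}
print(json.dumps(process, indent=2))
```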
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors. Error comparisons between the two methods are presented.
NASA HERMeS Hall Thruster Electrical Configuration Characterization
NASA Technical Reports Server (NTRS)
Peterson, Peter; Kamhawi, Hani; Huang, Wensheng; Yim, John; Herman, Daniel; Williams, George; Gilland, James; Hofer, Richard
2016-01-01
NASA's Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kW Technology Demonstration Unit-1 (TDU-1) Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight-ready propulsion system. Part of the technology maturation was to test the TDU-1 thruster in several ground-based electrical configurations to assess the thruster robustness and suitability for successful in-space operation. The ground-based electrical configuration testing has recently been demonstrated as an important step in understanding and assessing how a Hall thruster may operate differently in space compared to ground-based testing, and to determine the best configuration to conduct development and qualification testing. This presentation will cover the electrical configuration testing of the TDU-1 HERMeS Hall thruster in NASA Glenn Research Center's Vacuum Facility 5. The three electrical configurations examined are the thruster body tied to facility ground, thruster floating, and finally the thruster body electrically tied to cathode common. The TDU-1 HERMeS was configured with two different exit plane boundary conditions, dielectric and conducting, to examine the influence on the electrical configuration characterization.
NASA HERMeS Hall Thruster Electrical Configuration Characterization
NASA Technical Reports Server (NTRS)
Peterson, Peter Y.; Kamhawi, Hani; Huang, Wensheng; Yim, John; Herman, Daniel; Williams, George; Gilland, James; Hofer, Richard
2015-01-01
The NASA Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kW Technology Demonstration Unit-1 (TDU-1) Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight ready propulsion system. Part of the technology maturation was to test the TDU-1 thruster in several ground based electrical configurations to assess the thruster robustness and suitability to successful in-space operation. The ground based electrical configuration testing has recently been demonstrated as an important step in understanding and assessing how a Hall thruster may operate differently in-space compared to ground based testing, and to determine the best configuration to conduct development and qualification testing. This paper describes the electrical configuration testing of the HERMeS TDU-1 Hall thruster in NASA Glenn Research Center's Vacuum Facility 5. The three electrical configurations examined were 1) thruster body tied to facility ground, 2) thruster floating, and 3) thruster body electrically tied to cathode common. The HERMeS TDU-1 Hall thruster was also configured with two different exit plane boundary conditions, dielectric and conducting, to examine the influence on the electrical configuration characterization.
In silico design of ligand triggered RNA switches.
Findeiß, Sven; Hammer, Stefan; Wolfinger, Michael T; Kühnl, Felix; Flamm, Christoph; Hofacker, Ivo L
2018-04-13
This contribution sketches a workflow to design an RNA switch that is able to adopt two structural conformations in a ligand-dependent way. A well-characterized RNA aptamer, i.e., one with a known K_d and adaptive structural features, is an essential ingredient of the described design process. We exemplify the principles using the well-known theophylline aptamer throughout this work. The aptamer in its ligand-binding competent structure represents one structural conformation of the switch, while an alternative fold that disrupts the binding-competent structure forms the other conformation. To keep it simple, we do not incorporate any regulatory mechanism to control transcription or translation. We elucidate a commonly used design process by explicitly dissecting and explaining the necessary steps in detail. We developed a novel objective function which specifies the mechanism of this simple, ligand-triggered riboswitch and describe an extensive in silico analysis pipeline to evaluate important kinetic properties of the designed sequences. This protocol and the developed software can be easily extended or adapted to fit novel design scenarios and thus can serve as a template for future needs. Copyright © 2018. Published by Elsevier Inc.
Munasinghe, M Nalaka; Stephen, Craig; Abeynayake, Preeni; Abeygunawardena, Indra S
2010-08-12
Shrimp farming has great potential to diversify and secure income in rural Sri Lanka, but production has significantly declined in recent years due to civil conflicts, some unsustainable practices and devastating outbreaks of disease. We examined management practices affecting disease prevention and control in the Puttalam district to identify extension service outputs that could support sustainable development of Sri Lankan shrimp farming. A survey of 621 shrimp farms (603 operational and 18 nonoperational) was conducted within the Puttalam district over 42 weeks, comprising a series of three-day field visits from August 2008 to October 2009 and covering two consecutive shrimp crops. Fundamental deficits in disease control, management, and biosecurity practices were found. Farmers had knowledge of biosecurity, but a lack of financial resources was a major impediment to improved disease control. Smallholder farmers were disproportionately constrained in their ability to enact basic biosecurity practices due to their economic status. Basic breaches in biosecurity will keep disease as the rate-limiting step in this industry. Plans to support this industry must recognize the socioeconomic reality of rural Sri Lankan aquaculture.
Corrigan, Patrick W; Schomerus, Georg; Shuman, Valery; Kraus, Dana; Perlick, Debbie; Harnish, Autumn; Kulesza, Magdalena; Kane-Willis, Kathleen; Qin, Sang; Smelson, David
2017-01-01
Although advocates and providers identify stigma as a major factor in confounding the recovery of people with SUDs, research on addiction stigma is lacking, especially when compared to the substantial literature examining the stigma of mental illness. A comprehensive review of the stigma literature that yielded empirically supported concepts and methods from the mental health arena was contrasted with the much smaller and mostly descriptive findings from the addiction field. In Part I of this two-part paper (American Journal of Addictions, Vol 26, pages 59-66, this issue), constructs and methods from the mental health stigma literature were used to summarize research that seeks to understand the phenomena of addiction stigma. In Part II, we use this summary, as well as the extensive literature on mental illness stigma change, to outline a research program to develop and evaluate strategies meant to diminish the impact of public and self-stigma (e.g., education and contact). The paper ends with recommendations for next steps in addiction stigma research. (Am J Addict 2017;26:67-74). © 2016 American Academy of Addiction Psychiatry.
Multi-step Variable Height Photolithography for Valved Multilayer Microfluidic Devices.
Brower, Kara; White, Adam K; Fordyce, Polly M
2017-01-27
Microfluidic systems have enabled powerful new approaches to high-throughput biochemical and biological analysis. However, there remains a barrier to entry for non-specialists who would benefit greatly from the ability to develop their own microfluidic devices to address research questions. Particularly lacking has been the open dissemination of protocols related to photolithography, a key step in the development of a replica mold for the manufacture of polydimethylsiloxane (PDMS) devices. While the fabrication of single-height silicon masters has been explored extensively in the literature, fabrication steps for more complicated photolithography features necessary for many interesting device functionalities (such as feature rounding to make valve structures, multi-height single-mold patterning, or high-aspect-ratio definition) are often not explicitly outlined. Here, we provide a complete protocol for making multilayer microfluidic devices with valves and complex multi-height geometries, tunable for any application. These fabrication procedures are presented in the context of a microfluidic hydrogel bead synthesizer and demonstrate the production of droplets containing polyethylene glycol (PEG) diacrylate and a photoinitiator that can be polymerized into solid beads. This protocol and accompanying discussion provide a foundation of design principles and fabrication methods that enables development of a wide variety of microfluidic devices. The details included here should allow non-specialists to design and fabricate novel devices, thereby bringing a host of recently developed technologies to their most exciting applications in biological laboratories.
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is cluster formation, cluster selection and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
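A minimal sketch of the flat WkNN estimate itself, using a plain Euclidean RSS distance as the similarity metric (an assumption for illustration; the metrics and clustering compared in the paper are not reproduced here):

```python
# Minimal WkNN position estimate from RSS fingerprints (generic illustration).
import numpy as np

def wknn_position(rss, fp_rss, fp_xy, k=3, eps=1e-6):
    """rss: (n_ap,) measured RSS; fp_rss: (n_rp, n_ap); fp_xy: (n_rp, 2)."""
    dist = np.linalg.norm(fp_rss - rss, axis=1)   # Euclidean similarity metric
    idx = np.argsort(dist)[:k]                    # k most similar reference points
    w = 1.0 / (dist[idx] + eps)                   # inverse-distance weights
    return (w[:, None] * fp_xy[idx]).sum(axis=0) / w.sum()

fp_rss = np.array([[-60., -72.], [-65., -70.], [-80., -55.]])  # RP fingerprints (dBm)
fp_xy  = np.array([[0., 0.], [2., 0.], [0., 4.]])              # RP coordinates (m)
print(wknn_position(np.array([-63., -71.]), fp_rss, fp_xy, k=2))
```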
Background oriented schlieren in a density stratified fluid.
Verso, Lilly; Liberzon, Alex
2015-10-01
Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging has led to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. Imaging through the air-glass-water-glass-air multimedia path introduces an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.
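A hedged sketch of the raw background-displacement measurement that underlies background oriented schlieren, using dense optical flow from OpenCV; the file names are placeholders, and this is a generic illustration rather than the paper's two-step calibration and remapping procedure:

```python
# Generic BOS displacement estimation between a reference background image and
# an image of the same background seen through the stratified fluid.
import cv2

ref = cv2.imread("background_reference.png", cv2.IMREAD_GRAYSCALE)
img = cv2.imread("background_through_tank.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(ref, img, None, 0.5, 3, 31, 3, 5, 1.2, 0)
dx, dy = flow[..., 0], flow[..., 1]   # apparent pixel displacements of the pattern
# To first order, the vertical displacement dy is proportional to the vertical
# refractive-index (hence density) gradient integrated along the line of sight.
```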
Campos, Samuel K; Barry, Michael A
2004-11-01
There are extensive efforts to develop cell-targeting adenoviral vectors for gene therapy wherein endogenous cell-binding ligands are ablated and exogenous ligands are introduced by genetic means. Although current approaches can genetically manipulate the capsid genes of adenoviral vectors, these approaches can be time-consuming and require multiple steps to produce a modified viral genome. We present here the use of the bacteriophage lambda Red recombination system as a valuable tool for the easy and rapid construction of capsid-modified adenoviral genomes.
Freeform Optics: current challenges for future serial production
NASA Astrophysics Data System (ADS)
Schindler, C.; Köhler, T.; Roth, E.
2017-10-01
One of the major recent developments in the optics industry is the commercial manufacturing of freeform surfaces for mid- and high-performance optical systems. Removing the restriction to rotational symmetry enables completely new optical design solutions, but it also creates completely new challenges for the manufacturer. Adapting serial production from rotationally symmetric to freeform optics cannot be achieved simply by extending machine capabilities and software at every process step. New solutions for conventional optics production, or completely new process chains, are necessary.
Border preserving skin lesion segmentation
NASA Astrophysics Data System (ADS)
Kamali, Mostafa; Samei, Golnoosh
2008-03-01
Melanoma is a fatal cancer with a growing incidence rate. However, it can be cured if diagnosed at an early stage. The first step in detecting melanoma is the separation of the skin lesion from healthy skin. Particular features are associated with a malignant lesion, and their successful detection relies upon accurately extracted borders. We propose a two-step approach. First, we apply a K-means clustering method (in 3D RGB space) that extracts relatively accurate borders. In the second step, we perform an extra refining step to detect the fading area around some lesions as accurately as possible. Our method has a number of novelties. First, as the clustering method is applied directly to the 3D color space, we do not overlook the dependencies between the different color channels. In addition, it is capable of extracting fine lesion borders up to the pixel level in spite of the difficulties associated with fading areas around the lesion. Performing clustering in different color spaces reveals that the 3D RGB color space is preferred. Application of the proposed algorithm to an extensive database of skin lesions shows that its performance is superior to that of existing methods both in terms of accuracy and computational complexity.
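A hedged sketch of the first step only, clustering pixels directly in 3D RGB space with K-means; the file name, cluster count, and the darker-cluster heuristic are illustrative assumptions, not the authors' exact procedure:

```python
# K-means on raw RGB pixels to obtain a coarse lesion/skin partition.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

img = io.imread("lesion.png")[..., :3]                 # H x W x 3 RGB image
pixels = img.reshape(-1, 3).astype(float)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
# Assume the darker cluster corresponds to the lesion; build a binary mask.
means = [pixels[labels == c].mean() for c in (0, 1)]
mask = (labels == int(np.argmin(means))).reshape(img.shape[:2])
```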
Are Shorter Versions of the Positive and Negative Syndrome Scale (PANSS) Doable? A Critical Review.
Lindenmayer, Jean-Pierre
2017-12-01
The Positive and Negative Syndrome Scale (PANSS) is a well-established assessment tool for measuring symptom severity in schizophrenia. Researchers and clinicians have been interested in the development of a short version of the PANSS that could reduce the burden of its administration for patients and raters. The author presents a comprehensive overview of existing brief PANSS measures, including their strengths and limitations, and discusses some possible next steps. There are two available scales that offer a reduced number of original PANSS items: PANSS-14 and PANSS-19; and two shorter versions that include six items: Brief PANSS and PANSS-6. The PANSS-6 has been tested quite extensively in established trials and appears to demonstrate high sensitivity to change and an established cut off definition for remission. Prospective testing in new antipsychotic treatment trials is still required for these shorter versions of PANSS. In addition, they need to be supplemented with interview guides, as well as provide conversion formulas to translate total scores from the short PANSS versions to the PANSS-30. Both short versions of the PANSS are essentially designed to evaluate response to antipsychotic treatment. Future PANSS scale development needs to address specific measurement of treatment-responsive positive symptoms by including treatment-sensitive items, as well as illness-phase specific PANSS tools.
NASA Astrophysics Data System (ADS)
Shih, D.; Yeh, G.
2009-12-01
This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most commonly used approaches in numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thus significantly reduce computational time. Comparisons of the alternative approximations are examined in this poster. We adopt the model WASH123D to carry out the work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of this preliminary cooperation is also sketched in this poster.
van Aert, Robbie C M; Jackson, Dan
2018-04-26
A wide variety of estimators of the between-study variance are available in random-effects meta-analysis. Many, but not all, of these estimators are based on the method of moments. The DerSimonian-Laird estimator is widely used in applications, but the Paule-Mandel estimator is an alternative that is now recommended. Recently, DerSimonian and Kacker have developed two-step moment-based estimators of the between-study variance. We extend these two-step estimators so that multiple (more than two) steps are used. We establish the surprising result that the multistep estimator tends towards the Paule-Mandel estimator as the number of steps becomes large. Hence, the iterative scheme underlying our new multistep estimator provides a hitherto unknown relationship between two-step estimators and Paule-Mandel estimator. Our analysis suggests that two-step estimators are not necessarily distinct estimators in their own right; instead, they are quantities that are closely related to the usual iterative scheme that is used to calculate the Paule-Mandel estimate. The relationship that we establish between the multistep and Paule-Mandel estimator is another justification for the use of the latter estimator. Two-step and multistep estimators are perhaps best conceptualized as approximate Paule-Mandel estimators. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
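A minimal sketch of the multistep idea under the usual random-effects moment equations (illustrative data, not the authors' code): start from the DerSimonian-Laird value and repeatedly re-apply the method-of-moments update with weights 1/(v_i + tau^2); running many steps approaches the Paule-Mandel root.

```python
# Multistep moment estimator of the between-study variance tau^2.
import numpy as np

def moment_update(y, v, tau2):
    a = 1.0 / (v + tau2)                       # weights from the previous step
    ybar = np.sum(a * y) / np.sum(a)
    q = np.sum(a * (y - ybar) ** 2)
    num = q - (np.sum(a * v) - np.sum(a**2 * v) / np.sum(a))
    den = np.sum(a) - np.sum(a**2) / np.sum(a)
    return max(0.0, num / den)

def multistep_tau2(y, v, n_steps=50):
    tau2 = moment_update(y, v, 0.0)            # step 1 reduces to DerSimonian-Laird
    for _ in range(n_steps - 1):               # further steps tend to Paule-Mandel
        tau2 = moment_update(y, v, tau2)
    return tau2

y = np.array([0.10, 0.30, 0.35, 0.65])         # illustrative study effect sizes
v = np.array([0.04, 0.09, 0.05, 0.12])         # their within-study variances
print(multistep_tau2(y, v))
```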
Su, Xiaoye; Liang, Ruiting; Stolee, Jessica A
2018-06-05
Oligonucleotides are being researched and developed as potential drug candidates for the treatment of a broad spectrum of diseases. The characterization of antisense oligonucleotide (ASO) impurities caused by base mutations (e.g. deamination) which are closely related to the target ASO is a significant analytical challenge. Herein, we describe a novel one-step method, utilizing a strategy that combines fluorescence-ON detection with competitive hybridization, to achieve single base mutation quantitation in extensively modified synthetic ASOs. Given that this method is highly specific and sensitive (LoQ = 4 nM), we envision that it will find utility for screening other impurities as well as sequencing modified oligonucleotides. Copyright © 2018 Elsevier B.V. All rights reserved.
Major achievements of evidence-based traditional Chinese medicine in treating major diseases.
Chao, Jung; Dai, Yuntao; Verpoorte, Robert; Lam, Wing; Cheng, Yung-Chi; Pao, Li-Heng; Zhang, Wei; Chen, Shilin
2017-09-01
A long history of use and extensive documentation of the clinical practices of traditional Chinese medicine resulted in a considerable number of classical preparations, which are still widely used. This heritage of our ancestors provides a unique resource for drug discovery. Already, a number of important drugs have been developed from traditional medicines, which in fact form the core of Western pharmacotherapy. Therefore, this article discusses the differences in drug development between traditional medicine and Western medicine. Moreover, the article uses the discovery of artemisinin as an example that illustrates the "bedside-bench-bedside" approach to drug discovery to explain that the middle way for drug development is to take advantage of the best features of these two distinct systems and compensate for certain weaknesses in each. This article also summarizes evidence-based traditional medicines and discusses quality control and quality assessment, the crucial steps in botanical drug development. Herbgenomics may provide effective tools to clarify the molecular mechanism of traditional medicines in the botanical drug development. The totality-of-the-evidence approach used by the U.S. Food and Drug Administration for botanical products provides the directions on how to perform quality control from the field throughout the entire production process. Copyright © 2017 Elsevier Inc. All rights reserved.
Characterization of TALE genes expression during the first lineage segregation in mammalian embryos.
Sonnet, Wendy; Rezsöhazy, Rene; Donnay, Isabelle
2012-11-01
Three amino acid loop extension (TALE) homeodomain-containing transcription factors are generally recognized for their role in organogenesis and differentiation during embryogenesis. However, very little is known about the expression and function of Meis, Pbx, and Prep genes during early development. In order to determine whether TALE proteins could contribute to the early cell fate decisions in mammalian development, this study aimed to characterize in a systematic manner the pattern of expression of all Meis, Pbx, and Prep genes from the precompaction to blastocyst stage corresponding to the first step of cell differentiation in mammals. To reveal to what extent TALE genes expression at these early stages is a conserved feature among mammals, this study was performed in parallel in the bovine and mouse models. We demonstrated the transcription and translation of TALE genes, before gastrulation in the two species. At least one member of Meis, Pbx, and Prep subfamilies was found expressed at the RNA and protein levels but different patterns of expression were observed between genes and between species, suggesting specific gene regulations. Taken together, these results suggest a previously unexpected involvement of these factors during the early development in mammals. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Nelson, Andrew
2010-11-01
The efficient use of complex neutron scattering instruments is often hindered by the complex nature of their operating software. This complexity exists at each experimental step: data acquisition, reduction and analysis, with each step being as important as the previous. For example, whilst command line interfaces are powerful at automated acquisition they often reduce accessibility by novice users and sometimes reduce the efficiency for advanced users. One solution to this is the development of a graphical user interface which allows the user to operate the instrument by a simple and intuitive "push button" approach. This approach was taken by the Motofit software package for analysis of multiple contrast reflectometry data. Here we describe the extension of this package to cover the data acquisition and reduction steps for the Platypus time-of-flight neutron reflectometer. Consequently, the complete operation of an instrument is integrated into a single, easy to use, program, leading to efficient instrument usage.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes that are being developed at Glenn. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration threshold of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.
3D Image Analysis of Geomaterials using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Mulukutla, G.; Proussevitch, A.; Sahagian, D.
2009-05-01
Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lags because of a number of technical problems. The technique can potentially perform non-invasive testing of a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light before it reaches the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply commonly used segmentation techniques for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no addition of fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.
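A hedged sketch of one of the two named segmentation techniques, using scikit-image's morphological geodesic active contour on a single 2D optical section; the file name, parameter values, and circular initialization are illustrative choices rather than the authors' settings:

```python
# Geodesic-active-contour segmentation of one confocal section (illustrative).
from skimage import io, img_as_float
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

section = img_as_float(io.imread("confocal_section.png", as_gray=True))
edge_map = inverse_gaussian_gradient(section, alpha=100, sigma=3)   # edge indicator
mask = morphological_geodesic_active_contour(edge_map, 200,
                                             init_level_set="circle",
                                             smoothing=2, balloon=-1)
```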
Object-oriented software for evaluating measurement uncertainty
NASA Astrophysics Data System (ADS)
Hall, B. D.
2013-05-01
An earlier publication (Hall 2006 Metrologia 43 L56-61) introduced the notion of an uncertain number that can be used in data processing to represent quantity estimates with associated uncertainty. The approach can be automated, allowing data processing algorithms to be decomposed into convenient steps, so that complicated measurement procedures can be handled. This paper illustrates the uncertain-number approach using several simple measurement scenarios and two different software tools. One is an extension library for Microsoft Excel®. The other is a special-purpose calculator using the Python programming language.
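A hedged illustration of the uncertain-number idea using the third-party Python `uncertainties` package, which is an analogous tool rather than the Excel extension or the special-purpose calculator described in the paper; the divider values are invented:

```python
# Each quantity carries its standard uncertainty; arithmetic propagates it
# automatically step by step, so a measurement procedure can be decomposed
# into convenient stages.
from uncertainties import ufloat

v_in = ufloat(10.00, 0.05)     # volts
r1 = ufloat(1000.0, 5.0)       # ohms
r2 = ufloat(2200.0, 10.0)      # ohms
v_out = v_in * r2 / (r1 + r2)  # simple two-resistor voltage divider
print(v_out)                   # nominal value with propagated uncertainty
```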
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, H.M.; Eskin, N.A.M.; Pinsky, C.
Potato polyphenol oxidase activity was strongly and noncompetitively inhibited by the 'Perov mixture' of coal tar components and by pyridine alone, while phenol competitively inhibited the enzyme. These two inhibitors are structural components of the parkinsonogenic neurotoxin N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). By extension, dopamine and neuromelanin synthesis in the brain may be influenced by the inhibitory effects of such compounds upon the copper-dependent steps of tyrosine metabolism. The non-animal model used in this study may represent an alternative to the use of animal tissues in neurodegenerative disease research.
NASA Astrophysics Data System (ADS)
Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo
2016-12-01
Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM-retrofitted walls. The extensive characterization of the constituent materials allowed very sophisticated numerical modeling techniques to be adopted here. In particular, the results obtained by means of a micro-modeling strategy and a homogenization approach are compared. The first modeling technique is a three-dimensional heterogeneous micro-model in which the constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed by replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations presented here are performed using the commercial software Abaqus. Pros and cons of the two approaches are discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration threshold of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of the initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water-to-fuel mass ratio, and pressure.
UCP1 in adipose tissues: two steps to full browning.
Kalinovich, Anastasia V; de Jong, Jasper M A; Cannon, Barbara; Nedergaard, Jan
2017-03-01
The possibility that brown adipose tissue thermogenesis can be recruited in order to combat the development of obesity has led to a high interest in the identification of "browning agents", i.e. agents that increase the amount and activity of UCP1 in brown and brite/beige adipose tissues. However, functional analysis of the browning process yields confusingly different results when the analysis is performed in one of two alternative steps. Thus, in one of the steps, using cold acclimation as a potent model browning agent, we find that if the browning process is followed in mice initially housed at 21 °C (the most common procedure), there is only weak molecular evidence for increases in UCP1 gene expression or UCP1 protein abundance in classical brown adipose tissue; however, in brite/beige adipose depots, there are large increases, apparently associating functional browning with events only in the brite/beige tissues. Contrastingly, in another step, if the process is followed starting with mice initially housed at 30 °C (thermoneutrality for mice, thus similar to normal human conditions), large increases in UCP1 gene expression and UCP1 protein abundance are observed in the classical brown adipose tissue depots; there is then practically no observable UCP1 gene expression in brite/beige tissues. This apparent conundrum can be resolved when it is realized that the classical brown adipose tissue at 21 °C is already essentially fully differentiated and thus expands extensively through proliferation upon further browning induction, rather than by further enhancing cellular differentiation. When the limiting factor for thermogenesis, i.e. the total amount of UCP1 protein per depot, is analyzed, classical brown adipose tissue is by far the predominant site for the browning process, irrespective of which of the two steps is analyzed. There are to date no published data demonstrating that alternative browning agents would selectively promote brite/beige tissues versus classical brown tissue to a higher degree than does cold acclimation. Thus, to restrict investigations to examine adipose tissue depots where only a limited part of the adaptation process occurs (i.e. the brite/beige tissues) and to use initial conditions different from the thermoneutrality normally experienced by adult humans may seriously hamper the identification of therapeutically valid browning agents. The data presented here have therefore important implications for the analysis of the potential of browning agents and the nature of human brown adipose tissue. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B; Faulds, James E
Detailed geologic analyses have elucidated the kinematics, stress state, structural controls, and past surface activity of a blind geothermal system in Emerson Pass on the Pyramid Lake Paiute Reservation, western Nevada. The Emerson Pass area resides near the boundary of the Basin and Range and Walker Lane provinces and at the western edge of a broad left step or relay ramp between the north- to north-northeast-striking, west-dipping, Fox and Lake Range normal faults. The step-over provides a structurally favorable setting for deep circulation of meteoric fluids. Strata in the area are comprised of late Miocene to Pliocene sedimentary rocks and the middle Miocene Pyramid sequence mafic to intermediate volcanic rocks, all overlying Mesozoic metasedimentary and intrusive rocks. A thermal anomaly was discovered in Emerson Pass by use of 2-m temperature surveys deployed within a structurally favorable setting and proximal to surface features indicative of geothermal activity. The 2-m temperature surveys define a north-south elongate thermal anomaly that has a maximum recorded temperature of ~60°C and resides on a north- to north-northeast-striking normal fault. Although the active geothermal system is expressed solely as a soil heat anomaly, late Pleistocene travertine and tufa mounds, chalcedonic silica/calcite veins, and silica cemented Pleistocene lacustrine gravels indicate a robust geothermal system was active at the surface in the recent past. The geothermal system is controlled primarily by the broad step-over between two major range-bounding normal faults. In detail, the system likely results from enhanced permeability generated by the intersection of two oppositely dipping, southward terminating north- to north-northwest-striking (Fox Range fault) and north-northeast-striking normal faults. Structural complexity and spatial heterogeneities of the strain and stress field have developed in the step-over region, but kinematic data suggest a west-northwest-trending (~280° azimuth) extension direction. Therefore, geothermal activity in the Emerson Pass area is probably hosted on north- to north-northeast-striking normal faults.
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average quantum dot radius, and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra have shown that quantum dots grown using two-step annealing have a lower average radius, less size dispersion, a higher volume ratio and a larger decrease in bulk free energy compared with quantum dots grown conventionally.
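For context, a textbook effective-mass (Brus-type) expression is often used to relate the absorption edge of a dot of radius R to quantum confinement; this is a standard form and not necessarily the exact theoretical model used in the paper:

```latex
% Lowest excited-state energy of a spherical dot of radius R in the
% effective-mass approximation (Brus-type form; illustrative, not the
% paper's specific model).
\[
E(R) \approx E_g
  + \frac{\hbar^2 \pi^2}{2R^2}\left(\frac{1}{m_e^*}+\frac{1}{m_h^*}\right)
  - \frac{1.8\,e^2}{4\pi\varepsilon_0\,\varepsilon_r R}
\]
```

The confinement term grows as the radius shrinks, which is why a smaller average radius and narrower size dispersion shift and sharpen the absorption edge.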
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmin, G; Lee, C; Lee, C
Purpose: Recent advances in cancer treatments have greatly increased the likelihood of post-treatment patient survival. Secondary malignancies, however, have become a growing concern. Epidemiological studies determining secondary effects in radiotherapy patients require assessment of organ-specific dose both inside and outside the treatment field. An essential input for Monte Carlo modeling of particle transport is radiological images showing full patient anatomy. However, in retrospective studies it is typical to only have partial anatomy from CT scans used during treatment planning. In this study, we developed a multi-step method to extend such limited patient anatomy to full body anatomy for estimating dose to normal tissues located outside the CT scan coverage. Methods: The first step identified a phantom from a library of body size-dependent computational human phantoms by matching the height and weight of patients. Second, a Python algorithm matched the patient CT coverage location in relation to the whole body phantom. Third, an algorithm cut the whole body phantom and scaled it to match the size of the patient, and then merged the two anatomies into one whole body. We call this new approach Anatomically Predictive Extension (APE). Results: The APE method was examined by comparing the original chest-abdomen-pelvis CT images of the five patients with the APE phantoms developed from only the chest part of the CAP images and whole body phantoms. We achieved average percent differences of tissue volumes of 25.7%, 34.2%, 16.5%, 26.8%, and 31.6%, with an average of 27% across all patients. Conclusion: Our APE method extends the limited CT patient anatomy to whole body anatomy by using image processing and computational human phantoms. Our ongoing work includes evaluating the accuracy of these APE phantoms by comparing normal tissue doses in the APE phantoms and doses calculated for the original full CAP images under generic radiotherapy simulations. This research was supported by the NIH Intramural Research Program.
Estimation in a semi-Markov transformation model
Dabrowska, Dorota M.
2012-01-01
Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
Calculation of the recirculating compressible flow downstream of a sudden axisymmetric expansion
NASA Technical Reports Server (NTRS)
Vandromme, D.; Haminh, H.; Brunet, H.
1988-01-01
Significant progress has been made during the last five years to adapt conventional Navier-Stokes solvers for handling nonconservative equations. A primary type of application is the use of transport equation turbulence models, but the extension is also possible for describing the transport of nonpassive scalars, such as in reactive media. Among others, combustion and gas dissociation phenomena are topics needing a considerable research effort. An implicit two-step scheme based on the well-known MacCormack scheme has been modified to treat compressible turbulent flows on complex geometries. Implicit treatment of nonconservative equations (in the present case a two-equation turbulence model) opens the way to the coupled solution of thermochemical transport equations.
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
…wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and…
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian
2018-02-01
This work presents the coregistered, orthorectified and mosaicked high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of static and dynamic features of Mars. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.
Electronic Procedures for Medical Operations
NASA Technical Reports Server (NTRS)
2015-01-01
Electronic procedures are replacing text-based documents for recording the steps in performing medical operations aboard the International Space Station. S&K Aerospace, LLC, has developed a content-based electronic system, based on the Extensible Markup Language (XML) standard, that separates text from formatting standards and tags items contained in procedures so they can be recognized by other electronic systems. For example, to change a standard format, electronic procedures are updated in a single batch process, and the entire body of procedures will have the new format. Procedures can be quickly searched to determine which are affected by software and hardware changes. Similarly, procedures are easily shared with other electronic systems. The system also enables real-time data capture and automatic bookmarking of current procedure steps. In Phase II of the project, S&K Aerospace developed a Procedure Representation Language (PRL) and tools to support the creation and maintenance of electronic procedures for medical operations. The goal is to develop these tools in such a way that new advances can be inserted easily, leading to an eventual medical decision support system.
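As a toy illustration of the tagging idea, the sketch below marks up hardware references inside procedure steps and then queries which steps would be affected by a hardware change; the element names and content are hypothetical and do not reproduce the S&K Aerospace PRL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure fragment in the spirit of the XML approach above.
doc = """
<procedure id="med-023" title="Blood pressure measurement">
  <step n="1">Apply cuff to <hardware ref="bp-cuff"/> on the left arm.</step>
  <step n="2">Start the automated reading via <software ref="med-console"/>.</step>
</procedure>
"""

root = ET.fromstring(doc)
# Because items are tagged rather than buried in free text, a hardware change
# can be traced to the exact procedures and steps that reference it.
for step in root.iter("step"):
    hw = step.find("hardware")
    if hw is not None:
        print(root.get("id"), "step", step.get("n"), "references", hw.get("ref"))
```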
One-step colloidal synthesis of biocompatible water-soluble ZnS quantum dot/chitosan nanoconjugates
NASA Astrophysics Data System (ADS)
Ramanery, Fábio P.; Mansur, Alexandra AP; Mansur, Herman S.
2013-12-01
Quantum dots (QDs) are luminescent semiconductor nanocrystals with great potential for use in biomedical and environmental applications. Nonetheless, eliminating the potential cytotoxicity of QDs made with heavy metals is still a challenge facing the research community. Thus, the aim of this work was to develop a novel facile route for synthesising biocompatible QDs employing carbohydrate ligands in aqueous colloidal chemistry, with optical properties tuned by pH. The synthesis of ZnS QDs capped by chitosan was performed using a single-step aqueous colloidal process at room temperature. The nanobioconjugates were extensively characterised by several techniques, and the results demonstrated that the average size of the ZnS nanocrystals and their fluorescent properties were influenced by the pH during synthesis. Hence, novel 'cadmium-free' biofunctionalised systems based on ZnS QDs capped by chitosan were successfully developed, exhibiting luminescent activity that may be used in a large number of possible applications, such as probes in biology, medicine and pharmacy.
Knijnenburg, S.L.; Kremer, L.C.; Jaspers, M.W.M.
2015-01-01
Background: The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). Objective: To assess whether the implementation of a limited subset of the WDMHC framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. Methods: The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think-aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website by TA. Results: The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Conclusion: Based on the subset of the WDMHC framework we were able to deliver a website that closely matched the expectations of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations. PMID:26171083
Evaluating the Impact of Cooperative Extension Outreach via Twitter
ERIC Educational Resources Information Center
O'Neill, Barbara
2014-01-01
Twitter is increasingly being used by Extension educators as a teaching and program-marketing tool. It is not enough, however, to simply use Twitter to disseminate information. Steps must be taken to evaluate program impact with quantitative and qualitative data. This article described the following Twitter evaluation metrics: unique hashtags,…
Assessment of Farmer-Oriented Agricultural Extension Intervention in Iran
ERIC Educational Resources Information Center
Mohammadzadeh, Latif; Sadighi, Hassan; Abbasi, Enayat
2017-01-01
Purpose: The main purpose of this study was to determine the characteristics of farmer-oriented policies as regards the Iranian agricultural extension system. Methodology: To fulfill this objective, a Delphi technique was utilized. The study used a series of three steps, engaging a panel of experts on farmer-oriented policies of agricultural…
The Wiki as a Time-Saving Mentoring Tool
ERIC Educational Resources Information Center
Kinsey, Joanne; Carleo, Jenny; O'Neill, Barbara; Polanin, Nicholas
2010-01-01
An important step in the acculturation of new Extension professionals is a mentoring process that includes the input of experienced Extension colleagues. The wiki is a technology tool that can be useful by providing an online venue for Mentor Team communication and a place to share articles, curricula, and other critical tenure documents. This…
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separate groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
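The E-step/M-step alternation can be illustrated with a toy Gaussian-mixture stand-in in which noisy per-snapshot angle estimates from two interleaved sources are separated and each source's DOA is re-estimated; this is a schematic example of EM, not the paper's phase-measurement likelihood or its closed-form DOA formulae:

```python
import numpy as np

def em_two_sources(z, iters=50):
    """Toy EM over 1-D angle estimates from two interleaved sources."""
    mu = np.percentile(z, [25, 75]).astype(float)   # crude initial DOAs
    var = np.full(2, np.var(z) / 2.0)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each source for each measurement
        pdf = np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: ML update of the DOAs, noise variances and mixing weights
        nk = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(z)
    return mu, var, w

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(-20.0, 2.0, 200), rng.normal(35.0, 2.0, 200)])
doas, _, _ = em_two_sources(z)   # approximately [-20, 35]
```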
Late Paleogene rifting along the Malay Peninsula thickened crust
NASA Astrophysics Data System (ADS)
Sautter, Benjamin; Pubellier, Manuel; Jousselin, Pierre; Dattilo, Paolo; Kerdraon, Yannick; Choong, Chee Meng; Menier, David
2017-07-01
Sedimentary basins often develop above internal zones of former orogenic belts. We hereafter consider the Malay Peninsula (Western Sunda) as a crustal high separating two regions of stretched continental crust: the Andaman/Malacca basins on the western side and the Thai/Malay basins in the east. Several stages of rifting have been documented thanks to extensive geophysical exploration. However, little is known about the correlation between the offshore rifted basins and the onshore continental core. In this paper, we explore, through mapping and seismic data, how these structures reactivate pre-existing Mesozoic basement heterogeneities. The continental core appears to be relatively undeformed since the Triassic Indosinian orogeny. The thick crustal mega-horst is bounded by complex shear zones (the Ranong, Klong Marui and Main Range Batholith Fault Zones) initiated in the Late Cretaceous/Early Paleogene during thick-skinned transpressional deformation and later reactivated in the Late Paleogene. The extension is localized on the sides of this crustal backbone along a strip where earlier Late Cretaceous deformation is well expressed. To the west, the continental shelf is underlain by three major crustal steps which correspond to wide crustal-scale tilted blocks bounded by deep-rooted counter-regional normal faults (Mergui Basin). To the east, some pronounced rift systems are also present, with large tilted blocks (Western Thai, Songkhla and Chumphon basins) which may reflect large crustal boudins. In the central domain, the extension is limited to isolated narrow N-S half grabens developed on a thick continental crust, controlled by shallow-rooted normal faults, which often develop at the contact between granitoids and the host rocks. The outer limits of the areas affected by the crustal boudinage mark the boundary towards the large and deeper Andaman basin in the west and the Malay and Pattani basins in the east. At a regional scale, the rifted basins resemble N-S en-echelon structures along large NW-SE shear bands. The rifting is accommodated by large low-angle normal faults (LANF) running along crustal morphostructures such as broad folds and Mesozoic batholiths. The deep Andaman, Malay and Pattani basins seem to sit on weaker crust inherited from Gondwana-derived continental blocks (Burma, Sibumasu, and Indochina). The set of narrow elongated basins in the core of the region (Khien Sa, Krabi, and Malacca basins) underwent comparatively less extension.
Development of a satellite-based nowcasting system for surface solar radiation
NASA Astrophysics Data System (ADS)
Limbach, Sebastian; Hungershoefer, Katja; Müller, Richard; Trentmann, Jörg; Asmus, Jörg; Schömer, Elmar; Groß, André
2014-05-01
The goal of the RadNowCast project was the development of a tool-chain for satellite-based nowcasting of the all-sky global and direct surface solar radiation. One important application of such short-term forecasts is the computation of the expected energy yield of photovoltaic systems. This information is of great importance for an efficient balancing of power generation and consumption in large, decentralized power grids. Our nowcasting approach is based on an optical-flow analysis of a series of Meteosat SEVIRI satellite images. For this, we extended and combined several existing software tools and set up a series of benchmarks for determining the optimal forecasting parameters. The first step in our processing chain is the determination of the cloud albedo from the HRV (High Resolution Visible) satellite images using a Heliosat-type method. The actual nowcasting is then performed by a commercial software system in two steps: first, vector fields characterizing the movement of the clouds are derived from the cloud albedo data of the previous 15 min to 2 hours; next, these vector fields are combined with the most recent cloud albedo data in order to extrapolate the cloud albedo into the near future. In the last step of the processing, the Gnu-Magic software is used to calculate the global and direct solar radiation based on the forecasted cloud albedo data. For an evaluation of the strengths and weaknesses of our nowcasting system, we analyzed four different benchmarks, each of which covered different weather conditions. We compared the forecasted data with radiation data derived from the real satellite images of the corresponding time steps. The impact of different parameters on the cloud albedo nowcasting and the surface radiation computation has been analysed. Additionally, we could show that our cloud-albedo-based forecasts outperform forecasts based on the original HRV images. Possible future extensions are the incorporation of additional data sources, for example NWC-SAF high-resolution wind fields, in order to improve the quality of the atmospheric motion fields, and experiments with custom, optimized software components for the optical-flow estimation and the nowcasting.
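The extrapolation step can be sketched as a backward (semi-Lagrangian) advection of the latest cloud-albedo field along the derived motion vectors; the function below is a generic illustration with assumed inputs and is not the commercial nowcasting software used in the project:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extrapolate_albedo(albedo, u, v, steps=1):
    """Advect a 2-D cloud-albedo field with per-pixel displacements (u, v),
    given in pixels per forecast step (x and y components, respectively)."""
    ny, nx = albedo.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    field = albedo.copy()
    for _ in range(steps):
        # Backward lookup: the value arriving at (y, x) comes from (y - v, x - u).
        coords = np.array([yy - v, xx - u])
        field = map_coordinates(field, coords, order=1, mode="nearest")
    return field
```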
Classical BV Theories on Manifolds with Boundary
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Mnev, Pavel; Reshetikhin, Nicolai
2014-12-01
In this paper we extend the classical BV framework to gauge theories on spacetime manifolds with boundary. In particular, we connect the BV construction in the bulk with the BFV construction on the boundary and we develop its extension to strata of higher codimension in the case of manifolds with corners. We present several examples including electrodynamics, Yang-Mills theory and topological field theories coming from the AKSZ construction, in particular, the Chern-Simons theory, the BF theory, and the Poisson sigma model. This paper is the first step towards developing the perturbative quantization of such theories on manifolds with boundary in a way consistent with gluing.
QGene 4.0, an extensible Java QTL-analysis platform.
Joehanes, Roby; Nelson, James C
2008-12-01
Of the many statistical methods developed to date for quantitative trait locus (QTL) analysis, only a limited subset are available in public software allowing their exploration, comparison and practical application by researchers. We have developed QGene 4.0, a plug-in platform that allows execution and comparison of a variety of modern QTL-mapping methods and supports third-party addition of new ones. The software accommodates line-cross mating designs consisting of any arbitrary sequence of selfing, backcrossing, intercrossing and haploid-doubling steps; includes map, population, and trait simulators; and is scriptable. Software and documentation are available at http://coding.plantpath.ksu.edu/qgene. Source code is available on request.
Flight test trajectory control analysis
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1983-01-01
Recent extensions to optimal control theory applied to meaningful linear models with sufficiently flexible software tools provide powerful techniques for designing flight test trajectory controllers (FTTCs). This report describes the principal steps for systematic development of flight trajectory controllers, which can be summarized as planning, modeling, designing, and validating a trajectory controller. The techniques have been kept as general as possible and should apply to a wide range of problems where quantities must be computed and displayed to a pilot to improve pilot effectiveness and to reduce workload and fatigue. To illustrate the approach, a detailed trajectory guidance law is developed and demonstrated for the F-15 aircraft flying the zoom-and-pushover maneuver.
Tank Tests of Models of Flying Boat Hulls Having Longitudinal Steps
NASA Technical Reports Server (NTRS)
Allison, John M; Ward, Kenneth E
1936-01-01
Four models with longitudinal steps on the forebody were developed by modification of a model of a conventional hull and were tested in the National Advisory Committee for Aeronautics (NACA) tank. Models with longitudinal steps were found to have smaller resistance at high speed and greater resistance at low speed than the parent model, which had the same afterbody but a conventional V-section forebody. The models with a single longitudinal step had better performance at hump speed and as low a high-speed resistance except at very light loads. Spray strips at angles from 0 degrees to 45 degrees to the horizontal were fitted at the longitudinal steps and at the chine on one of the two models having two longitudinal steps. The resistance and the height of the spray were less with each of the spray strips than without; the most favorable angle was found to lie between 15 degrees and 30 degrees.
Productivity improvement through cycle time analysis
NASA Astrophysics Data System (ADS)
Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio
1996-09-01
A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (formerly AT&T) in Madrid, Spain. It is based on a comparison of the contribution of each process step in each technology with a target generated by a cycle time model. These targeted cycle times are obtained using capacity data for the machines processing those steps, queuing theory and theory of constraints (TOC) principles (buffers to protect the bottleneck and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-like analysis is done for the machine groups with major differences between their target cycle times and real values. Comparisons between the current values of the parameters that determine their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are made to detect the causes of excess contributions to the cycle time. Several user-friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interrelation problems among machines, procedures and direct labor. Performer is designed for a detailed, daily analysis of an isolated machine. The extensive use of this tool by the whole labor force has demonstrated impressive results in the elimination of multiple small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process/queue for different machines at the same time. ASAP is a powerful tool to analyze the product flow management and the assigned capacity for interdependent operations such as cleaning and oxidation/diffusion.
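A generic queueing sketch of how a per-step cycle-time target can be derived from capacity data: the Kingman (VUT) approximation gives the expected queue time as a function of utilization and variability, and the process time is added on top. The numbers and the choice of formula are illustrative and are not Lucent's actual cycle time model:

```python
def target_queue_time(te, u, ca2=1.0, cs2=1.0):
    """Kingman (VUT) approximation of mean queue time at a tool group.

    te  : effective process time per lot
    u   : utilization (0 <= u < 1)
    ca2 : squared coefficient of variation of arrivals
    cs2 : squared coefficient of variation of process times
    """
    return ((ca2 + cs2) / 2.0) * (u / (1.0 - u)) * te

te = 2.5                                         # hours, illustrative
target_ct = te + target_queue_time(te, u=0.85)   # per-step cycle-time target
```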
RNA Interference: Biology, Mechanism, and Applications
Agrawal, Neema; Dasaradhi, P. V. N.; Mohmmed, Asif; Malhotra, Pawan; Bhatnagar, Raj K.; Mukherjee, Sunil K.
2003-01-01
Double-stranded RNA-mediated interference (RNAi) is a simple and rapid method of silencing gene expression in a range of organisms. The silencing of a gene is a consequence of degradation of RNA into short RNAs that activate ribonucleases to target homologous mRNA. The resulting phenotypes either are identical to those of genetic null mutants or resemble an allelic series of mutants. Specific gene silencing has been shown to be related to two ancient processes, cosuppression in plants and quelling in fungi, and has also been associated with regulatory processes such as transposon silencing, antiviral defense mechanisms, gene regulation, and chromosomal modification. Extensive genetic and biochemical analysis revealed a two-step mechanism of RNAi-induced gene silencing. The first step involves degradation of dsRNA into small interfering RNAs (siRNAs), 21 to 25 nucleotides long, by an RNase III-like activity. In the second step, the siRNAs join an RNase complex, RISC (RNA-induced silencing complex), which acts on the cognate mRNA and degrades it. Several key components such as Dicer, RNA-dependent RNA polymerase, helicases, and dsRNA endonucleases have been identified in different organisms for their roles in RNAi. Some of these components also control the development of many organisms by processing many noncoding RNAs, called micro-RNAs. The biogenesis and function of micro-RNAs resemble RNAi activities to a large extent. Recent studies indicate that in the context of RNAi, the genome also undergoes alterations in the form of DNA methylation, heterochromatin formation, and programmed DNA elimination. As a result of these changes, the silencing effect of gene functions is exercised as tightly as possible. Because of its exquisite specificity and efficiency, RNAi is being considered as an important tool not only for functional genomics, but also for gene-specific therapeutic activities that target the mRNAs of disease-related genes. PMID:14665679
NASA Astrophysics Data System (ADS)
Yang, Zhang; Renping, Zhang; Weihua, Han; Jian, Liu; Xiang, Yang; Ying, Wang; Chian Chiu, Li; Fuhua, Yang
2009-11-01
A two-step exposure method to effectively reduce the proximity effect in fabricating nanometer-spaced nanopillars is presented. In this method, nanopillar patterns on poly-methylmethacrylate (PMMA) were partly cross-linked in the first-step exposure. After development, PMMA between nanopillar patterns was removed, and hence the proximity effect would not take place there in the subsequent exposure. In the second-step exposure, PMMA masks were completely cross-linked to achieve good resistance in inductively coupled plasma etching. Accurate pattern transfer of rows of nanopillars with spacing down to 40 nm was realized on a silicon-on-insulator substrate.
Development of a carbonate platform with potential for large discoveries - an example from Vietnam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayall, M.; Bent, A.; Dale, B.
1996-01-01
In offshore central and southern Vietnam a number of carbonate accumulations can be recognized. Platform carbonates form basin-wide units of carbonate characterized by strong, continuous parallel seismic reflectors. Facies are dominated by bioclastic wackestones with poor to moderate reservoir quality. On the more isolated highs, large buildups developed. These are typically 5-10 km across and 300 m thick. They unconformably overlie the platform carbonate facies, which are extensively karstified. In places these are pinnacles, typically 2-5 km across and 300 m+ thick, with chaotic or mounded internal seismic facies. The large carbonate buildups are characterized by steep-sided slopes with talus cones, reef-margin rims usually developed around only part of the buildup, and a prominent back-stepping geometry. Buildup interior facies form the main potential reservoirs. They are dominated by fine- to coarse-grained coralgal packstones. Fine-grained carbonates are associated with deeper water events, and multiple karst surfaces can also be identified. Reservoir quality is excellent, largely controlled by extensive dissolution and dolomitization believed to be related to the exposure events. Gas has been found in a number of reservoirs. Heterogeneities can be recognized which could potentially affect production. These include the extensive finer grained facies, cementation or open fissures associated with the karst surfaces, a more cemented reef rim, shallowing-upwards facies cycles and faults.
Diabetes mellitus: The epidemic of the century
Kharroubi, Akram T; Darwish, Hisham M
2015-01-01
The epidemic nature of diabetes mellitus in different regions is reviewed. The Middle East and North Africa region has the highest prevalence of diabetes in adults (10.9%), whereas the Western Pacific region has the highest number of adults diagnosed with diabetes and includes countries with the highest prevalence of diabetes (37.5%). Different classes of diabetes mellitus, type 1, type 2, gestational diabetes and other types of diabetes mellitus, are compared in terms of diagnostic criteria, etiology and genetics. The molecular genetics of diabetes received extensive attention in recent years from many prominent investigators and research groups in the biomedical field. A large array of mutations and single nucleotide polymorphisms in genes that play a role in the various steps and pathways involved in glucose metabolism and in the development, control and function of pancreatic cells at various levels are reviewed. The major advances in the molecular understanding of diabetes in relation to the different types of diabetes, in comparison to the previous understanding in this field, are briefly reviewed here. Despite the accumulation of extensive data at the molecular and cellular levels, the mechanisms of diabetes development and complications are still not fully understood. More extensive research is needed in this field, with the ultimate objective of improving diagnosis and therapy and minimizing the risk of chronic complications. PMID:26131326
The strengths and weaknesses of inverted pendulum models of human walking.
McGrath, Michael; Howard, David; Baker, Richard
2015-02-01
An investigation into the kinematic and kinetic predictions of two "inverted pendulum" (IP) models of gait was undertaken. The first model consisted of a single leg, with anthropometrically correct mass and moment of inertia, and a point mass at the hip representing the rest of the body. A second model incorporating the physiological extension of a head-arms-trunk (HAT) segment, held upright by an actuated hip moment, was developed for comparison. Simulations were performed, using both models, and quantitatively compared with empirical gait data. There was little difference between the two models' predictions of kinematics and ground reaction force (GRF). The models agreed well with empirical data through mid-stance (20-40% of the gait cycle) suggesting that IP models adequately simulate this phase (mean error less than one standard deviation). IP models are not cyclic, however, and cannot adequately simulate double support and step-to-step transition. This is because the forces under both legs augment each other during double support to increase the vertical GRF. The incorporation of an actuated hip joint was the most novel change and added a new dimension to the classic IP model. The hip moment curve produced was similar to those measured during experimental walking trials. As a result, it was interpreted that the primary role of the hip musculature in stance is to keep the HAT upright. Careful consideration of the differences between the models throws light on what the different terms within the GRF equation truly represent.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
2012-01-01
Recent progress in stem cell biology, notably cell fate conversion, calls for novel theoretical understanding of cell differentiation. The existing qualitative concept of Waddington's "epigenetic landscape" has attracted particular attention because it captures subsequent fate decision points, thus manifesting the hierarchical ("tree-like") nature of cell fate diversification. Here, we generalized a recent work and explored such a developmental landscape for a two-gene fate decision circuit by integrating the underlying probability landscapes with different parameters (corresponding to distinct developmental stages). The change of the entropy production rate along the parameter changes indicates which parameter changes can represent a normal developmental process and which cannot. The transdifferentiation paths over the landscape under certain conditions reveal the possibility of a direct and reversible phenotypic conversion. As the intensity of noise increases, we found that the landscape becomes flatter and the dominant paths straighter, implying the importance of biological noise-processing mechanisms in development and reprogramming. We further extended the landscape of the one-step fate decision to that for two-step decisions in central nervous system (CNS) differentiation. A minimal network and dynamic model for CNS differentiation, in which two three-gene motifs are coupled, was first constructed. We then implemented stochastic differential equation (SDE) simulations to validate the network and model. By integrating the two landscapes for the two switch gene pairs, we constructed the two-step developmental landscape for CNS differentiation. Our work provides new insights into cellular differentiation and important clues for better reprogramming strategies. PMID:23300518
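A minimal Euler-Maruyama sketch of the kind of symmetric two-gene self-activation / mutual-inhibition circuit that underlies such one-step fate-decision landscapes; the Hill-function form is the canonical motif, and all parameter values, the noise level and the initial condition are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_circuit(T=50.0, dt=0.01, a=1.0, b=1.0, k=1.0, S=0.5, n=4, sigma=0.05):
    """Stochastic simulation of two genes x, y that activate themselves and
    repress each other; noise lets trajectories settle into either fate."""
    steps = int(T / dt)
    x, y = 1.0, 1.0                  # start near the undecided (central) state
    traj = np.empty((steps, 2))
    for i in range(steps):
        fx = a * x**n / (S**n + x**n) + b * S**n / (S**n + y**n) - k * x
        fy = a * y**n / (S**n + y**n) + b * S**n / (S**n + x**n) - k * y
        x = max(x + fx * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0)
        y = max(y + fy * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0)
        traj[i] = (x, y)
    return traj

end_states = np.array([simulate_circuit()[-1] for _ in range(20)])  # x-high or y-high
```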
ERIC Educational Resources Information Center
Vance, Barbara
This paper suggests two steps in instructional design for early childhood that can be derived from a recent major paper on instructional strategy taxonomy. These steps, together with the instructional design variables involved in each step, are reviewed relative to current research in child development and early education. The variables reviewed…
Digital Learning Material for Student-Directed Model Building in Molecular Biology
ERIC Educational Resources Information Center
Aegerter-Wilmsen, Tinri; Coppens, Marjolijn; Janssen, Fred; Hartog, Rob; Bisseling, Ton
2005-01-01
The building of models to explain data and make predictions constitutes an important goal in molecular biology research. To give students the opportunity to practice such model building, two digital cases had previously been developed in which students are guided to build a model step by step. In this article, the development and initial…
Giant Steps: A Game to Enhance Semantic Development of Verbs.
ERIC Educational Resources Information Center
Entwisle, Doris R.; And Others
The game "Giant Steps" was described. It was designed to aid children in the semantic development of verbs. The purpose of the experimental evaluation was to determine whether playing the game actually did influence the associative structure of those verbs and adverbs that are "guessed" words in the game. Third graders from two classrooms in an…
STEPS: A NARRATIVE ACCOUNT OF A GABAPENTIN SEEDING TRIAL
Krumholz, Samuel D.; Egilman, David S.; Ross, Joseph S.
2012-01-01
Background Seeding trials, clinical studies conducted by pharmaceutical companies for marketing purposes, have rarely been described in detail. Methods We examined all documents relating to the clinical trial Study of Neurontin: Titrate to Effect, Profile of Safety (STEPS) produced during the Neurontin marketing, sales practices and product liability litigation, including company internal and external correspondence, reports, and presentations, as well as depositions elicited in legal proceedings of Harden Manufacturing v. Pfizer and Franklin v. Warner-Lambert, the majority of which were created between 1990 and 2009. Using a systematic search strategy, we identified and reviewed all documents related to the STEPS trial, in order to identify key themes related to the trial’s conduct and determine the extent of marketing involvement in its planning and implementation. Results Documents demonstrated that STEPS was a seeding trial posing as a legitimate scientific study. Documents consistently described the trial itself, not trial results, to be a marketing tactic in the company’s marketing plans. Documents demonstrated that several external sources questioned the validity of the study before execution, and that data quality during the study was often compromised. Furthermore, documents described company analyses examining the impact of participating as a STEPS investigator on rates and dosages of gabapentin prescribing, finding a positive association. None of these findings were reported in two published papers. Conclusions The STEPS trial was a seeding trial, used to promote gabapentin and increase prescribing among investigators, and marketing was extensively involved in its planning and implementation. PMID:21709111
Pang, Marco Y C; Yang, Jaynie F
2002-07-01
Humans can make smooth, continuous transitions in walking direction from forward to backward. Thus, the processing of sensory input must allow a similar continuum of possibilities. Hip extension and reduced load are two important conditions that control the transition from the stance to swing phase during forward stepping in human infants. The purpose of this study was to determine whether the same factors also regulate the initiation of the swing phase in other directions of stepping. Thirty-seven infants between the ages of 5 and 13 months were studied during supported forward and sideways stepping on a treadmill. Disturbances were elicited by placing a piece of cardboard under the foot and pulling the cardboard in different directions. In this way, the leg was displaced in a particular direction and simultaneously unloaded. We observed whether the swing phase was immediately initiated after the application of disturbances in various directions. Electromyography, vertical ground reaction forces, and hip motion in frontal and sagittal planes were recorded. The results showed that the most potent sensory input to initiate the swing phase depends on the direction of stepping. Although low load was always necessary to initiate swing for all directions of walking, the preferred hip position was always one directly opposite the direction of walking. The results indicated the presence of selective gating of sensory input from the legs as a function of the direction of stepping.
Subsystem real-time time dependent density functional theory.
Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele
2015-04-21
We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows one to partition a larger Kohn-Sham system into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: explicit excitation energy transfer in real time between subsystems, demonstrated for the case of the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.
Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Debbio, Luigi; Patella, Agostino; Pica, Claudio
2010-05-01
We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4^3 x 8 to 24^3 x 64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.
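For readers unfamiliar with the HMC building block, the sketch below shows the leapfrog-plus-Metropolis structure on a trivial one-dimensional Gaussian action; the actual lattice code evolves SU(2) gauge links and adjoint pseudofermion fields with an RHMC force, which this toy example does not attempt to reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, eps=0.2, n_leap=10):
    """One HMC update for the toy action S(x) = x**2 / 2."""
    grad = lambda q: q                       # dS/dx
    p = rng.normal()                         # refresh the conjugate momentum
    q, pn = x, p - 0.5 * eps * grad(x)       # leapfrog: initial half kick
    for _ in range(n_leap):
        q = q + eps * pn                     # full drift
        pn = pn - eps * grad(q)              # full kick
    pn = pn + 0.5 * eps * grad(q)            # undo half of the final kick
    dH = (q**2 - x**2) / 2 + (pn**2 - p**2) / 2
    return q if rng.random() < np.exp(-dH) else x   # Metropolis accept/reject

samples, x = [], 0.0
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
```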
[Causal analysis approaches in epidemiology].
Dumas, O; Siroux, V; Le Moual, N; Varraso, R
2014-02-01
Epidemiological research is mostly based on observational studies. Whether such studies can provide evidence of causation remains debated. Several causal analysis methods have been developed in epidemiology. This paper presents an overview of these methods: graphical models, path analysis and its extensions, and models based on the counterfactual approach, with a special emphasis on marginal structural models. Graphical approaches have been developed to allow synthetic representations of supposed causal relationships in a given problem. They serve as qualitative support in the study of causal relationships. The sufficient-component cause model has been developed to deal with the issue of multicausality raised by the emergence of chronic multifactorial diseases. Directed acyclic graphs are mostly used as a visual tool to identify possible sources of confounding in a study. Structural equation models, the main extension of path analysis, combine a system of equations and a path diagram representing a set of possible causal relationships. They allow quantifying direct and indirect effects in a general model in which several relationships can be tested simultaneously. Dynamic path analysis further takes into account the role of time. The counterfactual approach defines causality by comparing the observed event and the counterfactual event (the event that would have been observed if, contrary to fact, the subject had received a different exposure than the one actually received). This theoretical approach has revealed the limits of traditional methods in addressing some causality questions. In particular, in longitudinal studies with time-varying confounding, classical methods (regressions) may be biased. Marginal structural models have been developed to address this issue. In conclusion, "causal models", though developed partly independently, are based on equivalent logical foundations. A crucial step in the application of these models is the formulation of causal hypotheses, which underpins all subsequent methodological choices. Beyond this step, recently developed statistical analysis tools offer new possibilities to delineate complex relationships, in particular in life course epidemiology.
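As a concrete point-treatment illustration of the counterfactual machinery, the sketch below computes stabilized inverse-probability-of-treatment weights and a weighted mean difference, the estimation device behind marginal structural models; real MSMs handle time-varying exposure and confounding with per-visit weights, which this simplified example omits, and all variable names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_mean_difference(L, A, Y):
    """L: (n, p) confounders, A: (n,) binary exposure, Y: (n,) outcome.
    Returns the weighted difference in mean outcomes under stabilized
    weights sw = P(A = a) / P(A = a | L)."""
    ps = LogisticRegression(max_iter=1000).fit(L, A).predict_proba(L)[:, 1]
    p_a = A.mean()
    sw = np.where(A == 1, p_a / ps, (1 - p_a) / (1 - ps))
    mu1 = np.average(Y[A == 1], weights=sw[A == 1])
    mu0 = np.average(Y[A == 0], weights=sw[A == 0])
    return mu1 - mu0
```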
Kim, J H; Ferziger, R; Kawaloff, H B; Sands, D Z; Safran, C; Slack, W V
2001-01-01
Even the most extensive hospital information system cannot support all the complex and ever-changing demands associated with a clinical database, such as providing department or personal data forms, and rating scales. Well-designed clinical dialogue programs may facilitate direct interaction of patients with their medical records. Incorporation of extensive and loosely structured clinical data into an existing medical record system is an essential step towards a comprehensive clinical information system, and can best be achieved when the practitioner and the patient directly enter the contents. We have developed a rapid prototyping and clinical conversational system that complements the electronic medical record system, with its generic data structure and standard communication interfaces based on Web technology. We believe our approach can enhance collaboration between consumer-oriented and provider-oriented information systems.
Jung, Hungu; Yamasaki, Masahiro
2016-12-08
Reduced lower extremity range of motion (ROM) and muscle strength are related to functional disability in older adults who cannot perform one or more activities of daily living (ADL) independently. The purpose of this study was to determine which of seven lower extremity ROMs and two muscle strengths play dominant roles in the physical performance of community-dwelling older women. Ninety-five community-dwelling older women (mean age ± SD, 70.7 ± 4.7 years; age range, 65-83 years) were enrolled in this study. Seven lower extremity ROMs (hip flexion, hip extension, knee flexion, internal and external hip rotation, ankle dorsiflexion, and ankle plantar flexion) and two muscle strengths (knee extension and flexion) were measured. Physical performance tests, including the functional reach test (FRT), 5 m gait test, four square step test (FSST), timed up and go test (TUGT), and five times sit-to-stand test (FTSST), were performed. Stepwise regression models for each of the physical performance tests revealed that hip extension ROM and knee flexion strength were important explanatory variables for FRT, FSST, and FTSST. Furthermore, ankle plantar flexion ROM and knee extension strength were significant explanatory variables for the 5 m gait test and TUGT. However, ankle dorsiflexion ROM was a significant explanatory variable for FRT alone. The amount of variance explained by stepwise multiple regression for the five physical performance tests ranged from 25% (FSST) to 47% (TUGT). Hip extension, ankle dorsiflexion, and ankle plantar flexion ROMs, as well as knee extension and flexion strengths, may play primary roles in the physical performance of community-dwelling older women. Further studies should assess whether specific intervention programs targeting older women may achieve improvements in lower extremity ROM and muscle strength, and thereby play an important role in the prevention of dependence on daily activities and loss of physical function, particularly focusing on hip extension, ankle dorsiflexion, and ankle plantar flexion ROMs as well as knee extension and flexion strength.
Eijsvogel, Michiel M; Wiegersma, Sytske; Randerath, Winfried; Verbraecken, Johan; Wegter-Hilbers, Esther; van der Palen, Job
2016-04-15
To develop and evaluate a screening questionnaire and a two-step screening strategy for obstructive sleep apnea syndrome (OSAS) in healthy workers. This is a cross-sectional study of 1,861 employees, comprising healthy blue- and white-collar workers at two representative plants of a worldwide consumer electronics company in the Netherlands, who were approached to participate. Employees were invited to complete various sleep questionnaires and to undergo separate single nasal flow recording and home polysomnography on two separate nights. Of the 1,861 employees, 249 provided informed consent, and complete nasal flow and polysomnography data were available for 176 (70.7%). OSAS was diagnosed in 65 (36.9%). A combination of age, absence of insomnia, witnessed breathing stops, and three-way scoring of the Berlin and STOPBANG questionnaires best predicted OSAS. Factor analysis identified a six-factor structure for the resulting new questionnaire: snoring, snoring severity, tiredness, witnessed apneas, sleep quality, and daytime well-being. Subsequently, some questions were removed, and the remaining questions were used to construct a new questionnaire. A scoring algorithm, computing individual probabilities of OSAS as high, intermediate, or low risk, was developed. Subsequently, the intermediate-risk group was split into low and high probability for OSAS, based on nasal flow recording. This two-step approach showed a sensitivity of 63.1% and a specificity of 90.1%. Specificity is important for low-prevalence populations. A two-step screening strategy with a new questionnaire and subsequent nasal flow recording is a promising way to screen for OSAS in a healthy worker population. Development and validation of a screening instrument for obstructive sleep apnea syndrome in healthy workers, Netherlands Trial Register (www.trailregister.nl), number: NTR2675.
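The two-step triage logic can be summarized as in the sketch below; every cut-off value is a hypothetical placeholder, since the published algorithm computes individual OSAS probabilities from the questionnaire factors rather than from a single score:

```python
def osas_two_step(questionnaire_score, ahi_from_flow=None,
                  low_cut=2, high_cut=6, ahi_cut=15):
    """Step 1: the questionnaire assigns low / intermediate / high risk.
    Step 2: only intermediate-risk subjects undergo nasal flow recording,
    whose apnea-hypopnea index (AHI) reclassifies them as low or high."""
    if questionnaire_score < low_cut:
        return "low risk"
    if questionnaire_score >= high_cut:
        return "high risk: refer for diagnostic work-up"
    if ahi_from_flow is None:
        return "intermediate risk: perform nasal flow recording"
    return ("high risk: refer for diagnostic work-up"
            if ahi_from_flow >= ahi_cut else "low risk")
```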
Finite Memory Walk and Its Application to Small-World Network
NASA Astrophysics Data System (ADS)
Oshima, Hiraku; Odagaki, Takashi
2012-07-01
In order to investigate the effects of cycles on dynamical processes on both regular lattices and complex networks, we introduce the finite memory walk (FMW) as an extension of the simple random walk (SRW), in which a walker is prohibited from moving to sites visited during the m steps just before the current position. This walk interpolates between the simple random walk (SRW), which has no memory (m = 0), and the self-avoiding walk (SAW), which has an infinite memory (m = ∞). We investigate the FMW on regular lattices and clarify the fundamental characteristics of the walk. We find that (1) the mean-square displacement (MSD) of the FMW shows a crossover from the SAW at short time steps to the SRW at long time steps, the crossover time is approximately equal to the number of remembered steps, and the MSD can be rescaled in terms of the time step and the size of the memory; (2) the mean first-return time (MFRT) of the FMW changes significantly at the number of remembered steps corresponding to the size of the smallest cycle in the lattice, where "smallest" means the shortest cycle in the network; (3) the relaxation time of the first-return time distribution (FRTD) decreases as the number of cycles increases. We also investigate the FMW on Watts-Strogatz networks, which can generate small-world networks, and show that the clustering coefficient of the Watts-Strogatz network is strongly related to the MFRT of the FMW that can remember two steps.
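A minimal implementation of the finite memory walk on the square lattice follows; m = 0 reproduces the SRW and larger m approaches SAW-like behaviour at short times. How a trapped walker (all neighbours in memory) is handled is not specified in the abstract, so terminating the walk there is an assumption of this sketch:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fmw(n_steps, m):
    """Finite memory walk: never revisit any of the last m sites."""
    pos = (0, 0)
    memory = deque(maxlen=m)          # sites visited during the last m steps
    path = [pos]
    for _ in range(n_steps):
        allowed = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                   if (pos[0] + dx, pos[1] + dy) not in memory]
        if not allowed:               # trapped: stop (assumed handling)
            break
        memory.append(pos)
        pos = allowed[rng.integers(len(allowed))]
        path.append(pos)
    return np.array(path)

# Crude check of the SAW-to-SRW crossover: mean-square end-to-end distance
walks = [fmw(200, m=8) for _ in range(200)]
msd = np.mean([float(w[-1, 0]) ** 2 + float(w[-1, 1]) ** 2 for w in walks])
```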
Borsari, Brian; Hustad, John T.P.; Mastroleo, Nadine R.; Tevyaw, Tracy O’Leary; Barnett, Nancy P.; Kahler, Christopher W.; Short, Erica Eaton; Monti, Peter M.
2012-01-01
Objective: Over the past two decades, colleges and universities have seen a large increase in the number of students referred to the administration for alcohol policy violations. However, a substantial portion of mandated students may not require extensive treatment. Stepped care may maximize treatment efficiency and greatly reduce the demands on campus alcohol programs. Method: Participants in the study (N = 598) were college students mandated to attend an alcohol program following a campus-based alcohol citation. All participants received Step 1: a 15-minute Brief Advice session that included the provision of a booklet containing advice to reduce drinking. Participants were assessed six weeks after receiving the Brief Advice, and those who continued to exhibit risky alcohol use (n = 405) were randomized to Step 2, a 60-90 minute brief motivational intervention (BMI) (n = 211) or an assessment-only control (n = 194). Follow-up assessments were conducted 3, 6, and 9 months after Step 2. Results: Participants who received a BMI significantly reduced the number of alcohol-related problems compared to those who received assessment only, despite no significant group differences in alcohol use. In addition, low-risk drinkers (n = 102; who reported low alcohol use and related harms at the 6-week follow-up and were not randomized to stepped care) showed a stable alcohol use pattern throughout the follow-up period, indicating that they required no additional intervention. Conclusion: Stepped care is an efficient and cost-effective method to reduce harms associated with alcohol use by mandated students. PMID:22924334
GPU accelerated simulations of three-dimensional flow of power-law fluids in a driven cube
NASA Astrophysics Data System (ADS)
Jin, K.; Vanka, S. P.; Agarwal, R. K.; Thomas, B. G.
2017-01-01
Newtonian fluid flow in two- and three-dimensional cavities with a moving wall has been studied extensively in a number of previous works. However, relatively few studies have considered the motion of non-Newtonian fluids such as shear-thinning and shear-thickening power-law fluids. In this paper, we have simulated the three-dimensional, non-Newtonian flow of a power-law fluid in a cubic cavity driven by shear from the top wall. We have used an in-house developed fractional-step code, implemented on a Graphics Processing Unit. Three Reynolds numbers have been studied, with the power-law index set to 0.5, 1.0 and 1.5. The flow patterns, viscosity distributions and velocity profiles are presented for Reynolds numbers of 100, 400 and 1000. All three Reynolds numbers are found to yield steady-state flows. Tabulated values of velocity are given for the nine cases studied, including the Newtonian cases.
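For reference, the constitutive relation for such power-law (Ostwald-de Waele) fluids gives the apparent viscosity as mu = K * |shear rate|**(n - 1), so n < 1 is shear-thinning and n > 1 is shear-thickening. The short sketch below uses an assumed low-shear cutoff, and the constants are illustrative rather than the values used in the simulations:

```python
import numpy as np

def apparent_viscosity(gamma_dot, K=1.0, n=0.5, gmin=1e-6):
    """Apparent viscosity of a power-law fluid, mu = K * |gamma_dot|**(n - 1).
    The cutoff gmin avoids the singularity at zero shear rate."""
    g = np.maximum(np.abs(gamma_dot), gmin)
    return K * g ** (n - 1.0)

shear_rates = np.array([0.1, 1.0, 10.0])
print(apparent_viscosity(shear_rates, n=0.5))   # decreases with shear: thinning
print(apparent_viscosity(shear_rates, n=1.5))   # increases with shear: thickening
```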
NASA Astrophysics Data System (ADS)
Arefinia, Zahra; Orouji, Ali A.
2009-02-01
The concept of the dual-material gate (DMG) is applied to the carbon nanotube field-effect transistor (CNTFET) with doped source and drain extensions, and the features exhibited by the resulting new structure, i.e., the DMG-CNTFET, have been examined for the first time by developing a two-dimensional (2D) full quantum simulation. The simulations have been done by the self-consistent solution of the 2D Poisson-Schrödinger equations within the nonequilibrium Green's function (NEGF) formalism. The results show that the DMG-CNTFET significantly decreases the leakage current and drain conductance and increases the on-off current ratio and voltage gain compared with its single-material-gate CNTFET counterpart. Short-channel effects in this structure are suppressed because of the perceivable step in the surface potential profile, which screens the drain potential. Moreover, these unique features can be controlled by engineering the work function and length of the gate metals. Therefore, this work provides an incentive for further experimental exploration.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
NASA Astrophysics Data System (ADS)
Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.
2016-06-01
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
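The rate-based enlargement mentioned above can be paraphrased as: compute the net formation flux of every candidate ("edge") species and promote it into the core model only when that flux exceeds a tolerance times a characteristic flux of the current core. A hedged sketch of that selection rule is shown below; the species names, flux values, and the root-sum-square choice of characteristic flux are illustrative assumptions and do not reproduce RMG's actual API.

```python
import math

def select_core_species(edge_fluxes, core_fluxes, tolerance=0.1):
    """Rate-based model enlargement: promote edge species whose net formation
    flux exceeds tolerance * characteristic flux of the current core."""
    # Characteristic flux taken here as the root-sum-square of core fluxes.
    r_char = math.sqrt(sum(f * f for f in core_fluxes.values()))
    return [s for s, f in edge_fluxes.items() if abs(f) > tolerance * r_char]

# Illustrative fluxes in mol/(m^3 s) -- not taken from an RMG run.
core = {"CH4": 2.0e-3, "OH": 1.5e-3}
edge = {"CH3": 6.0e-4, "CH2O": 1.0e-5, "HO2": 3.0e-4}
print(select_core_species(edge, core, tolerance=0.1))   # ['CH3', 'HO2']
```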
Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing
NASA Astrophysics Data System (ADS)
Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko
2007-10-01
The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over prescribed orography (elliptical hill, mountain ridge, system of successive ridges) in a rectangular domain, with emphasis on numerical accuracy and the capability to represent non-hydrostatic effects. Comparison demonstrates good (for strong primary wave generation) to satisfactory (for weak secondary wave reproduction in some cases) consistency of the numerical modelling results with known stationary linear test solutions. The numerical stability of the developed model is investigated with respect to the choice of reference state by modelling the dynamics of a stationary front. The horizontal area-mean reference temperature proves to be the optimal choice for stability. The numerical scheme with an explicit residual in the vertical forcing term becomes unstable for cross-frontal temperature differences exceeding 30 K. Stability is restored if the vertical forcing is treated implicitly, which enables the use of time steps comparable with those of the hydrostatic SISL.
Can mesenchymal cells undergo collective cell migration?
Theveneau, Eric
2011-01-01
Cell migration is critical for proper development of the embryo and is also used by many cell types to perform their physiological function. For instance, cell migration is essential for immune cells to monitor the body and for epithelial cells to heal a wound whereas, in cancer cells, acquisition of migratory capabilities is a critical step toward malignancy. Migratory cells are often categorized into two groups: (1) mesenchymal cells, produced by an epithelium-to-mesenchyme transition, that undergo solitary migration and (2) epithelial-like cells which migrate collectively. However, on some occasions, mesenchymal cells may travel in large, dense groups and exhibit key features of collectively migrating cells such as coordination and cooperation. Here, using data published on neural crest cells, a highly invasive mesenchymal cell population that migrates extensively throughout the embryo, we explore the idea that mesenchymal cells, including cancer cells, might be able to undergo collective cell migration under certain conditions and discuss how they could do so. PMID:22274714
NASA Astrophysics Data System (ADS)
Harvey, James E.
2012-10-01
Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.
Ribeiro, Luís Mata; Guerra, Ana Silva
2018-01-31
Hidradenitis suppurativa is a chronic inflammatory disease with great physical and psychological impact. Although conservative treatments may be effective in mild forms of the disease, extensive surgical resection and reconstruction are necessary in more severe forms. The purpose of this paper is to describe our two-stage reconstructive procedure for this disease. We present a clinical case of a patient with severe, bilateral axillary hidradenitis. In the first surgical step we excised the lesions and applied an artificial dermis secured with negative pressure wound therapy. In the second step we used a split-thickness skin graft to close the wound and again applied negative pressure wound therapy. Graft take was very good, without complications. The cosmetic outcome is acceptable and shoulder mobility was not compromised. No recurrence was detected at nine months of follow-up.
Model-based reconfiguration: Diagnosis and recovery
NASA Technical Reports Server (NTRS)
Crow, Judy; Rushby, John
1994-01-01
We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
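The diagnosis-to-reconfiguration analogy can be made concrete with a brute-force consistency-based search: a minimal diagnosis is a smallest set of components assumed abnormal ('ab') that restores consistency with the observations, and a reconfiguration is the analogous minimal set marked 'rcfg' that restores the desired goal. The sketch below is a toy illustration of that search, not the paper's formalization or any particular diagnosis engine; the three-component system model is invented for the example.

```python
from itertools import combinations

def minimal_diagnoses(components, consistent, max_size=None):
    """Return minimal sets S such that assuming every c in S abnormal (for
    diagnosis) or reconfigured (for recovery) makes the model consistent."""
    max_size = max_size if max_size is not None else len(components)
    found = []
    for k in range(max_size + 1):
        for cand in combinations(components, k):
            s = set(cand)
            if any(prev <= s for prev in found):
                continue                      # a subset already works: not minimal
            if consistent(s):
                found.append(s)
    return found

# Toy model: the observation is inconsistent unless c1, or both c2 and c3,
# are treated as abnormal / reconfigured.
def consistent(marked):
    return "c1" in marked or {"c2", "c3"} <= marked

print(minimal_diagnoses(["c1", "c2", "c3"], consistent))   # [{'c1'}, {'c2', 'c3'}]
```

Swapping the predicate from 'ab' to 'rcfg' changes only the interpretation of `consistent`, which is exactly the reuse of diagnosis machinery for reconfiguration that the abstract argues for.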
Consistency functional map propagation for repetitive patterns
NASA Astrophysics Data System (ADS)
Wang, Hao
2017-09-01
Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding segmentation cue with a functional map model. However, this model cannot tackle the repetitive patterns directly due to the large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. This propagation model is acquired in two steps. The first one aligns the patterns from a local region, transferring segmentation functions among patterns. It can be cast as an L norm optimization problem. The latter step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses have demonstrated an encouraging performance of the proposed algorithm in detection and segmentation of repetitive patterns.
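At its core, a functional map is a small matrix C that carries coefficients of functions (e.g., segmentation indicators) expressed in one pattern's reduced basis to another's, and fitting it from pairs of corresponding descriptor functions reduces to a least-squares problem. The sketch below illustrates that core step on synthetic data; the random bases, descriptors, and dimensions are stand-ins and do not reproduce the paper's CFMP pipeline or its propagation/merging stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of A are source-side coefficient vectors of corresponding functions,
# columns of B the matching target-side vectors (synthetic data).
k, m = 20, 40                          # basis size, number of corresponding functions
C_true = rng.standard_normal((k, k))
A = rng.standard_normal((k, m))
B = C_true @ A + 0.01 * rng.standard_normal((k, m))

# Least-squares functional map: C = argmin ||C A - B||_F^2
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T

# Transfer a segmentation function (its coefficients in the source basis).
f_src = rng.standard_normal(k)
f_transferred = C @ f_src
print(np.linalg.norm(C - C_true) / np.linalg.norm(C_true))   # small relative error
```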
The energy landscape of adenylate kinase during catalysis
Kerns, S. Jordan; Agafonov, Roman V.; Cho, Young-Jin; Pontiggia, Francesco; Otten, Renee; Pachov, Dimitar V.; Kutter, Steffen; Phung, Lien A.; Murphy, Padraig N.; Thai, Vu; Alber, Tom; Hagan, Michael F.; Kern, Dorothee
2014-01-01
Kinases perform phosphoryl-transfer reactions in milliseconds; without enzymes, these reactions would take about 8000 years under physiological conditions. Despite extensive studies, a comprehensive understanding of kinase energy landscapes, including both chemical and conformational steps, is lacking. Here we scrutinize the microscopic steps in the catalytic cycle of adenylate kinase, through a combination of NMR measurements during catalysis, pre-steady-state kinetics, MD simulations, and crystallography of active complexes. We find that the Mg²⁺ cofactor activates two distinct molecular events, phosphoryl transfer (>10⁵-fold) and lid-opening (10³-fold). In contrast, mutation of an essential active-site arginine decelerates phosphoryl transfer 10³-fold without substantially affecting lid-opening. Our results highlight the importance of the entire energy landscape in catalysis and suggest that adenylate kinases have evolved to activate key processes simultaneously by precise placement of a single, charged and very abundant cofactor in a pre-organized active site. PMID:25580578
A one-dimensional nonlinear problem of thermoelasticity in extended thermodynamics
NASA Astrophysics Data System (ADS)
Rawy, E. K.
2018-06-01
We solve a nonlinear, one-dimensional initial boundary-value problem of thermoelasticity in generalized thermodynamics. A Cattaneo-type evolution equation for the heat flux is used, which differs from the one used extensively in the literature. The hyperbolic nature of the associated linear system is clarified through a study of the characteristic curves. Progressive wave solutions with two finite speeds are noted. A numerical treatment is presented for the nonlinear system using a three-step, quasi-linearization, iterative finite-difference scheme for which the linear system of equations is the initial step in the iteration. The obtained results are discussed in detail. They clearly show the hyperbolic nature of the system, and may be of interest in investigating thermoelastic materials, not only at low temperatures, but also during high temperature processes involving rapid changes in temperature as in laser treatment of surfaces.
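For orientation, the commonly cited 1D Cattaneo law couples a relaxing heat flux to the energy balance and, on eliminating the flux, yields a damped (telegraph) wave equation with a finite propagation speed, which is the source of the hyperbolic character discussed above. The display below shows that standard form only as a reference point; the paper states that its evolution equation differs from this widely used variant, so the equations here are illustrative, not the paper's system.

```latex
% Illustrative: standard Cattaneo law (the paper uses a variant of this form),
% with relaxation time \tau_0, conductivity k, density \rho, specific heat c.
\[
\tau_0\,\partial_t q + q = -k\,\partial_x T,
\qquad
\rho c\,\partial_t T + \partial_x q = 0
\;\;\Longrightarrow\;\;
\tau_0\,\partial_{tt} T + \partial_t T = \frac{k}{\rho c}\,\partial_{xx} T ,
\]
```

giving a finite thermal signal speed $\sqrt{k/(\rho c\,\tau_0)}$ rather than the infinite speed implied by Fourier's law.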
Axial Flow Conditioning Device for Mitigating Instabilities
NASA Technical Reports Server (NTRS)
Ahuja, Vineet (Inventor); Birkbeck, Roger M. (Inventor); Hosangadi, Ashvin (Inventor)
2017-01-01
A flow conditioning device for incrementally stepping down pressure within a piping system is presented. The invention includes an outer annular housing, a center element, and at least one intermediate annular element. The outer annular housing includes an inlet end attachable to an inlet pipe and an outlet end attachable to an outlet pipe. The outer annular housing and the intermediate annular element(s) are concentrically disposed about the center element. The intermediate annular element(s) separates an axial flow within the outer annular housing into at least two axial flow paths. Each axial flow path includes at least two annular extensions that alternately and locally direct the axial flow radially outward and inward or radially inward and outward thereby inducing a pressure loss or a pressure gradient within the axial flow. The pressure within the axial flow paths is lower than the pressure at the inlet end and greater than the vapor pressure for the axial flow. The invention minimizes fluidic instabilities, pressure pulses, vortex formation and shedding, and/or cavitation during pressure step down to yield a stabilized flow within a piping system.
Challenges in Liquid-Phase Exfoliation, Processing, and Assembly of Pristine Graphene.
Parviz, Dorsa; Irin, Fahmida; Shah, Smit A; Das, Sriya; Sweeney, Charles B; Green, Micah J
2016-10-01
Recent developments in the exfoliation, dispersion, and processing of pristine graphene (i.e., non-oxidized graphene) are described. General metrics are outlined that can be used to assess the quality and processability of various "graphene" products, as well as metrics that determine the potential for industrial scale-up. The pristine graphene production process is categorized from a chemical engineering point of view into three key steps: i) pretreatment, ii) exfoliation, and iii) separation. How pristine graphene colloidal stability is distinct from the exfoliation step and is dependent upon graphene interactions with solvents and dispersants is extensively reviewed. Finally, the challenges and opportunities of using pristine graphene as a nanofiller in polymer composites, as well as a building block for macrostructure assemblies, are summarized in the context of large-scale production. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Toxin detection using a fiber-optic-based biosensor
NASA Astrophysics Data System (ADS)
Ogert, Robert A.; Shriver-Lake, Lisa C.; Ligler, Frances S.
1993-05-01
Using an evanescent-wave fiber-optic-based biosensor developed at the Naval Research Laboratory, ricin toxin can be detected in the low ng/ml range. Sensitivity was established at 1 - 5 ng/ml using a two-step assay. The two-step assay showed enhanced signal levels in comparison to a one-step assay. The two-step assay uses a 10-minute incubation of a fiber-optic probe bearing immobilized, affinity-purified anti-ricin antibody in the ricin sample before placement in a solution of fluorophore-labeled goat anti-ricin antibodies. The specific fluorescent signal is obtained by the binding of the fluorophore-labeled antibodies to ricin, which is bound by the immobilized antibodies on the fiber-optic probe. The toxin can be detected directly from urine and river water using this fiber-optic assay.
Development of a Novel Approach for Fatigue Life Prediction of Structural Materials
2008-12-01
applied when the crack length was 8.45 mm and 14.96 mm, respectively, on these two specimens. A third specimen was subjected to a constant amplitude... The crack growth rate at the middle point (the third point) was determined from the derivative of the parabola. The stress intensity factor for... minimum load was identical in the two loading steps (Fig. 32(b)). The third specimen experienced two-step loading with identical R-ratio in the two
ERIC Educational Resources Information Center
Martin, Linda E.; Shafer, Tracy; Kragler, Sherry
2009-01-01
There is no denying that combining two schools, or even opening a new school, is loaded with challenges and frustrations as well as high expectations. Principal Tracy Shafer saw a rural school consolidation as an opportunity to use professional development to create a community focused on student learning, meeting the need for high-quality…
[Community health in primary health care teams: a management objective].
Nebot Adell, Carme; Pasarin Rua, Maribel; Canela Soler, Jaume; Sala Alvarez, Clara; Escosa Farga, Alex
2016-12-01
To describe the process of development of community health in a territory where the Primary Health Care board decided to include it in its roadmap as a strategic line. Evaluative research using qualitative techniques, including SWOT analysis on community health. Two-step study. Primary care teams (PCT) of the Catalan Health Institute in Barcelona city. The 24 PCTs belonging to the Muntanya-Dreta Primary Care Service in Barcelona city, with 904 professionals serving 557,430 inhabitants. Application of qualitative methodology using SWOT analysis in two steps (two-step study). Step 1: Setting up a core group consisting of local PCT professionals; collecting the community projects across the territory; SWOT analysis. Step 2: From the needs identified in the previous phase, a plan was developed, including a set of training activities in community health: basic, advanced, and a workshop to exchange experiences from the PCTs. A total of 80 team professionals received specific training in the 4 workshops held, one of them at an advanced level. Two workshops were held to exchange experiences, with 165 representatives from the local teams and 22 PCTs presenting their practices. In 2013, 6 of the 24 PCTs had had a community diagnosis performed. Community health has achieved a good level of development in some areas, but this is not the general situation in the health care system. Its progression depends on the management support available, the local community dynamics, and the scope of Primary Health Care. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
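The first step described here is a standard linear Kalman filter tracking a slowly varying parameter vector from noisy measurements. The sketch below shows that structure only; the dimensions, measurement matrix, and noise levels are illustrative assumptions and are not the F100/CEM matrices used in the study.

```python
import numpy as np

def kalman_update(x, P, z, H, R, Q):
    """One predict/update cycle for a random-walk parameter model x_k = x_{k-1} + w_k."""
    P = P + Q                                    # predict: parameters drift slowly
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # update with measurement z = H x + v
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
n_param = 5                                      # e.g., five deterioration parameters
true_x = rng.normal(size=n_param)
H = rng.normal(size=(n_param, n_param))          # illustrative measurement sensitivities
x, P = np.zeros(n_param), np.eye(n_param)
Q, R = 1e-6 * np.eye(n_param), 1e-2 * np.eye(n_param)
for _ in range(200):
    z = H @ true_x + 0.1 * rng.normal(size=n_param)
    x, P = kalman_update(x, P, z, H, R, Q)
print(np.round(x - true_x, 2))                   # estimation error shrinks toward zero
```

The second step of the algorithm would then augment the steady-state engine model's control vector with the estimates produced by this filter.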
Gasse, Angela; Pfeiffer, Heidi; Köhler, Helga; Schürenkamp, Jennifer
2016-07-01
The aim of this work was to develop and validate a solid-phase extraction (SPE) method for the analysis of cannabinoids, with emphasis on a very extensive and effective matrix reduction in order to ensure consistently good selectivity and sensitivity regardless of the measuring technology applied. This was achieved by using an anion exchange sorbent (AXS) and the targeted ionic interaction between matrix components and this sorbent material. In a first step, the neutral cannabinoids ∆9-tetrahydrocannabinol (THC) and 11-hydroxy-∆9-tetrahydrocannabinol (11-OH-THC) were eluted, leaving 11-nor-9-carboxy-∆9-tetrahydrocannabinol (THC-COOH) and the main interfering matrix components bound to the AXS. In a second step, exploiting differences in pH and polarity, it was possible to separate matrix components and THC-COOH, thereby yielding a clean elution of THC-COOH into the same collecting tube as THC and 11-OH-THC. Even when using a simple measuring technology such as gas chromatography with single-quadrupole mass spectrometry, this two-step elution allows for a marked decrease in the number and intensity of matrix interferences in the chromatogram. Hence, in both plasma and serum, the AXS extracts resulted in very good selectivity. Limits of detection and limits of quantification were below 0.25 and 0.35 ng/mL for the neutral cannabinoids in both matrices, and 2.0 and 3.0 ng/mL in plasma and 1.6 and 3.3 ng/mL in serum for THC-COOH. The recoveries were ≥79.8 % for all analytes. Interday and intraday imprecisions ranged from 0.8 to 6.1 % relative standard deviation, and accuracy bias ranged from -12.6 to 3.6 %.
Sausedo, R A; Schoenwolf, G C
1993-09-01
Formation and extension of the notochord is one of the earliest and most obvious events of axis development in vertebrate embryos. In birds, prospective notochord cells arise from Hensen's node and come to lie beneath the midline of the neural plate, where they assist in the process of neurulation and initiate the dorsoventral patterning of the neural tube through sequential inductive interactions. In the present study, we examined notochord development in avian embryos with quantitative and immunological procedures. Extension of the notochord occurs principally through accretion, that is, the addition of cells to its caudal end, a process that involves considerable cell rearrangement at the notochord-Hensen's node interface. In addition, cell division and cell rearrangement within the notochord proper contribute to notochord extension. Thus, extension of the notochord occurs in a manner that is significantly different from that of the adjacent, overlying, midline region of the neural plate (i.e., the median hinge-point region or future floor plate of the neural tube), which, as shown in a previous study from our laboratory (Schoenwolf and Alvarez: Development 106:427-439, 1989), extends caudally as its cells undergo two rounds of mediolateral cell-cell intercalation and two to three rounds of cell division.
Experimental study on the stability and failure of individual step-pool
NASA Astrophysics Data System (ADS)
Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin
2018-06-01
Step-pools are one of the most common bedforms in mountain streams, and their stability and failure play a significant role in riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool reached its maximum depth and either exhibited relative stability for a period before step failure, which was called the stable phase, or collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. The duration of the stable phase varied by one order of magnitude, whereas the variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders of magnitude smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second captures step-pool failure conditions and corresponds to the discharge of an exceptional event. In the transitional stage between the two discharge regimes, the magnitudes of pool and step adjustment displayed relatively large variability, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation, rather than pool scour (which remained relatively stable during step deformations in our experiments), as the direct explanation for step-pool failure.
NASA Technical Reports Server (NTRS)
Jovic, Srba; Kutler, Paul F. (Technical Monitor)
1994-01-01
Experimental results for a two-dimensional separated turbulent boundary layer behind a backward-facing step are reported for five different Reynolds numbers. Results are presented in the form of tables, graphs and a floppy disk for easy access to the data. The Reynolds number based on the step height was varied by changing the reference velocity upstream of the step, U_o, and the step height, h. Hot-wire measurement techniques were used to measure three Reynolds stresses and four triple-velocity correlations. In addition, surface pressure and skin friction coefficients were measured. All hot-wire measurements were acquired in a measuring domain which excluded the recirculating flow region, due to the directional insensitivity of hot-wires. The downstream extent of the domain from the step was 51h for the largest and 114h for the smallest step height. This significant downstream length permitted extensive study of the flow recovery. Prediction of perturbed flows and their recovery is a particularly attractive test case for popular turbulence models, since variations of turbulence length and time scales and flow interactions in different regions are generally inadequately predicted. The data indicate that the flow in the free shear layer region behaves like a plane mixing layer up to about 2/3 of the mean reattachment length, when the flow interaction with the wall commences the flow recovery toward an ordinary turbulent boundary layer structure. These changes of the flow do not occur abruptly with the change of boundary conditions. The reattachment region represents a transitional region where the flow undergoes the most dramatic adjustments to the new boundary conditions. Large eddies, created in the upstream free-shear-layer region, are torn apart, recirculated, and re-entrained into the main stream, interacting with the incoming flow structure. It is therefore quite difficult to describe the physics of this region in a rational and quantitative manner other than statistically. Downstream of the reattachment point the flow recovers at different rates near the wall, in the newly developing internal boundary layer, and in the outer part of the flow. It appears that the Reynolds stresses do not fully recover even up to the longest recovery length of 114h.
Bugarel, M; Tudor, A; Loneragan, G H; Nightingale, K K
2017-03-01
Foodborne illnesses due to Salmonella represent an important public-health concern worldwide. In the United States, a majority of Salmonella infections are associated with a small number of serotypes. Furthermore, some serotypes that are overrepresented among human disease are also associated with multi-drug resistance phenotypes. Rapid detection of serotypes of public-health concern might help reduce the burden of salmonellosis cases and limit exposure to multi-drug resistant Salmonella. We developed a two-step real-time PCR-based rapid method for the identification and detection of five Salmonella serotypes that are either overrepresented in human disease or frequently associated with multi-drug resistance, including serotypes Enteritidis, Typhimurium, Newport, Hadar, and Heidelberg. Two sets of four markers were developed to detect and differentiate the five serotypes. The first set of markers was developed as a screening step to detect the five serotypes, whereas the second set was used to further distinguish serotypes Heidelberg, Newport and Hadar. The use of these markers in a two-step investigation strategy provides a diagnostic specificity of 97% for the detection of Typhimurium, Enteritidis, Heidelberg, Infantis, Newport and Hadar. The diagnostic sensitivity of the detection markers is >96%. The availability of this two-step rapid method will facilitate specific detection of Salmonella serotypes that contribute to a significant proportion of human disease and carry antimicrobial resistance. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Duperret, Anne; Vandycke, Sara; Colbeaux, Jean-Pierre; Raimbault, Celine; Duguet, Timothée; Van vliet-lanoe, Brigitte
2017-04-01
Chalky hillslopes observed in the Picardy region (NW Paris basin, France) display distinctive surficial ridges and steps, several meters high and several tens of meters long, oriented roughly parallel to the slopes of some dry valleys. They are locally named "rideaux" or strip-lynchets. Their origin is still debated among the geology, geography, archeology and pedology communities. Detailed observations of the Picardy coastal chalk cliffs using high-resolution, low-altitude aerial LiDAR and fieldwork allow us to precisely describe and understand how the ridges and steps form. At Bois de Cise, a rectangular depression with ridges and steps was observed in 3D on the ground, thanks to its natural exposure in the cliff face. This structure proves to be a graben, controlled by conjugate normal faults, at the top of which the ridges and steps are developed. The whole forms a "step-graben" composed of a system of relay faults and ramps affecting the superficial Quaternary loess cover. Step formation is discussed in relation to the tectonic context (paleo-stress fields), continental water circulation within the karst, the presence of break-up structures on the fault planes, the role of cryogenic processes during the last glacial epochs, and the remobilization of surficial loess deposits. Caves and temporary freshwater springs along faults evidence karstic behavior in the chalk and suggest that step-graben structures act as geological guides for hydrogeological circulation in the chalk of Picardy. In this context, these surficial chalk step structures appear to be tectonically controlled and to record recent active tectonics in the NW European chalk basin. In addition, the field of steps developed on a fossil coastal cliff points to a fracture system formed under a NW-SE extensional paleo-stress field. Data from the CROCOLIT-Leg1 (Duperret, 2013) campaign carried out on the offshore subtidal platform (shallow bathymetry, very-high-resolution Chirp seismic) help to better define the morphology and depth of penetration of this type of fault in the chalk, and to address whether pre-existing fractures guide the orientation of the Picardy coastline at the kilometre scale. DUPERRET Anne (2013) CROCOLIT_LEG1 cruise, RV Haliotis, http://dx.doi.org/10.17600/13120080
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takashiri, Masayuki, E-mail: takashiri@tokai-u.jp; Kurita, Kensuke; Hagino, Harutoshi
2015-08-14
A two-step method that combines homogeneous electron beam (EB) irradiation and thermal annealing has been developed to enhance the thermoelectric properties of nanocrystalline bismuth selenium telluride thin films. The thin films, prepared using a flash evaporation method, were treated with EB irradiation in a N₂ atmosphere at room temperature and an acceleration voltage of 0.17 MeV. Thermal annealing was performed under Ar/H₂ (5%) at 300 °C for 60 min. X-ray diffraction was used to determine that compositional phase separation between bismuth telluride and bismuth selenium telluride developed in the thin films exposed to higher EB doses and thermal annealing. We propose that the phase separation was induced by fluctuations in the distribution of selenium atoms after EB irradiation, followed by the migration of selenium atoms to more stable sites during thermal annealing. As a result, thin film crystallinity improved and mobility was significantly enhanced. This indicates that the phase separation resulting from the two-step method enhanced, rather than disturbed, the electron transport. Both the electrical conductivity and the Seebeck coefficient were improved following the two-step method. Consequently, the power factor of thin films that underwent the two-step method was enhanced to 20 times that of the thin films treated with EB irradiation alone (from 0.96 to 21.0 μW/(cm K²)).
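The power factor quoted above is defined as PF = S²σ, the Seebeck coefficient squared times the electrical conductivity. A quick unit check with assumed illustrative values (not the measured film properties) shows how a PF of the reported order arises:

```python
def power_factor(seebeck_V_per_K, conductivity_S_per_cm):
    """Thermoelectric power factor PF = S^2 * sigma, returned in microW/(cm K^2)."""
    pf_W_per_cm_K2 = seebeck_V_per_K ** 2 * conductivity_S_per_cm
    return pf_W_per_cm_K2 * 1e6

# Assumed illustrative values: S = -150 microV/K, sigma = 900 S/cm
print(round(power_factor(-150e-6, 900.0), 1), "microW/(cm K^2)")   # ~20 microW/(cm K^2)
```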
Structural Controls of the Emerson Pass Geothermal System, Washoe County, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B; Faulds, James E
We have conducted a detailed geologic study to better characterize a blind geothermal system in Emerson Pass on the Pyramid Lake Paiute Tribe Reservation, western Nevada. A thermal anomaly was discovered in Emerson Pass by use of 2 m temperature surveys deployed within a structurally favorable setting and proximal to surface features indicative of geothermal activity. The anomaly lies at the western edge of a broad left step at the northeast end of Pyramid Lake between the north- to north-northeast-striking, west-dipping, Fox and Lake Range normal faults. The 2-m temperature surveys have defined a N-S elongate thermal anomaly that has a maximum recorded temperature of ~60°C and resides on a north- to north-northeast-striking fault. Travertine mounds, chalcedonic silica veins, and silica-cemented Pleistocene lacustrine gravels in Emerson Pass indicate a robust geothermal system active at the surface in the recent past. Structural complexity and spatial heterogeneities of the strain and stress field have developed in the step-over region, but kinematic data suggest a WNW-trending (~280° azimuth) extension direction. The geothermal system is likely hosted in Emerson Pass as a result of enhanced permeability generated by the intersection of two oppositely dipping, southward-terminating, north- to north-northwest-striking (Fox Range fault) and north-northeast-striking faults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binotti, M.; Zhu, G.; Gray, A.
An analytical approach, extending a newly developed method -- First-principle OPTical Intercept Calculation (FirstOPTIC) -- is proposed to treat the geometrical impact of three-dimensional (3-D) effects on parabolic trough optical performance. The mathematical steps of this analytical approach are presented and implemented numerically as part of the suite of FirstOPTIC code. In addition, the new code has been carefully validated against ray-tracing simulation results and available numerical solutions. This new analytical approach to treating 3-D effects will facilitate further understanding and analysis of the optical performance of trough collectors as a function of incidence angle.
A methodology to migrate the gene ontology to a description logic environment using DAML+OIL.
Wroe, C J; Stevens, R; Goble, C A; Ashburner, M
2003-01-01
The Gene Ontology Next Generation Project (GONG) is developing a staged methodology to evolve the current representation of the Gene Ontology into DAML+OIL in order to take advantage of the richer formal expressiveness and the reasoning capabilities of the underlying description logic. Each stage provides a stepwise increase in formal, explicit semantic content with a view to supporting validation, extension and multiple classification of the Gene Ontology. The paper introduces DAML+OIL and demonstrates the activity within each stage of the methodology and the functionality gained.
The Galics Project: Virtual Galaxy: from Cosmological N-body Simulations
NASA Astrophysics Data System (ADS)
Guiderdoni, B.
The GalICS project develops extensive semi-analytic post-processing of large cosmological simulations to describe hierarchical galaxy formation. The multiwavelength statistical properties of high-redshift and local galaxies are predicted within the large-scale structures. The fake catalogs and mock images that are generated from the outputs are used for the analysis and preparation of deep surveys. The whole set of results is now available in an on-line database that can be easily queried. The GalICS project represents a first step towards a 'Virtual Observatory of virtual galaxies'.
A new bead-spring model for simulation of semi-flexible macromolecules
NASA Astrophysics Data System (ADS)
Saadat, Amir; Khomami, Bamin
2016-11-01
A bead-spring model for semi-flexible macromolecules is developed to overcome the deficiencies of the current coarse-grained bead-spring models. Specifically, model improvements are achieved through incorporation of a bending potential. The new model is designed to accurately describe the correlation along the backbone of the chain, segmental length, and force-extension behavior of the macromolecule even at the limit of 1 Kuhn step per spring. The relaxation time of different Rouse modes is used to demonstrate the capabilities of the new model in predicting chain dynamics.
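One common way to incorporate a bending potential in a bead-spring chain is to penalize the angle between successive bond vectors, E_bend = k_b Σ_i (1 - cos θ_i). The sketch below evaluates that energy for a chain of bead positions; the stiffness value and coordinates are illustrative and are not the parameterization used in the model described above.

```python
import numpy as np

def bending_energy(positions, k_bend=2.0):
    """Bending energy of a bead-spring chain: k_bend * sum_i (1 - cos(theta_i)),
    where theta_i is the angle between consecutive bond vectors."""
    bonds = np.diff(positions, axis=0)
    u = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)   # unit bond vectors
    cos_theta = np.einsum("ij,ij->i", u[:-1], u[1:])
    return k_bend * np.sum(1.0 - cos_theta)

# A straight chain has zero bending energy; a kinked one does not.
straight = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
kinked = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [2, 1, 0]], float)
print(bending_energy(straight), bending_energy(kinked))   # 0.0 4.0
```

Tuning k_bend against the Kuhn length is what lets such a model reproduce backbone correlations and force-extension behavior down to one Kuhn step per spring.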
ERIC Educational Resources Information Center
Borsari, Brian; Hustad, John T. P.; Mastroleo, Nadine R.; Tevyaw, Tracy O'Leary; Barnett, Nancy P.; Kahler, Christopher W.; Short, Erica Eaton; Monti, Peter M.
2012-01-01
Objective: Over the past 2 decades, colleges and universities have seen a large increase in the number of students referred to the administration for alcohol policy violations. However, a substantial portion of mandated students may not require extensive treatment. Stepped care may maximize treatment efficiency and greatly reduce the demands on…
Standard specification for light ladders
NASA Astrophysics Data System (ADS)
1980-11-01
Requirements are given for the construction, inspection, and testing of single ladders, extension ladders, step-ladders, extending step-ladders, trestle ladders, extending trestle ladders, and special purpose ladders. The composition of the aluminum alloys, and of the steels and glass fiber reinforced polyester components coming into contact with the aluminum are included. Coatings used to provide corrosion resistance are also specified.
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
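A standard instance of the phase-stepping idea is the four-step algorithm: four interferograms captured at phase shifts of 0, π/2, π and 3π/2 give the wrapped phase through an arctangent. The sketch below demonstrates that recovery on a synthetic fringe pattern; the synthetic intensities are a stand-in for measured interferograms, and the carrier-fringe supplement described in the paper is not reproduced here.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four interferograms with pi/2 phase steps:
    phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic test: a known phase ramp with bias 100 and modulation 50.
phi_true = np.linspace(-3.0, 3.0, 256)           # ground truth, inside (-pi, pi)
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
print(np.max(np.abs(phi - phi_true)))            # ~1e-15, i.e. exact recovery
```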
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic-plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non-ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step-by-step procedure also is provided to identify the relevant acceptance standard or procedure on a case-by-case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part-through flaw depth is defined as a function of load and flaw length. For non-ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
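For the LEFM branch (non-ductile crack extension), an acceptance check ultimately compares an applied stress-intensity factor against a toughness reduced by a margin. The sketch below uses the textbook surface-crack form K = Y σ √(πa) with illustrative numbers and a generic margin; it is not the Section XI procedure or its tabulated allowables.

```python
import math

def stress_intensity(stress_MPa, crack_depth_m, Y=1.12):
    """Mode-I stress intensity factor K = Y * sigma * sqrt(pi * a), in MPa*sqrt(m)."""
    return Y * stress_MPa * math.sqrt(math.pi * crack_depth_m)

def flaw_acceptable(stress_MPa, crack_depth_m, K_Ic=150.0, margin=3.0):
    """Accept the flaw if the applied K stays below the toughness reduced by a margin
    (illustrative margin; the code margins quoted above are applied to load)."""
    return stress_intensity(stress_MPa, crack_depth_m) <= K_Ic / margin

# Illustrative: 120 MPa membrane stress, 5 mm deep flaw.
print(stress_intensity(120.0, 0.005), flaw_acceptable(120.0, 0.005))
```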
Partial Return Yoke for MICE Step IV and Final Step
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Holger; Plate, Stephen; Berg, J.Scott
2015-06-01
This paper reports on the progress of the design and construction of a retro-fitted return yoke for the international Muon Ionization Cooling Experiment (MICE). MICE is a proof-of-principle experiment aiming to demonstrate ionization cooling experimentally. In earlier studies we outlined how a partial return yoke can be used to mitigate stray magnetic field in the experimental hall; we report on the progress of the construction of the partial return yoke for MICE Step IV. We also discuss an extension of the Partial Return Yoke for the final step of MICE; we show simulation results of the expected performance.
Lewis, Leslie A; Astatke, Mekbib; Umekubo, Peter T; Alvi, Shaheen; Saby, Robert; Afrose, Jehan; Oliveira, Pedro H; Monteiro, Gabriel A; Prazeres, Duarte Mf
2012-01-26
Transposition in IS3, IS30, IS21 and IS256 insertion sequence (IS) families utilizes an unconventional two-step pathway. A figure-of-eight intermediate in Step I, from asymmetric single-strand cleavage and joining reactions, is converted into a double-stranded minicircle whose junction (the abutted left and right ends) is the substrate for symmetrical transesterification attacks on target DNA in Step II, suggesting intrinsically different synaptic complexes (SC) for each step. Transposases of these ISs bind poorly to cognate DNA and comparative biophysical analyses of SC I and SC II have proven elusive. We have prepared a native, soluble, active, GFP-tagged fusion derivative of the IS2 transposase that creates fully formed complexes with single-end and minicircle junction (MCJ) substrates and used these successfully in hydroxyl radical footprinting experiments. In IS2, Step I reactions are physically and chemically asymmetric; the left imperfect, inverted repeat (IRL), the exclusive recipient end, lacks donor function. In SC I, different protection patterns of the cleavage domains (CDs) of the right imperfect inverted repeat (IRR; extensive in cis) and IRL (selective in trans) at the single active cognate IRR catalytic center (CC) are related to their donor and recipient functions. In SC II, extensive binding of the IRL CD in trans and of the abutted IRR CD in cis at this CC represents the first phase of the complex. An MCJ substrate precleaved at the 3' end of IRR revealed a temporary transition state with the IRL CD disengaged from the protein. We propose that in SC II, sequential 3' cleavages at the bound abutted CDs trigger a conformational change, allowing the IRL CD to complex to its cognate CC, producing the second phase. Corroborating data from enhanced residues and curvature propensity plots suggest that CD to CD interactions in SC I and SC II require IRL to assume a bent structure, to facilitate binding in trans. Different transpososomes are assembled in each step of the IS2 transposition pathway. Recipient versus donor end functions of the IRL CD in SC I and SC II and the conformational change in SC II that produces the phase needed for symmetrical IRL and IRR donor attacks on target DNA highlight the differences.
ERIC Educational Resources Information Center
Lin, Sheau-Wen
2004-01-01
This study involved the development and application of a two-tier diagnostic test measuring students' understanding of flowering plant growth and development. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development.…
Niman, Cassandra S; Zuckermann, Martin J; Balaz, Martina; Tegenfeldt, Jonas O; Curmi, Paul M G; Forde, Nancy R; Linke, Heiner
2014-12-21
Synthetic molecular motors typically take nanometer-scale steps through rectification of thermal motion. Here we propose Inchworm, a DNA-based motor that employs a pronounced power stroke to take micrometer-scale steps on a time scale of seconds, and we design, fabricate, and analyze the nanofluidic device needed to operate the motor. Inchworm is a kbp-long, double-stranded DNA confined inside a nanochannel in a stretched configuration. Motor stepping is achieved through externally controlled changes in salt concentration (changing the DNA's extension), coordinated with ligand-gated binding of the DNA's ends to the functionalized nanochannel surface. Brownian dynamics simulations predict that Inchworm's stall force is determined by its entropic spring constant and is ∼ 0.1 pN. Operation of the motor requires periodic cycling of four different buffers surrounding the DNA inside a nanochannel, while keeping constant the hydrodynamic load force on the DNA. We present a two-layer fluidic device incorporating 100 nm-radius nanochannels that are connected through a few-nm-wide slit to a microfluidic system used for in situ buffer exchanges, either diffusionally (zero flow) or with controlled hydrodynamic flow. Combining experiment with finite-element modeling, we demonstrate the device's key performance features and experimentally establish achievable Inchworm stepping times of the order of seconds or faster.
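The ~0.1 pN stall-force scale quoted above can be rationalized from the entropic (Gaussian-chain) spring constant of the confined DNA, k ≈ 3 k_B T / (N b²) with N Kuhn segments of length b, multiplied by the extension change per step. The numbers below are rough illustrative assumptions for a kbp-scale molecule, not the parameters of the paper's Brownian dynamics simulations.

```python
import math

kB_T = 4.1e-21          # J, thermal energy at room temperature

def entropic_stall_force(contour_length_m, kuhn_length_m, extension_change_m):
    """Gaussian-chain estimate: F = k * dx with spring constant k = 3 kB T / (N b^2)."""
    n_kuhn = contour_length_m / kuhn_length_m
    k_spring = 3.0 * kB_T / (n_kuhn * kuhn_length_m ** 2)
    return k_spring * extension_change_m

# Assumed: ~10 kbp dsDNA (contour ~3.4 um), Kuhn length ~100 nm, ~1 um extension change.
force_N = entropic_stall_force(3.4e-6, 100e-9, 1.0e-6)
print(round(force_N * 1e12, 3), "pN")   # order 0.01-0.1 pN
```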
Takahashi, Ohgi; Kirikoshi, Ryota; Manabe, Noriyoshi
2015-01-01
Succinimide formation from aspartic acid (Asp) residues is a concern in the formulation of protein drugs. Based on density functional theory calculations using Ace-Asp-Nme (Ace = acetyl, Nme = NHMe) as a model compound, we propose the possibility that acetic acid (AA), which is often used in protein drug formulation for mildly acidic buffer solutions, catalyzes the succinimide formation from Asp residues by acting as a proton-transfer mediator. The proposed mechanism comprises two steps: cyclization (intramolecular addition) to form a gem-diol tetrahedral intermediate and dehydration of the intermediate. Both steps are catalyzed by an AA molecule, and the first step was predicted to be rate-determining. The cyclization results from a bond formation between the amide nitrogen on the C-terminal side and the side-chain carboxyl carbon, which is part of an extensive bond reorganization (formation and breaking of single bonds and the interchange of single and double bonds) occurring concertedly in a cyclic structure formed by the amide NH bond, the AA molecule and the side-chain C=O group and involving a double proton transfer. The second step also involves an AA-mediated bond reorganization. Carboxylic acids other than AA are also expected to catalyze the succinimide formation by a similar mechanism. PMID:25588215
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
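For reference, a dead-zone plus uniform threshold scalar quantizer maps a coefficient to an index with a widened zero bin and reconstructs on nearly uniformly spaced levels. The sketch below illustrates that quantizer shape; the rounding offset f and the reconstruction offset are illustrative parameters (H.264-style values such as f = 1/6 are typical for inter coding) and are not the offsets derived in this paper.

```python
import numpy as np

def dz_utsq(x, q_step, f=1.0 / 6.0, recon_offset=0.0):
    """Dead-zone plus uniform threshold scalar quantizer.

    f < 0.5 widens the zero bin (the dead zone); recon_offset shifts the nearly
    uniform reconstruction levels (0 reconstructs at idx * q_step). Both values
    are illustrative.
    """
    idx = np.sign(x) * np.floor(np.abs(x) / q_step + f)
    recon = np.sign(idx) * (np.abs(idx) + recon_offset) * q_step
    return idx.astype(int), recon

x = np.array([-3.7, -0.4, 0.2, 1.1, 5.9])
print(dz_utsq(x, q_step=1.0))   # small coefficients fall into the dead zone
```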
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
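The time-stepping/relaxation structure described here can be sketched on a simpler model problem: each fully implicit backward-Euler step yields a linear system that is solved by Gauss-Seidel sweeps. The code below reduces the problem to a 1D heat equation with a simple finite-difference stencil; the grid, coefficients and "point source" are illustrative, and a real FVE discretization on the axisymmetric domain (and the multigrid V-cycle acceleration) would replace this plain relaxation.

```python
import numpy as np

def backward_euler_step(T_old, s, sweeps=100):
    """One fully implicit step of the 1D heat equation,
    (1 + 2s) T_i - s (T_{i-1} + T_{i+1}) = T_old_i, solved by Gauss-Seidel."""
    T = T_old.copy()
    for _ in range(sweeps):
        for i in range(1, len(T) - 1):          # Dirichlet ends held fixed
            T[i] = (T_old[i] + s * (T[i - 1] + T[i + 1])) / (1.0 + 2.0 * s)
    return T

# Illustrative model problem: unit rod, fixed ends, initial spike in the middle.
nx, alpha, dx, dt = 51, 1.0, 1.0 / 50, 1e-3
s = alpha * dt / dx ** 2
T = np.zeros(nx)
T[nx // 2] = 1.0
for _ in range(20):
    T = backward_euler_step(T, s)
print(round(T.max(), 4))                         # peak decays as heat diffuses
```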
Nerve stress during reverse total shoulder arthroplasty: a cadaveric study.
Lenoir, Hubert; Dagneaux, Louis; Canovas, François; Waitzenegger, Thomas; Pham, Thuy Trang; Chammas, Michel
2017-02-01
Neurologic lesions are relatively common after total shoulder arthroplasty. These injuries are mostly due to traction. We aimed to identify the arm manipulations and steps during reverse total shoulder arthroplasty (RTSA) that affect nerve stress. Stress was measured in 10 shoulders of 5 cadavers by use of a tensiometer on each nerve from the brachial plexus, with the shoulders in different arm positions and during different surgical steps of RTSA. When we studied shoulder position without prostheses, relative to the neutral position, internal rotation increased stress on the radial and axillary nerves and external rotation increased stress on the musculocutaneous, median, and ulnar nerves. Extension was correlated with an increase in stress on all nerves. Abduction was correlated with an increase in stress on the radial nerve. We identified 2 high-risk steps during RTSA: humeral exposure, particularly when the shoulder was in a position of greater extension, and glenoid exposure. The thickness of the polyethylene humeral cups used was associated with increased nerve stress in all nerves except the ulnar nerve. During humeral preparation, the surgeon must be careful to limit shoulder extension. Care must be taken during exposure of the glenoid. Extreme rotation and oversized implants should be avoided to minimize stretch-induced neuropathies. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Baumgardt, Kathrin; Gilet, Laetitia; Figaro, Sabine; Condon, Ciarán
2018-06-05
Ribosomal RNAs are processed from primary transcripts containing 16S, 23S and 5S rRNAs in most bacteria. Maturation generally occurs in a two-step process, consisting of a first crude separation of the major species by RNase III during transcription, followed by precise trimming of 5' and 3' extensions on each species upon accurate completion of subunit assembly. The various endo- and exoribonucleases involved in the final processing reactions are strikingly different in Escherichia coli and Bacillus subtilis, the two best studied representatives of Gram-negative and Gram-positive bacteria, respectively. Here, we show that the one exception to this rule is the protein involved in the maturation of the 3' end of 16S rRNA. Cells depleted for the essential B. subtilis YqfG protein, a homologue of E. coli YbeY, specifically accumulate 16S rRNA precursors bearing 3' extensions. Remarkably, the essential nature of YqfG can be suppressed by deleting the ribosomal RNA degrading enzyme RNase R, i.e. a ΔyqfG Δrnr mutant is viable. Our data suggest that 70S ribosomes containing 30S subunits with 3' extensions of 16S rRNA are functional to a degree, but become substrates for degradation by RNase R and are eliminated.
BioQueue: a novel pipeline framework to accelerate bioinformatics analysis.
Yao, Li; Wang, Heming; Song, Yuanyuan; Sui, Guangchao
2017-10-15
With the rapid development of Next-Generation Sequencing, a large amount of data is now available for bioinformatics research. Meanwhile, many pipeline frameworks exist to analyse these data. However, these tools concentrate mainly on their syntax and design paradigms, and they dispatch jobs based on users' estimates of the resources needed to execute each step of a protocol. As a result, it is difficult for these tools to maximize the use of computing resources and to avoid errors caused by overload, such as memory overflow. Here, we have developed BioQueue, a web-based framework that runs a checkpoint before each step to automatically estimate the system resources (CPU, memory and disk) needed by the step and then dispatches jobs accordingly. BioQueue uses a shell-command-like syntax instead of implementing a new scripting language, which means most biologists without a computer programming background can use the efficient queue system with ease. BioQueue is freely available at https://github.com/liyao001/BioQueue. The extensive documentation can be found at http://bioqueue.readthedocs.io. li_yao@outlook.com or gcsui@nefu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
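The snippet below is not BioQueue code; it is a minimal sketch of the underlying idea of a pre-step resource checkpoint, in which a step is dispatched only when its estimated CPU and memory fit the currently free budget. All class and step names are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Step:
    priority: int
    name: str = field(compare=False)
    est_cpu: int = field(compare=False)       # estimated CPU cores
    est_mem_gb: float = field(compare=False)  # estimated peak memory

class ResourceAwareQueue:
    """Toy dispatcher: run a step only when its estimated CPU/memory fit
    into the currently free budget, mimicking a pre-step resource checkpoint."""
    def __init__(self, total_cpu, total_mem_gb):
        self.free_cpu = total_cpu
        self.free_mem = total_mem_gb
        self.pending = []

    def submit(self, step):
        heapq.heappush(self.pending, step)

    def dispatch_ready(self):
        ready, still_waiting = [], []
        while self.pending:
            step = heapq.heappop(self.pending)
            if step.est_cpu <= self.free_cpu and step.est_mem_gb <= self.free_mem:
                self.free_cpu -= step.est_cpu
                self.free_mem -= step.est_mem_gb
                ready.append(step)
            else:
                still_waiting.append(step)
        for s in still_waiting:
            heapq.heappush(self.pending, s)
        return ready

if __name__ == "__main__":
    q = ResourceAwareQueue(total_cpu=8, total_mem_gb=32)
    q.submit(Step(1, "align_reads", est_cpu=4, est_mem_gb=16))
    q.submit(Step(2, "call_peaks", est_cpu=6, est_mem_gb=24))
    print([s.name for s in q.dispatch_ready()])  # only steps that fit run now
```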
Lopes, Rita; Videira, Nuno
2016-08-01
This paper presents an innovative approach for conducting collaborative scoping processes aiming to elicit multiple values of ecosystem services. The proposed methodology rests on three steps combining different participatory tools that promote a comprehensive examination of the perceptions held by relevant stakeholder groups. The first step consists of an institutional and stakeholder analysis developed in the study area. The second includes a participatory workshop, where a sequence of scoping exercises is conducted with the active collaboration of the invited stakeholders. The final step aims to validate scoping results and develop dependency networks between organizations and the identified ecosystem services. The approach was tested in the Arrábida Natural Park, a marine and coastal protected area in Portugal. Invited participants were able to identify an extensive list of ecosystem services in the natural area, establish linkages between those services and human wellbeing, identify drivers of change and perform a preliminary screening of the associated ecological, social, and economic values. The case study evaluation provided positive feedback on the usefulness of the approach, which advances the existing set of methods for participatory identification of ecosystem services and sets the scene for involvement of stakeholder groups in assessment and management processes.
Genome-wide RNAi Screening to Identify Host Factors That Modulate Oncolytic Virus Therapy.
Allan, Kristina J; Mahoney, Douglas J; Baird, Stephen D; Lefebvre, Charles A; Stojdl, David F
2018-04-03
High-throughput genome-wide RNAi (RNA interference) screening technology has been widely used for discovering host factors that impact virus replication. Here we present the application of this technology to uncovering host targets that specifically modulate the replication of Maraba virus, an oncolytic rhabdovirus, and vaccinia virus with the goal of enhancing therapy. While the protocol has been tested for use with oncolytic Maraba virus and oncolytic vaccinia virus, this approach is applicable to other oncolytic viruses and can also be utilized for identifying host targets that modulate virus replication in mammalian cells in general. This protocol describes the development and validation of an assay for high-throughput RNAi screening in mammalian cells, the key considerations and preparation steps important for conducting a primary high-throughput RNAi screen, and a step-by-step guide for conducting a primary high-throughput RNAi screen; in addition, it broadly outlines the methods for conducting secondary screen validation and tertiary validation studies. The benefit of high-throughput RNAi screening is that it allows one to catalogue, in an extensive and unbiased fashion, host factors that modulate any aspect of virus replication for which one can develop an in vitro assay such as infectivity, burst size, and cytotoxicity. It has the power to uncover biotherapeutic targets unforeseen based on current knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Feroz Hassan; Vannoni, Michael Geoffrey; Rajen, Gaurav
India and Pakistan have created sizeable ballistic missile forces and are continuing to develop and enlarge them. These forces can be both stabilizing (e.g., providing a survivable force for deterrence) and destabilizing (e.g., creating strategic asymmetries). Missile forces will be a factor in bilateral relations for the foreseeable future, so restraint is necessary to curtail their destabilizing effects. Such restraint, however, must develop within an atmosphere of low trust. This report presents a set of political and operational options, both unilateral and bilateral, that decrease tensions, help rebuild the bilateral relationship, and prepare the ground for future steps in structural arms control. Significant steps, which build on precedents and do not require extensive cooperation, are possible despite strained relations. The approach is made up of three distinct phases: (1) tension reduction measures, (2) confidence building measures, and (3) arms control agreements. The goal of the first phase is to initiate unilateral steps that are substantive and decrease tensions, establish missiles as a security topic for bilateral discussion, and set precedents for limited bilateral cooperation. The second phase would build confidence by expanding current bilateral security agreements, formalizing bilateral understandings, and beginning discussion of monitoring procedures. The third phase could include bilateral agreements limiting some characteristics of national missile forces, including the cooperative incorporation of monitoring and verification.
Evaluating imbalances of adverse events during biosimilar development
Vana, Alicia M.; Freyman, Amy W.; Reich, Steven D.; Yin, Donghua; Li, Ruifeng; Anderson, Scott; Jacobs, Ira A.; Zacharchuk, Charles M.; Ewesuedo, Reginald
2016-01-01
Biosimilars are designed to be highly similar to approved or licensed (reference) biologics and are evaluated based on the totality of evidence from extensive analytical, nonclinical and clinical studies. As part of the stepwise approach recommended by regulatory agencies, the first step in the clinical evaluation of biosimilarity is to conduct a pharmacokinetics similarity study in which the potential biosimilar is compared with the reference product. In the context of biosimilar development, a pharmacokinetics similarity study is not necessarily designed for a comparative assessment of safety. Development of PF-05280014, a potential biosimilar to trastuzumab, illustrates how a numerical imbalance in an adverse event in a small pharmacokinetics study can raise questions on safety that may require additional clinical trials. PMID:27050730
Minami, Atsushi; Oguri, Hiroki; Watanabe, Kenji; Oikawa, Hideaki
2013-08-01
The diversity of natural polycyclic polyethers originates from a very simple yet versatile strategy consisting of epoxidation of a linear polyene followed by an epoxide-opening cascade. To understand these two-step enzymatic transformations at the molecular level, a flavin-containing monooxygenase (EPX), Lsd18, and an epoxide hydrolase (EH), Lsd19, were selected as model enzymes for extensive investigation of substrate specificity, catalytic mechanism, cofactor requirement and crystal structure. This pioneering study of the prototypical lasalocid EPX and EH provides insight into the detailed mechanism of the ionophore polyether assembly machinery and clarifies remaining issues in polyether biosynthesis. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Reinaerts, E.; De Nooijer, J.; De Vries, N. K.
2008-01-01
Purpose: The purpose of this paper is to show how the intervention mapping (IM) protocol could be applied to the development of two school-based interventions. It provides an extensive description of the development, implementation and evaluation of two interventions which aimed to increase fruit and vegetable (F&V) consumption among primary…
Coupling Damage-Sensing Particles to the Digital Twin Concept
NASA Technical Reports Server (NTRS)
Hochhalter, Jacob; Leser, William P.; Newman, John A.; Gupta, Vipul K.; Yamakov, Vesselin; Cornell, Stephen R.; Willard, Scott A.; Heber, Gerd
2014-01-01
The research presented herein is a first step toward integrating two emerging structural health management paradigms: digital twin and sensory materials. Digital twin is an emerging life management and certification paradigm whereby models and simulations consist of as-built vehicle state, as-experienced loads and environments, and other vehicle-specific history to enable high-fidelity modeling of individual aerospace vehicles throughout their service lives. The digital twin concept spans many disciplines, and an extensive study of the full domain is beyond the scope of this work. Therefore, as it pertains to the digital twin, this research focused on one major concept: modeling specifically the as-manufactured geometry of a component and its microstructure (to the degree possible). The second aspect of this research was to develop the concept of sensory materials such that they can be employed within the digital twin framework. Sensory materials are shape-memory alloys that undergo an audible phase transformation while experiencing sufficient strain. When sensory materials are embedded within a structural alloy, this audible transformation helps improve the reliability of crack detection, especially at the early stages of crack growth. By combining these two early-stage technologies, an automated approach to evidence-based inspection and maintenance of aerospace vehicles is sought.
Gold Nanoparticles-enabled Efficient Dual Delivery of Anticancer Therapeutics to HeLa Cells
Farooq, Muhammad U.; Novosad, Valentyn; Rozhkova, Elena A.; ...
2018-02-13
Colloidal gold nanoparticles (AuNPs) are of interest as non-toxic carriers for drug delivery owing to their advantageous properties, such as a large surface-to-volume ratio and possibilities for tailoring their charge, hydrophilicity and functionality through surface chemistries. To date, various biocompatible polymers have been used for surface decoration of AuNPs to enhance their stability, payload capacity and cellular uptake. This study describes a facile one-step method to synthesize stable AuNPs loaded with a combination of two anticancer therapeutics, bleomycin and doxorubicin. Anticancer activities, cytotoxicity, uptake and intracellular localization of the AuNPs were demonstrated in HeLa cells. We show that the therapeutic efficacy of the nanohybrid drug was strongly enhanced by active targeting of the nanoscale delivery system to HeLa cells, with a significant decrease of the half-maximal effective drug concentration through blockage of the HeLa cancer cell cycle. These results provide a rationale for further development of AuNPs-assisted combination chemotherapy using two drugs at optimized effective concentrations that act via different mechanisms, thereby decreasing the likelihood of cancer drug resistance, reducing systemic drug toxicity and improving chemotherapy outcomes.
Enzyme-Free Replication with Two or Four Bases.
Richert, Clemens; Hänle, Elena
2018-05-20
All known forms of life encode their genetic information in a sequence of bases of a genetic polymer and produce copies of their genes via semiconservative replication. How this process started before polymerase enzymes had been evolved is unclear. Enzyme-free copying of short stretches of DNA or RNA sequence has been demonstrated, using activated nucleotides, but not replication. We have developed a methodology for replication. It involves extension with reversible termination, enzyme-free ligation, and strand capture and allowed us to monitor nucleotide incorporation for an entire helical turn of DNA, both during a first and a second round of copying. When tracking replication mass spectrometrically, we found that with all four bases (A/C/G/T) an 'error catastrophe' occurs, with the correct sequence being 'overwhelmed' by incorrect ones. When only C and G were used, approx. half of all daughter strands had the mass of the correct sequence after 20 nonenzymatic copying steps. We conclude that enzyme-free replication is more likely to be successful with the two strongly pairing bases, rather than all four bases of the genetic alphabet. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Andringa, Tjeerd C.; van den Bosch, Kirsten A.; Vlaskamp, Carla
2013-01-01
In this paper we connect open-ended development, authority, agency, and motivation through (1) an analysis of the demands of existing in a complex world and (2) environmental appraisal in terms of affordance content and the complexity to select appropriate behavior. We do this by identifying a coherent core from a wide range of contributing fields. Open-ended development is a structured three-step process in which the agent first learns to master the body and then aims to make the mind into a reliable tool. Preconditioned on success in step two, step three aims to effectively co-create an optimal living environment. We argue that these steps correspond to right-left-right hemispheric dominance, where the left hemisphere specializes in control and the right hemisphere in exploration. Control (e.g., problem solving) requires a closed and stable world that must be maintained by external authorities or, in step three, by the right hemisphere acting as internal authority. The three-step progression therefore corresponds to increasing autonomy and agency. Depending on how we appraise the environment, we formulate four qualitatively different motivational states: submission, control, exploration, and consolidation. Each of these four motivational states has associated reward signals of which the last three—successful control, discovery of novelty, and establishing new relations—form an open-ended development loop that, the more it is executed, helps the agent to become progressively more agentic and more able to co-create a pleasant-to-live-in world. We conclude that for autonomy to arise, the agent must exist in a (broad) transition region between order and disorder in which both danger and opportunity (and with that open-ended development and motivation) are defined. We conclude that a research agenda for artificial cognitive system research should include open-ended development through intrinsic motivations and ascribing more prominence to right hemispheric strengths. PMID:24155734
NASA Astrophysics Data System (ADS)
Shupla, Christine; Gladney, Alicia; Dalton, Heather; LaConte, Keliann; Truxillo, Jeannette; Shipp, Stephanie
2015-11-01
The Sustainable Trainer Engagement Program (STEP) is a modified train-the-trainer professional development program being conducted by the Lunar and Planetary Institute (LPI). STEP has provided two cohorts of 6-8th grade science specialists and lead teachers in the Houston region with in-depth Earth and Space Science (ESS) content, activities, and pedagogy over 15 days each, aligned with Texas science standards. This project has two over-arching goals: to improve middle school ESS instruction, and to create and test an innovative model for Train-the-Trainer. This poster will share details regarding STEP’s activities and resources, program achievements, and its main findings to date. STEP is being evaluated by external evaluators at the Research Institute of Texas, part of the Harris County Department of Education. External evaluation shows an increase after one year in STEP participants’ knowledge (cohort 1 showed a 10% increase; cohort 2 showed a 20% increase), confidence in teaching Earth and Space Science effectively (cohort 1 demonstrated a 10% increase; cohort 2 showed a 20% increase), and confidence in preparing other teachers (cohort 1 demonstrated a 12% increase; cohort 2 showed a 20% increase). By September 2015, STEP participants led (or assisted in leading) approximately 40 workshops for about 1800 science teachers in Texas. Surveys of teachers attending professional development conducted by STEP participants show very positive responses, with conference workshop evaluations averaging 3.6 or higher on a 4-point scale, and other evaluations averaging from 4.1 to 5.0 on a 5-point scale. Main lessons for the team on the train-the-trainer model include: a lack of confidence by leaders in K-12 science education in presenting ESS professional development, difficulties in arranging for school or district content-specific professional development, the minimal duration of most school and district professional development sessions, and uncertainties in partnerships between scientists and educators.
Analytical condition inspection and extension of time between overhaul of F3-30 engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakao, M.; Ikeyama, M.; Abe, S.
1992-04-01
The F3-30 is a low-bypass-ratio turbofan engine developed to power the T-4 intermediate trainer for the Japan Air Self Defense Force (JASDF). Field service began in September 1988. This paper reports on the ongoing program to extend the time between overhaul (TBO) of the F3-30. Analytical condition inspection (ACI) and accelerated mission testing (AMT) were conducted to confirm sufficient durability to extend the TBO. Most deterioration of parts and performance caused by AMT was also found by ACI after field operation, at approximately the same deterioration rate. Some deterioration, however, was found only by ACI. These results show that ACI after field operation is also necessary to confirm the TBO extension, even though AMT simulates the deterioration seen in field operation very well. The deterioration that would be caused by field operation over one extended TBO interval was estimated from the ACI and AMT results, and it was concluded that the F3-30 has sufficient durability for TBO extension to the next step.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2012-01-01
Among the key recommendations of a recent WCRP Workshop on Drought Predictability and Prediction in a Changing Climate is the development of an experimental global drought information system (GDIS). The timeliness of such an effort is evidenced by the wide array of relevant ongoing national and international (as well as regional and continental scale) efforts to provide drought information, including the US and North American drought monitors, and various integrating activities such as GEO and the Global Drought Portal. The workshop will review current capabilities and needs, and focus on the steps necessary to develop a GDIS that will build upon the extensive worldwide investments that have already been made in developing drought monitoring (including new space-based observations), drought risk management, and climate prediction capabilities.
NASA Astrophysics Data System (ADS)
Obarski, Kelly Josephine
Each year, hundreds of graduate and undergraduate students participate as Fellows in National Science Foundation GK-12 Grants throughout the U.S. These Fellowships create opportunities for university students to improve their communication skills, teaching proficiencies, and team-building skills, in addition to expanding their interest in educational endeavors in their respective communities while pursuing their college degrees. STEP (Science and Technology Enhancement Project) is one such project. University faculty, public school teachers, and community leaders collaborated to bring scientists into middle and secondary classrooms to focus on increasing student interest and proficiency in science, technology, engineering, and mathematics (STEM) skills. Seventeen Fellows, over the previous four years, designed, developed, and implemented innovative, hands-on lessons in seven local schools. The evaluation team collected a tremendous amount of research evidence focused on the effect of the program on the Fellows while they were participants in the study, but very little data has been collected about the Fellows after leaving the program. This research study, consisting of two-hour interviews, qualitatively explores how the skills learned while participating in the STEP program affected the Fellows' career and educational choices after leaving the project. These data were analyzed along with historical attitude surveys and yearly tracking documents to determine the effect that participation in the program had on their choices post-STEP. An extensive literature review was conducted focusing on other GK-12 programs throughout the country, K-16 collaboration, Preparing Future Faculty Programs, as well as on the teaching and learning literature. These bodies of literature provide the theoretical basis in which the research is framed in order to assess the impact on Fellows' educational and professional choices since leaving the STEP program. This research project sheds new light on how participation in a GK-12 Fellowship impacts career and educational choices after a Fellow leaves the program.
Interactive real time flow simulations
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1990-01-01
An interactive real-time flow simulation technique is developed for unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time-stepping scheme is developed for the two-dimensional Euler equations. A global time step is used to accelerate convergence of steady-state calculations. A raster image generation routine is developed for high-speed image transmission, allowing the user to interact directly with the developing solution. In addition to theory and results, the hardware and software requirements are discussed.
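As a hedged stand-in for the algorithm summarized above, the sketch below advances a one-dimensional finite-volume discretization of linear advection with a classical four-stage Runge-Kutta scheme. The 1D model problem, upwind flux, and CFL number are illustrative assumptions; the paper itself treats the two-dimensional Euler equations.

```python
import numpy as np

def residual(u, dx, a=1.0):
    """Finite-volume residual for 1D linear advection u_t + a u_x = 0
    with first-order upwind fluxes and periodic boundaries."""
    flux = a * u                          # upwind flux for a > 0
    return -(flux - np.roll(flux, 1)) / dx

def rk4_step(u, dt, dx):
    """Classical four-stage Runge-Kutta update of the semi-discrete system."""
    k1 = residual(u, dx)
    k2 = residual(u + 0.5 * dt * k1, dx)
    k3 = residual(u + 0.5 * dt * k2, dx)
    k4 = residual(u + dt * k3, dx)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

if __name__ == "__main__":
    n = 200
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial pulse
    dt = 0.4 * dx                          # CFL-limited time step
    for _ in range(int(0.5 / dt)):         # advect until t ~= 0.5
        u = rk4_step(u, dt, dx)
    print("pulse peak now near x =", x[np.argmax(u)])
```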
Sorting by Cuts, Joins, and Whole Chromosome Duplications.
Zeira, Ron; Shamir, Ron
2017-02-01
Genome rearrangement problems have been extensively studied due to their importance in biology. Most studied models assumed a single copy per gene. However, in reality, duplicated genes are common, most notably in cancer. In this study, we make a step toward handling duplicated genes by considering a model that allows the atomic operations of cut, join, and whole chromosome duplication. Given two linear genomes, one with a single copy per gene and the other with two copies per gene, we give a linear-time algorithm for computing a shortest sequence of operations transforming the former into the latter such that all intermediate genomes are linear. We also show that computing an optimal sequence with fewest duplications is NP-hard.
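As a toy illustration of the atomic operations named above, the sketch below encodes a linear chromosome as a set of adjacencies between gene extremities and applies cut, join, and whole-chromosome duplication. The representation follows the common single-cut-or-join style and may differ in detail from the paper's formal model; all helper names are hypothetical.

```python
# Toy model: gene "g" has a tail extremity ("g", "t") and a head ("g", "h");
# a linear chromosome is encoded by the set of adjacencies between consecutive
# gene extremities. A third tuple element distinguishes duplicate copies.

def chromosome_adjacencies(genes, copy=1):
    """Adjacencies of a linear chromosome given an ordered list of signed genes."""
    ext = []
    for g in genes:
        name, sign = g.lstrip("+-"), -1 if g.startswith("-") else 1
        pair = [(name, "t", copy), (name, "h", copy)]
        ext.extend(pair if sign > 0 else pair[::-1])
    # adjacency between the right extremity of one gene and the left of the next
    return {frozenset((ext[i], ext[i + 1])) for i in range(1, len(ext) - 1, 2)}

def cut(adjs, adjacency):
    """Cut: remove one adjacency, creating two free extremities (telomeres)."""
    return adjs - {adjacency}

def join(adjs, ext1, ext2):
    """Join: create an adjacency between two free extremities."""
    return adjs | {frozenset((ext1, ext2))}

def duplicate(adjs, new_copy=2):
    """Whole-chromosome duplication: replicate every adjacency with new copy labels."""
    relabel = lambda e: (e[0], e[1], new_copy)
    return adjs | {frozenset(map(relabel, a)) for a in adjs}

if __name__ == "__main__":
    A = chromosome_adjacencies(["+a", "+b", "-c"])  # single-copy genome
    B = duplicate(A)                                # two copies of every gene
    B = cut(B, next(iter(B)))                       # one cut ...
    B = join(B, ("a", "h", 1), ("c", "t", 2))       # ... and one (arbitrary) join
    print(len(A), len(B))
```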
Supersymmetric extensions of K field theories
NASA Astrophysics Data System (ADS)
Adam, C.; Queiruga, J. M.; Sanchez-Guillen, J.; Wereszczynski, A.
2012-02-01
We review the recently developed supersymmetric extensions of field theories with non-standard kinetic terms (so-called K field theories) in two and three dimensions. Further, we study the issue of topological defect formation in these supersymmetric theories. Specifically, we find supersymmetric K field theories which support topological kinks in 1+1 dimensions as well as supersymmetric extensions of the baby Skyrme model for arbitrary nonnegative potentials in 2+1 dimensions.
fMRI capture of auditory hallucinations: Validation of the two-steps method.
Leroy, Arnaud; Foucher, Jack R; Pins, Delphine; Delmaire, Christine; Thomas, Pierre; Roser, Mathilde M; Lefebvre, Stéphanie; Amad, Ali; Fovet, Thomas; Jaafari, Nemat; Jardri, Renaud
2017-10-01
Our purpose was to validate a reliable method to capture brain activity concomitant with hallucinatory events, which constitute frequent and disabling experiences in schizophrenia. Capturing hallucinations using functional magnetic resonance imaging (fMRI) remains very challenging. We previously developed a method based on a two-steps strategy including (1) multivariate data-driven analysis of per-hallucinatory fMRI recording and (2) selection of the components of interest based on a post-fMRI interview. However, two tests still need to be conducted to rule out critical pitfalls of conventional fMRI capture methods before this two-steps strategy can be adopted in hallucination research: replication of these findings on an independent sample and assessment of the reliability of the hallucination-related patterns at the subject level. To do so, we recruited a sample of 45 schizophrenia patients suffering from frequent hallucinations, 20 schizophrenia patients without hallucinations and 20 matched healthy volunteers; all participants underwent four different experiments. The main findings are (1) high accuracy in reporting unexpected sensory stimuli in an MRI setting; (2) good detection concordance between hypothesis-driven and data-driven analysis methods (as used in the two-steps strategy) when controlled unexpected sensory stimuli are presented; (3) good agreement of the two-steps method with the online button-press approach to capture hallucinatory events; (4) high spatial consistency of hallucinatory-related networks detected using the two-steps method on two independent samples. By validating the two-steps method, we advance toward the possible transfer of such technology to new image-based therapies for hallucinations. Hum Brain Mapp 38:4966-4979, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
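The abstract does not spell out the data-driven step; as a hedged illustration, the sketch below assumes an ICA decomposition of the fMRI time series (step 1) followed by ranking of components against a regressor built from the post-scan interview (step 2). The toy data, the choice of FastICA, and the correlation-based selection rule are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical preprocessed fMRI data: time points x voxels (toy sizes).
rng = np.random.default_rng(0)
n_time, n_voxels, n_components = 240, 5000, 20
data = rng.standard_normal((n_time, n_voxels))

# Step 1: data-driven decomposition (temporal ICA used here as an assumed stand-in).
ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
time_courses = ica.fit_transform(data)          # shape (n_time, n_components)

# Step 2: rank components against a regressor built from the post-scan interview,
# e.g. a boxcar marking periods in which hallucinations were reported.
hallucination_boxcar = np.zeros(n_time)
hallucination_boxcar[60:90] = 1.0
hallucination_boxcar[150:170] = 1.0

scores = [abs(np.corrcoef(time_courses[:, k], hallucination_boxcar)[0, 1])
          for k in range(n_components)]
best = int(np.argmax(scores))
print(f"component {best} best matches reported periods (|r|={scores[best]:.2f})")
```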
The first steps towards a de minimus, affordable NEA exploration architecture
NASA Astrophysics Data System (ADS)
Landis, Rob R.; Abell, Paul A.; Adamo, Daniel R.; Barbee, Brent W.; Johnson, Lindley N.
2013-03-01
The impetus for asteroid exploration is scientific, political, and pragmatic. The notion of sending human explorers to asteroids is not new. Piloted missions to these primitive bodies were first discussed in the 1960s, pairing Saturn V rockets with enhanced Apollo spacecraft to explore what were then called "Earth-approaching asteroids." Two decades ago, NASA's Space Exploration Initiative (SEI) also briefly examined the possibility of visiting these small celestial bodies. Most recently, the US Human Space Flight Review Committee (the second Augustine Commission) suggested that near-Earth objects (NEOs) represent a target-rich environment for exploration via the "Flexible Path" option. However, before seriously considering human missions to NEOs, it has become clear that we currently lack a robust catalog of human-accessible targets. The majority of the known NEOs identified by a study team across several NASA centers as "human-accessible" are probably too small and have orbits that are too uncertain to consider mounting piloted expeditions to these small worlds. The first step in developing a comprehensive catalog is, therefore, to complete a space-based NEO survey. The resulting catalog of candidate NEOs would then be transformed into a matrix of opportunities for robotic and human missions for the next several decades and shared with the international community. This initial step of a space-based NEO survey is therefore the linchpin to laying the foundation of a low-risk architecture to venture out and explore these primitive bodies. We suggest building such a minimalist framework architecture from (1) extensive ground-based and precursor spacecraft investigations (while applying operational knowledge from science-driven robotic missions), (2) astronaut servicing of spacecraft operating at geosynchronous Earth orbit to retain essential skills and experience, and (3) applying the sum of these skills, knowledge and experience to piloted missions to NEOs.
Direct numerical simulations of magmatic differentiation at the microscopic scale
NASA Astrophysics Data System (ADS)
Sethian, J.; Suckale, J.; Elkins-Tanton, L. T.
2010-12-01
A key question in the context of magmatic differentiation and fractional crystallization is the ability of crystals to decouple from the ambient fluid and sink or rise. Field data indicate a complex spectrum of behavior ranging from rapid sedimentation to continued entrainment. Theoretical and laboratory studies paint a similarly rich picture. The goal of this study is to provide a detailed numerical assessment of the competing effects of sedimentation and entrainment at the scale of individual crystals. The decision to simulate magmatic differentiation at the grain scale comes at the price of not being able to simultaneously solve for the convective velocity field at the macroscopic scale, but it has the crucial advantage of enabling us to fully resolve the dynamics of the system from first principles without requiring any simplifying assumptions. The numerical approach used in this study is a customized computational methodology developed specifically for simulations of solid-fluid coupling in geophysical systems. The algorithm relies on a two-step projection scheme: in the first step, we solve the multiphase Navier-Stokes or Stokes equations in both domains. In the second step, we project the velocity field in the solid domain onto a rigid-body motion by enforcing that the deformation tensor in the respective domain is zero. This procedure is also used to enforce the no-slip boundary condition on the solid-fluid interface. We have extensively validated and benchmarked the method. Our preliminary results indicate that, not unexpectedly, the competing effects of sedimentation and entrainment depend sensitively on the size distribution of the crystals, the aspect ratio of individual crystals and the vigor of the ambient flow field. We provide a detailed scaling analysis and quantify our results in terms of the relevant non-dimensional numbers.
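The second projection step can be illustrated in isolation: given a velocity field and a mask marking the solid (crystal) region, project the field inside the mask onto a rigid-body motion (translation plus rotation about the centroid). The 2D setting, unit cell masses, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_rigid_body(x, y, vx, vy, mask):
    """Project the velocity field inside a solid region (mask=True) onto a
    2D rigid-body motion v = V + omega x r about the region's centroid.
    Assumes uniform (unit) mass per cell, purely for illustration."""
    xs, ys = x[mask], y[mask]
    us, vs = vx[mask], vy[mask]

    # Centroid and mean (translational) velocity of the solid region.
    xc, yc = xs.mean(), ys.mean()
    Vx, Vy = us.mean(), vs.mean()

    # Angular velocity from angular momentum / moment of inertia about the centroid.
    rx, ry = xs - xc, ys - yc
    omega = np.sum(rx * (vs - Vy) - ry * (us - Vx)) / np.sum(rx**2 + ry**2)

    # Replace the velocity in the solid region by the rigid-body field.
    vx_new, vy_new = vx.copy(), vy.copy()
    vx_new[mask] = Vx - omega * ry
    vy_new[mask] = Vy + omega * rx
    return vx_new, vy_new

if __name__ == "__main__":
    n = 64
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    mask = (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 ** 2   # a circular "crystal"
    rng = np.random.default_rng(1)
    vx, vy = rng.standard_normal((2, n, n)) * 0.01      # noisy ambient flow
    vx, vy = project_to_rigid_body(x, y, vx, vy, mask)
```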
Hardarson, Thorir; Bungum, Mona; Conaghan, Joe; Meintjes, Marius; Chantilis, Samuel J; Molnar, Laszlo; Gunnarsson, Kristina; Wikland, Matts
2015-12-01
To study whether a culture medium that allows undisturbed culture supports human embryo development to the blastocyst stage equivalently to a well-established sequential media. Randomized, double-blinded sibling trial. Independent in vitro fertilization (IVF) clinics. One hundred twenty-eight patients, with 1,356 zygotes randomized into two study arms. Embryos randomly allocated into two study arms to compare embryo development on a time-lapse system using a single-step medium or sequential media. Percentage of good-quality blastocysts on day 5. Percentage of day 5 good-quality blastocysts was 21.1% (standard deviation [SD] ± 21.6%) and 22.2% (SD ± 22.1%) in the single-step time-lapse medium (G-TL) and the sequential media (G-1/G-2) groups, respectively. The mean difference (-1.2; 95% CI, -6.0; 3.6) between the two media systems for the primary end point was less than the noninferiority margin of -8%. There was a statistically significantly lower number of good-quality embryos on day 3 in the G-TL group [50.7% (SD ± 30.6%) vs. 60.8% (SD ± 30.7%)]. Four out of the 11 measured morphokinetic parameters were statistically significantly different for the two media used. The mean levels of ammonium concentration in the media at the end of the culture period was statistically significantly lower in the G-TL group as compared with the G-2 group. We have shown that a single-step culture medium supports blastocyst development equivalently to established sequential media. The ammonium concentrations were lower in the single-step media, and the measured morphokinetic parameters were modified somewhat. NCT01939626. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Rindskopf, David
2012-01-01
Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…
Development of a Production Ready Automated Wire Delivery System
NASA Technical Reports Server (NTRS)
1997-01-01
The current development effort is a Phase 3 research study entitled "A Production Ready Automated Wire Delivery System", contract number NAS8-39933, awarded to Nichols Research Corporation (NRC). The goals of this research study were to production-harden the existing Automated Wire Delivery System (AWDS) motion and sensor hardware and to test the modified AWDS in a range of welding applications. In addition, the prototype AWDS controller would be moved to the VME bus platform by designing, fabricating and testing a single-board VME bus AWDS controller. This effort was to provide an AWDS that could transition from the laboratory environment to production operations. The project was performed in three steps. Step 1 modified and tested an improved MWG. Step 2 developed and tested the AWDS single-board VME bus controller. Step 3 installed the Wire Pilot in a Weld Controller with the embedded VME bus controller.
A Polarimetric Extension of the van Cittert-Zernike Theorem for Use with Microwave Interferometers
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Simon, N. K.
2004-01-01
The van Cittert-Zernike theorem describes the Fourier-transform relationship between an extended source and its visibility function. Developments in classical optics texts use scalar field formulations for the theorem. Here, we develop a polarimetric extension to the van Cittert-Zernike theorem with applications to passive microwave Earth remote sensing. The development provides insight into the mechanics of two-dimensional interferometric imaging, particularly the effects of polarization basis differences between the scene and the observer.
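For orientation, a common statement of the scalar theorem (in generic small-angle notation, assuming a spatially incoherent, narrowband, far-field source) and the idea of its polarimetric extension are sketched below; the notation is generic and not that of the paper.

```latex
% Scalar van Cittert-Zernike relation: the visibility V(u,v) measured for a
% baseline (u,v) is the Fourier transform of the source intensity I(l,m).
V(u,v) \;=\; \iint I(l,m)\, e^{-i 2\pi (u l + v m)}\, dl\, dm .
% A polarimetric extension replaces the scalar intensity by the coherency of
% the two field components,
%   I(l,m) \;\to\; \langle E_p(l,m)\, E_q^{*}(l,m) \rangle, \qquad p,q \in \{h,v\},
% giving one visibility function per correlation product (or Stokes parameter).
```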
Advanced Monobore Concept, Development of CFEX Self-Expanding Tubular Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Spray
2007-09-30
The Advanced Monobore Concept--CFEX© Self-Expanding Tubular Technology Development was a successfully executed project spanning fundamental research through field demonstration. This final report is presented as a progression through the basic technology development steps. For this project, the research and development steps used were: concept development, engineering analysis, manufacturing, testing, demonstration, and technology transfer. The CFEX© Technology Development--Advanced Monobore Concept Project successfully completed all of these steps, covering fundamental research, conceptual development, engineering design, advanced-level prototype construction, mechanical testing, and downhole demonstration. Within approximately two years, a partially defined, broad concept evolved into a substantial new technological area for drilling and production engineering applicable to a variety of extractive industries--and was successfully demonstrated in a test well. The demonstration included an actual mono-diameter placement of two self-expanding tubulars. The fundamental result is that an economical and technically proficient means of casing drilling or production wells or boreholes of any size is indicated as feasible based on the project's results. Major accomplishments during the project's Concept, Engineering, Manufacturing, Demonstration, and Technology Transfer phases are highlighted.
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states that assumes all correlations within the register are discord-free (i.e., the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1, with an emphasis on the exactness of simulation, which is crucial for this model. We include a detailed analysis of the arithmetic complexity of solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
Grant, Alison D; Coetzee, Leonie; Fielding, Katherine L; Lewis, James J; Ntshele, Smanga; Luttig, Mariëtha M; Mngadi, Kathryn T; Muller, Dorothy; Popane, Flora; Mdluli, John; Mngadi, Nkosinathi; Sefuthi, Clement; Clark, David A; Churchyard, Gavin
2010-11-01
To describe a programme of community education and mobilization to promote uptake in a cluster-randomized trial of tuberculosis preventive therapy offered to all members of intervention clusters. Gold mines in South Africa, where tuberculosis incidence is extremely high, despite conventional control measures. All employees in intervention clusters (mine shaft and associated hostel) were invited to enrol. Cumulative enrolment in the study in intervention clusters. Key steps in communicating information relevant to the study included extensive consultation with key stakeholders; working with a communication company to develop a project 'brand'; developing a communication strategy tailored to each intervention site; and involving actors from a popular television comedy series to help inform communities about the study. One-to-one communications used peer educators along with study staff, and participant advisory groups facilitated two-way communication between study staff and participants. By contrast, treatment 'buddies' and text messaging to promote adherence proved less successful. Mean cumulative enrolment in the first four intervention clusters was 61.9%, increasing to 83.0% in the final four clusters. A tailored communication strategy can facilitate a high level of enrolment in a community health intervention.
Munasinghe, M. Nalaka; Stephen, Craig; Abeynayake, Preeni; Abeygunawardena, Indra S.
2010-01-01
Shrimp farming has great potential to diversify and secure income in rural Sri Lanka, but production has significantly declined in recent years due to civil conflicts, some unsustainable practices and devastating outbreaks of disease. We examined management practices affecting disease prevention and control in the Puttalam district to identify extension service outputs that could support sustainable development of Sri Lankan shrimp farming. A survey of 621 shrimp farms (603 operational and 18 nonoperational) was conducted within the Puttalam district over 42 weeks comprising a series of three-day field visits from August 2008 to October 2009, covering two consecutive shrimp crops. Fundamental deficits in disease control, management, and biosecurity practices were found. Farmers had knowledge of biosecurity but the lack of financial resources was a major impediment to improved disease control. Smallholder farmers were disproportionately constrained in their ability to enact basic biosecurity practices due to their economic status. Basic breaches in biosecurity will keep disease as the rate limiting step in this industry. Plans to support this industry must recognize the socioeconomic reality of rural Sri Lankan aquaculture. PMID:20847956
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to the coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach on cogeneration for buildings, where the coarsened models allow us to obtain good approximate solutions in a fraction of the time required to solve the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
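As a hedged illustration of the constraint-aggregation ingredient, the sketch below collapses groups of linear constraints A x <= b into single surrogate rows by convex combination. The partition and weights are arbitrary illustrative choices; the profile-based variable coarsening and the iterative addition of violated constraints are not reproduced.

```python
import numpy as np

def aggregate_constraints(A, b, groups, weights=None):
    """Aggregate rows of A x <= b by convex combination within each group.

    groups  : list of lists of row indices partitioning the constraints.
    weights : optional list of weight vectors (nonnegative, summing to 1);
              uniform weights are used if omitted.
    Returns the coarse (A_c, b_c) with one surrogate row per group.
    """
    A_rows, b_rows = [], []
    for g, grp in enumerate(groups):
        w = (np.ones(len(grp)) / len(grp)) if weights is None else np.asarray(weights[g])
        assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "not a convex combination"
        A_rows.append(w @ A[grp, :])
        b_rows.append(w @ b[grp])
    return np.vstack(A_rows), np.array(b_rows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 5))
    b = rng.standard_normal(8)
    groups = [[0, 1, 2], [3, 4], [5, 6, 7]]      # illustrative partition
    A_c, b_c = aggregate_constraints(A, b, groups)
    print(A_c.shape, b_c.shape)                   # (3, 5) (3,)
```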
Step-by-step growth of complex oxide microstructures
Datskos, Panos G.; Cullen, David A.; Sharma, Jaswinder K.
2015-06-10
The synthesis of complex and hybrid oxide microstructures is of both fundamental interest and practical importance. However, the design and synthesis of such structures is a challenging task. A solution-phase process to synthesize complex silica and silica-titania hybrid microstructures was developed by exploiting emulsion-droplet-based step-by-step growth with shape control. The strategy is robust and can be extended to the preparation of complex hybrid structures consisting of two or more materials, each with its own shape.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This manual is a "how to" training device for developing inventory records in the AppleWorks program using an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 17 figures depicting the computer screen at the various stages of the inventory…
If we build it, will they come? Curation and use of the ESO telescope bibliography
NASA Astrophysics Data System (ADS)
Grothkopf, Uta; Meakins, Silvia; Bordelon, Dominic
2015-12-01
The ESO Telescope Bibliography (telbib) is a database of refereed papers published by the ESO users community. It links data in the ESO Science Archive with the published literature, and vice versa. Developed and maintained by the ESO library, telbib also provides insights into the organization's research output and impact as measured through bibliometric studies. Curating telbib is a multi-step process that involves extensive tagging of the database records. Based on selected use cases, this talk will explain how the rich metadata provide parameters for reports and statistics in order to investigate the performance of ESO's facilities and to understand trends and developments in the publishing behaviour of the user community.
Paralysis recovery in humans and model systems
NASA Technical Reports Server (NTRS)
Edgerton, V. Reggie; Roy, Roland R.
2002-01-01
Considerable evidence now demonstrates that extensive functional and anatomical reorganization following spinal cord injury occurs in centers of the brain that have some input into spinal motor pools. This is very encouraging, given the accumulating evidence that new connections formed across spinal lesions may not be initially functionally useful. The second area of advancement in the field of paralysis recovery is in the development of effective interventions to counter axonal growth inhibition. A third area of significant progress is the development of robotic devices to quantify the performance level of motor tasks following spinal cord injury and to 'teach' the spinal cord to step and stand. Advances are being made with robotic devices for mice, rats and humans.