Bhateria, Manisha; Rachumallu, Ramakrishna; Singh, Rajbir; Bhatta, Rabi Sankar
2014-08-01
Erythrocytes (red blood cells [RBCs]) and artificial or synthetic delivery systems such as liposomes and nanoparticles (NPs) are the most investigated carrier systems. Herein, progress made from the conventional approach of using RBCs as delivery systems to the novel approach of designing synthetic delivery systems based on RBC properties is reviewed. We aim to highlight both conventional and novel approaches to using RBCs as a potential carrier system. Conventional approaches comprise two main strategies: i) directly loading therapeutic moieties into RBCs; and ii) coupling them with RBCs, whereas novel approaches exploit the structural, mechanical and biological properties of RBCs to design synthetic delivery systems through various engineering strategies. Initial attempts included coupling antibodies to liposomes to specifically target RBCs. Knowledge obtained from several studies led to the development of RBC membrane-derived liposomes (nanoerythrosomes), inspiring future application of RBCs or their structural features in other attractive delivery systems (hydrogels, filomicelles, microcapsules, micro- and NPs) for even greater potential. In conclusion, this review dwells upon a comparative analysis of various conventional and novel engineering strategies for developing RBC-based drug delivery systems, diversifying their applications in the arena of drug delivery. Regardless of the challenges in front of us, RBC-based delivery systems offer an exciting approach to exploiting biological entities in a multitude of medical applications.
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.
1976-01-01
Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT-based methods for estimating the land-cover-based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT-based approach is highly cost effective for planning model studies. The conventional approach to defining inputs was based on 1:3600 aerial photos and required 110 man-days at a total cost of $14,000. The LANDSAT-based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT-based models gave similar results for discharges and for the estimated annual damages expected under the no-flood-control, channelization, and detention storage alternatives.
Okamoto, Takuma; Sakaguchi, Atsushi
2017-03-01
Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its effectiveness over conventional acoustic energy difference maximization has been demonstrated in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least squares-based pressure matching, using a linear array of 64 loudspeakers implemented in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
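One of the conventional baselines named in this abstract, least squares-based pressure matching, is compact enough to sketch. Below is a minimal illustration rather than the paper's implementation: the array geometry, source model (free-field point sources), frequency, zone layout and Tikhonov regularization weight are all assumptions made for the example.

```python
import numpy as np

# Hypothetical geometry: 64-element linear array, one bright and one dark zone.
c, f = 343.0, 1000.0               # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c              # wavenumber
xs = np.stack([np.linspace(-1.6, 1.6, 64), np.zeros(64)], axis=1)   # sources

def zone(cx, cy, n=25, r=0.15):
    """Grid of control points in a small square zone centred at (cx, cy)."""
    g = np.linspace(-r, r, int(np.sqrt(n)))
    X, Y = np.meshgrid(cx + g, cy + g)
    return np.stack([X.ravel(), Y.ravel()], axis=1)

bright, dark = zone(-0.5, 1.5), zone(0.5, 1.5)

def transfer(points):
    """Assumed free-field point-source transfer matrix (Green's functions)."""
    d = np.linalg.norm(points[:, None, :] - xs[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

Gb, Gd = transfer(bright), transfer(dark)
G = np.vstack([Gb, Gd])
target = np.concatenate([np.ones(len(bright)), np.zeros(len(dark))])

beta = 1e-3                        # Tikhonov regularization (tuning assumption)
q = np.linalg.solve(G.conj().T @ G + beta * np.eye(64), G.conj().T @ target)

contrast = 10 * np.log10(np.mean(np.abs(Gb @ q)**2) / np.mean(np.abs(Gd @ q)**2))
print(f"acoustic contrast: {contrast:.1f} dB")
```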
Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
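The gradient-matching idea at the heart of GPODE/AGM can be sketched on a toy problem. The snippet below is a simplified stand-in, not the paper's Gaussian-process machinery or the Richards equation itself: observations of a hypothetical first-order decay model are smoothed with a spline, the smoother is differentiated, and the parameters are chosen to match the model right-hand side, with no forward solver and no initial or boundary conditions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

# Synthetic observations from a hypothetical decay model
# dtheta/dt = -a * (theta - theta_r)  (illustration only).
rng = np.random.default_rng(0)
a_true, theta_r = 0.35, 0.05
t = np.linspace(0, 10, 60)
theta = theta_r + (0.40 - theta_r) * np.exp(-a_true * t)
obs = theta + rng.normal(0, 0.005, t.size)

# Step 1: smooth the data and differentiate the smoother (no forward solve).
spl = UnivariateSpline(t, obs, s=60 * 0.005**2)
dtheta_dt = spl.derivative()(t)

# Step 2: choose parameters so the model RHS matches the smoothed derivative.
def loss(params):
    a, th_r = params
    return np.sum((dtheta_dt - (-a * (spl(t) - th_r)))**2)

fit = minimize(loss, x0=[0.1, 0.0], method="Nelder-Mead")
print("estimated (a, theta_r):", fit.x)   # should land near (0.35, 0.05)
```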
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive online tutoring systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benmakhlouf, H; Kraepelien, T; Forander, P
2014-06-01
Purpose: Most Gamma Knife treatments are based solely on MR images. However, for fractionated treatments, and to implement TPS dose calculations that require electron densities, CT image data is essential. The purpose of this work is to assess the dosimetric effects of using MR images registered with stereotactic CT images in Gamma Knife treatments. Methods: Twelve patients treated for vestibular schwannoma with Gamma Knife Perfexion (Elekta Instruments, Sweden) were selected for this study. The prescribed doses (12 Gy to the periphery) were delivered based on the conventional approach of using stereotactic MR images only. These plans were imported into stereotactic CT images (by registering MR images with stereotactic CT images using the Leksell GammaPlan registration software). The dose plans, for each patient, are identical in both cases except for potential rotations and translations resulting from the registration. The impact of the registrations was assessed by an algorithm written in Matlab. The algorithm compares the dose distributions voxel-by-voxel between the two plans, calculates the full dose coverage of the target (treated in the conventional approach) achieved by the CT-based plan, and calculates the minimum dose delivered to the target (treated in the conventional approach) achieved by the CT-based plan. Results: The mean dose difference between the plans was 0.2 Gy to 0.4 Gy (max 4.5 Gy), whereas between 89% and 97% of the target (treated in the conventional approach) received the prescribed dose by the CT-based plan. The minimum dose to the target (treated in the conventional approach) given by the CT-based plan was between 7.9 Gy and 10.7 Gy (compared to 12 Gy in the conventional treatment). Conclusion: The impact of using MR images registered with stereotactic CT images has successfully been compared to conventionally delivered dose plans, showing significant differences between the two. Although CT images have been implemented clinically, the effect of the registration has not been fully investigated.
ERIC Educational Resources Information Center
Wilson, Keithia; Fowler, Jane
2005-01-01
This study investigated whether students' approaches to learning were influenced by the design of university courses. Pre- and post-evaluations of the approaches to learning of the same group of students concurrently enrolled in a conventional course (lectures and tutorials) and an action learning-based course (project work, learning groups) were…
A Progressive Approach to Discrete Trial Teaching: Some Current Guidelines
ERIC Educational Resources Information Center
Leaf, Justin B.; Cihon, Joseph H.; Leaf, Ronald; McEachin, John; Taubman, Mitchell
2016-01-01
Discrete trial teaching (DTT) is one of the cornerstones of applied behavior analysis (ABA) based interventions. Conventionally, DTT is commonly implemented within a prescribed, fixed manner in which the therapist is governed by a strict set of rules. In contrast to conventional DTT, a progressive approach to DTT allows the therapist to remain…
A motion-constraint logic for moving-base simulators based on variable filter parameters
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.
1974-01-01
A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited, whereas conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
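The extrapolation idea lends itself to a short sketch. The following is a minimal per-voxel version assuming a simple monoexponential signal model (the paper's extrapolation may use a richer model, e.g. one including kurtosis): a reference at the target b-value is synthesized from a fit to the low-b data, and each high-b volume would then be registered to that synthetic reference rather than to the b=0 volume.

```python
import numpy as np

def extrapolated_reference(signals, bvals, b_target, b_low_max=1000.0):
    """Synthesize a registration reference at a high b-value by extrapolating
    a per-voxel monoexponential fit from low-b data (simplified version of
    the extrapolation idea; the paper's model may differ in detail).

    signals : (n_vols, n_vox) array of magnitude data
    bvals   : (n_vols,) b-values in s/mm^2
    """
    low = bvals <= b_low_max
    A = np.stack([np.ones(low.sum()), -bvals[low]], axis=1)  # ln S = ln S0 - b*D
    y = np.log(np.clip(signals[low], 1e-6, None))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)             # (2, n_vox)
    ln_s0, D = coef
    return np.exp(ln_s0 - b_target * D)    # predicted signal at b_target

# Tiny synthetic check: 4 volumes x 3 voxels with D = 1e-3 mm^2/s
bvals = np.array([0.0, 500.0, 1000.0, 2750.0])
sig = np.exp(-np.outer(bvals, np.full(3, 1e-3)))
print(extrapolated_reference(sig, bvals, b_target=2750.0))   # ~exp(-2.75)
```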
Nanoporous films: From conventional to the conformal
Allendorf, Mark D.; Stavila, Vitalie
2015-12-14
Here, thin and continuous films of porous metal-organic frameworks can now be conformally deposited on various substrates using a vapor-phase synthesis approach that departs from conventional solution-based routes.
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous research overcomes this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly less computational cost.
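The cluster-then-retrieve idea can be sketched as follows. This is a heuristic illustration, not the published CM-CBR algorithm: the abstract does not spell out the merging rule, so the boundary test (merge_ratio) and all data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

k_clusters, k_nn = 20, 5
km = KMeans(n_clusters=k_clusters, n_init=10, random_state=0).fit(X)

def retrieve(target, merge_ratio=1.2):
    """k-NN retrieval inside the nearest cluster, merging in the
    second-nearest cluster when the target sits near a boundary
    (heuristic stand-in for the paper's merging rule)."""
    d = np.linalg.norm(km.cluster_centers_ - target, axis=1)
    order = np.argsort(d)
    members = km.labels_ == order[0]
    if d[order[1]] < merge_ratio * d[order[0]]:      # near a boundary -> merge
        members |= km.labels_ == order[1]
    idx = np.where(members)[0]
    nn = idx[np.argsort(np.linalg.norm(X[idx] - target, axis=1))[:k_nn]]
    return y[nn].mean() >= 0.5                       # majority vote

print(retrieve(X[0]))
```

Restricting distance computations to one or two clusters is what buys the retrieval-time savings over scanning the whole case base.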
Chellemi, D O; Gamliel, A; Katan, J; Subbarao, K V
2016-03-01
Biological suppression of soilborne diseases with minimal use of outside interventions has been difficult to achieve in high-input conventional crop production systems due to the inherent risk of pest resurgence. This review examines previous approaches to the management of soilborne disease as precursors to the evolution of a systems-based approach, in which plant disease suppression through natural biological feedback mechanisms in soil is incorporated into the design and operation of cropping systems. Two case studies are provided as examples in which a systems-based approach is being developed and deployed in the production of high-value crops: lettuce/strawberry production in the coastal valleys of central California (United States) and sweet basil and other herb crop production in Israel. Considerations for developing and deploying systems-based approaches are discussed, and operational frameworks and metrics to guide their development are presented with the goal of offering a credible alternative to conventional approaches to soilborne disease management.
Traditional and New Influenza Vaccines
Wong, Sook-San
2013-01-01
The challenges in successful vaccination against influenza using conventional approaches lie in their variable efficacy in different age populations, the antigenic variability of the circulating virus, and the production and manufacturing limitations to ensure safe, timely, and adequate supply of vaccine. The conventional influenza vaccine platform is based on stimulating immunity against the major neutralizing antibody target, hemagglutinin (HA), by virus attenuation or inactivation. Improvements to this conventional system have focused primarily on improving production and immunogenicity. Cell culture, reverse genetics, and baculovirus expression technology allow for safe and scalable production, while adjuvants, dose variation, and alternate routes of delivery aim to improve vaccine immunogenicity. Fundamentally different approaches that are currently under development hope to signal new generations of influenza vaccines. Such approaches target nonvariable regions of antigenic proteins, with the idea of stimulating cross-protective antibodies and thus creating a "universal" influenza vaccine. While such approaches have obvious benefits, there are many hurdles yet to clear. Here, we discuss the process and challenges of the current influenza vaccine platform as well as new approaches that are being investigated based on the same antigenic target and newer technologies based on different antigenic targets.
Treating refractory obsessive-compulsive disorder: what to do when conventional treatment fails?
Franz, Adelar Pedro; Paim, Mariana; Araújo, Rafael Moreno de; Rosa, Virgínia de Oliveira; Barbosa, Ísis Mendes; Blaya, Carolina; Ferrão, Ygor Arzeno
2013-01-01
Obsessive-compulsive disorder (OCD) is a chronic and impairing condition. A very small percentage of patients become asymptomatic after treatment. The purpose of this paper was to review the alternative therapies available for OCD when conventional treatment fails. Data were extracted from controlled clinical studies (evidence-based medicine) published on the MEDLINE and Science Citation Index/Web of Science databases between 1975 and 2012. Findings are discussed and suggest that clinicians dealing with refractory OCD patients should: 1) review intrinsic phenomenological aspects of OCD, which could lead to different interpretations and treatment choices; 2) review extrinsic phenomenological aspects of OCD, especially family accommodation, which may be a risk factor for non-response; 3) consider non-conventional pharmacological approaches; 4) consider non-conventional psychotherapeutic approaches; and 5) consider neurobiological approaches.
Dubey, Sumit M; Gole, Vitthal L; Gogate, Parag R
2015-03-01
The present work reports the intensification aspects of the synthesis of fatty acid methyl esters (FAME) from a non-edible high acid value Nagchampa oil (31 mg of KOH/g of oil) using two-stage acid esterification (catalyzed by H₂SO₄) followed by transesterification in the presence of a heterogeneous catalyst (CaO). Intensification aspects of both stages have been investigated using sonochemical reactors, and the obtained degree of intensification has been established by comparison with the conventional approach based on mechanical agitation. It has been observed that the reaction temperature for esterification reduced from 65 to 40 °C for the ultrasonic approach, whereas there was a significant reduction in the optimum reaction time for transesterification from 4 h for the conventional approach to 2.5 h for the ultrasound-assisted approach. Also, the reaction temperature reduced marginally from 65 to 60 °C and the yield increased from 76% to 79% for the ultrasound-assisted approach. The energy requirement and activation energy for both esterification and transesterification were lower for the ultrasound-based approach as compared to the conventional approach. The present work has clearly established the intensification obtained due to the use of ultrasound and also illustrated the two-step approach for the synthesis of FAME from high acid value feedstock based on the use of a heterogeneous catalyst for the transesterification step.
A novel task-oriented optimal design for P300-based brain-computer interfaces.
Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen
2014-10-01
Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs.
Bidirectional composition on lie groups for gradient-based image alignment.
Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick
2010-09-01
In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to the conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, i.e., the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.
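The flavour of using both images' gradients can be shown in a few lines. The sketch below is a symmetric (ESM-like) Gauss-Newton translation estimator that averages the template and warped-image gradients; it is a much-simplified relative of, not a reproduction of, the paper's Lie-group BCL formulation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, gaussian_filter

def align_translation(template, image, n_iter=50):
    """Gauss-Newton translation estimation using the gradients of BOTH the
    template and the warped image (bidirectional flavour; translation-only,
    not the paper's augmented Lie-group parameterization)."""
    p = np.zeros(2)                                     # (dy, dx)
    gty, gtx = np.gradient(template)
    for _ in range(n_iter):
        warped = nd_shift(image, -p, order=1, mode="nearest")
        giy, gix = np.gradient(warped)
        gy, gx = 0.5 * (gty + giy), 0.5 * (gtx + gix)   # bidirectional gradient
        e = (warped - template).ravel()
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, -e, rcond=None)
        p += dp
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

rng = np.random.default_rng(1)
T = gaussian_filter(rng.normal(size=(64, 64)), 3)       # smooth test image
I = nd_shift(T, (1.7, -2.3), order=1, mode="nearest")
print(align_translation(T, I))    # should approach (1.7, -2.3)
```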
'Vague Oviedo': autonomy, culture and the case of previously competent patients.
Pascalev, Assya; Vidalis, Takis
2010-03-01
The paper examines the ethical and legal challenges of making decisions for previously competent patients and the role of advance directives and legal representatives in light of the Oviedo Convention. The paper identifies gaps in the Convention that result in conflicting instructions in cases of a disagreement between the expressed prior wishes of a patient, and the legal representative. The authors also examine the legal and moral status of informally expressed prior wishes of patients unable to consent. The authors argue that positivist legal reasoning is insufficient for a consistent interpretation of the relevant provisions of the Convention and argue that ethical argumentation is needed to provide guidance in such cases. Based on the ethical arguments, the authors propose a way of reconciling the apparent inconsistencies in the Oviedo Convention. They advance a culturally sensitive approach to the application of the Convention at the national level. This approach understands autonomy as a broader, relational consent and emphasizes the social and cultural embeddedness of the individual. Based on their approach, the authors argue that there exists a moral obligation to respect the prior wishes of the patient even in countries without advance directives. Yet it should be left to the national legislations to determine the extent of this obligation and its concrete forms.
Using machine learning to assess covariate balance in matching studies.
Linden, Ariel; Yarnold, Paul R
2016-12-01
In order to assess the effectiveness of matching approaches in observational studies, investigators typically present summary statistics for each observed pre-intervention covariate, with the objective of showing that matching reduces the difference in means (or proportions) between groups to as close to zero as possible. In this paper, we introduce a new approach to distinguish between study groups based on their distributions of the covariates using a machine-learning algorithm called optimal discriminant analysis (ODA). Assessing covariate balance using ODA as compared with the conventional method has several key advantages: the ability to ascertain how individuals self-select based on optimal (maximum-accuracy) cut-points on the covariates; the application to any variable metric and number of groups; its insensitivity to skewed data or outliers; and the use of accuracy measures that can be widely applied to all analyses. Moreover, ODA accepts analytic weights, thereby extending the assessment of covariate balance to any study design where weights are used for covariate adjustment. By comparing the two approaches using empirical data, we are able to demonstrate that using measures of classification accuracy as balance diagnostics produces results highly consistent with those obtained via the conventional approach (in our matched-pairs example, ODA revealed a weak statistically significant relationship not detected by the conventional approach). Thus, investigators should consider ODA as a robust complement, or perhaps alternative, to the conventional approach for assessing covariate balance in matching studies.
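The univariable core of the ODA balance check, finding the maximum-accuracy cut-point on each covariate, can be sketched simply. This stand-in omits the weighting, permutation tests and multi-group handling of the real ODA software; classification accuracy near chance (0.5) suggests the covariate is balanced between groups.

```python
import numpy as np

def oda_balance(x, group):
    """Maximum-accuracy single cut-point separating two groups on one
    covariate -- a minimal stand-in for the univariable ODA idea.
    Returns the best cut-point and its classification accuracy."""
    xs = np.sort(x)
    cuts = (xs[1:] + xs[:-1]) / 2.0
    best_acc, best_cut = 0.5, None
    for c in cuts:
        pred = x > c
        # allow either direction of the classification rule
        acc = max(np.mean(pred == group), np.mean(pred != group))
        if acc > best_acc:
            best_acc, best_cut = acc, c
    return best_cut, best_acc

rng = np.random.default_rng(0)
treated = rng.normal(0.1, 1, 300)          # hypothetical matched samples
control = rng.normal(0.0, 1, 300)
x = np.concatenate([treated, control])
g = np.concatenate([np.ones(300, bool), np.zeros(300, bool)])
print(oda_balance(x, g))                   # accuracy near 0.5 => good balance
```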
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches to designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
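The Monte Carlo construction described here is straightforward to sketch. In the snippet below the year-specific GEV parameters are invented placeholders for the values the study fits to downscaled, VIC-simulated annual extremes; the point is the mechanics of sampling each year of the design lifespan from its own distribution.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
lifespan = 50            # design lifespan in years
n_real = 10_000          # Monte Carlo realizations

# Hypothetical non-stationary GEV parameters drifting over the lifespan
# (placeholders for the fitted, year-specific values in the study).
years = np.arange(lifespan)
loc   = 100.0 + 0.5 * years       # location parameter per year
scale = 20.0 + 0.1 * years        # scale parameter per year
shape = -0.1                      # scipy's c = -xi sign convention

# One annual maximum per year of the lifespan, per realization; the
# lifespan extreme is the max over the simulated years.
annual = genextreme.rvs(shape, loc=loc, scale=scale,
                        size=(n_real, lifespan), random_state=rng)
lifespan_max = annual.max(axis=1)
print("non-stationary 99th pct:", np.percentile(lifespan_max, 99))

# Stationary benchmark: every year drawn from the first year's distribution.
stat = genextreme.rvs(shape, loc=loc[0], scale=scale[0],
                      size=(n_real, lifespan), random_state=rng).max(axis=1)
print("stationary 99th pct:   ", np.percentile(stat, 99))
```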
GESFIDE-PROPELLER approach for simultaneous R2 and R2* measurements in the abdomen.
Jin, Ning; Guo, Yang; Zhang, Zhuoli; Zhang, Longjiang; Lu, Guangming; Larson, Andrew C
2013-12-01
To investigate the feasibility of combining GESFIDE with PROPELLER sampling approaches for simultaneous abdominal R2 and R2* mapping. R2 and R2* measurements were performed in 9 healthy volunteers and phantoms using the GESFIDE-PROPELLER and the conventional Cartesian-sampling GESFIDE approaches. Images acquired with the GESFIDE-PROPELLER sequence effectively mitigated the respiratory motion artifacts, which were clearly evident in the images acquired using the conventional GESFIDE approach. There was no significant difference between GESFIDE-PROPELLER and reference MGRE R2* measurements (p=0.162), whereas the Cartesian-sampling based GESFIDE methods significantly overestimated R2* values compared to MGRE measurements (p<0.001). The GESFIDE-PROPELLER sequence provided high quality images and accurate abdominal R2 and R2* maps while avoiding the motion artifacts common to the conventional Cartesian-sampling GESFIDE approaches.
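A simplified version of the parameter estimation behind simultaneous R2/R2* mapping can be sketched from the physics usually stated for GESFIDE-type sequences: the FID segment decays at R2* = R2 + R2', and the segment between the refocusing pulse and the spin echo evolves at R2 - R2'. The snippet below fits these two log-linear slopes on synthetic data; the paper's actual PROPELLER reconstruction and fitting pipeline is more involved.

```python
import numpy as np

def fit_gesfide(times_fid, sig_fid, times_post, sig_post):
    """Estimate R2 and R2* from the two decay segments of a GESFIDE-type
    acquisition (simplified log-linear model; an assumption, not the
    paper's exact processing)."""
    m1 = np.polyfit(times_fid, np.log(sig_fid), 1)[0]     # -(R2 + R2')
    m2 = np.polyfit(times_post, np.log(sig_post), 1)[0]   # -(R2 - R2')
    r2_star = -m1
    r2 = -(m1 + m2) / 2.0
    return r2, r2_star

# Synthetic check with R2 = 20 1/s and R2' = 30 1/s  =>  R2* = 50 1/s
t1 = np.linspace(0.002, 0.020, 8)
t2 = np.linspace(0.002, 0.020, 8)
s1 = np.exp(-50.0 * t1)             # FID segment: decays at R2 + R2'
s2 = np.exp(-(20.0 - 30.0) * t2)    # rephasing segment: evolves at R2 - R2'
print(fit_gesfide(t1, s1, t2, s2))  # ~(20.0, 50.0)
```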
ERIC Educational Resources Information Center
Hovelja, Tomaž; Vavpotic, Damjan; Žvanut, Boštjan
2016-01-01
The evaluation of e-learning and conventional pedagogical activities in nursing programmes has focused either on a single pedagogical activity or the entire curriculum, and only on students' or teachers' perspective. The goal of this study was to design and test a novel approach for evaluation of e-learning and conventional pedagogical activities…
Mars entry guidance based on an adaptive reference drag profile
NASA Astrophysics Data System (ADS)
Liang, Zixuan; Duan, Guangfei; Ren, Zhang
2017-08-01
Conventional Mars entry guidance tracks a fixed reference drag profile (FRDP). To improve the landing precision, a novel guidance approach that utilizes an adaptive reference drag profile (ARDP) is presented. The entry flight is divided into two phases. For each phase, a family of drag profiles corresponding to various trajectory lengths is planned. Two update windows are investigated for the reference drag profile. At each window, the ARDP is selected online from the profile database according to the actual range-to-go. The tracking law for the selected drag profile is designed based on feedback linearization. Guidance approaches using the ARDP and the FRDP are then tested and compared. Simulation results demonstrate that the proposed ARDP approach achieves much higher guidance precision than the conventional FRDP approach.
ERIC Educational Resources Information Center
Du, Bin; Yang, Xuesong
2017-01-01
In recent decades, traditional pathology education methodologies have been noticeably affected by new teaching approaches, including problem-based learning (PBL) and team-based learning (TBL). However, lack of outcome-based studies has hindered the extensive application of the TBL approach in the teaching of pathology in Chinese medical schools.…
Negative Transference Numbers in Polymer Electrolytes
NASA Astrophysics Data System (ADS)
Pesko, Danielle; Timachova, Ksenia; Balsara, Nitash
The energy density and safety of conventional lithium-ion batteries are limited by the use of flammable organic liquids as a solvent for lithium salts. Polymer electrolytes have the potential to address both limitations. The poor performance of batteries with polymer electrolytes is generally attributed to low ionic conductivity. The purpose of our work is to show that another transport property, the cation transference number, t+, of polymer electrolytes is fundamentally different from that of conventional electrolytes. Our experimental approach, based on concentrated solution theory, indicates that t+ of mixtures of poly(ethylene oxide) and LiTFSI salt is negative over most of the accessible concentration window. In contrast, approaches based on dilute solution theory suggest that t+ in the same system is positive. In addition to presenting a new approach for determining t+, we also present data obtained from the steady-state current method, pulsed-field-gradient NMR, and the current-interrupt method. Discrepancies between different approaches are resolved. Our work implies that in the absence of concentration gradients, the net fluxes of both cations and anions are directed toward the positive electrode. Conventional liquid electrolytes do not suffer from this constraint.
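For contrast with the concentrated-solution analysis, the idealized steady-state (Bruce-Vincent) estimate that the abstract associates with dilute-solution theory is a one-line calculation. All numbers below are hypothetical stand-ins for measured quantities.

```python
# Idealized (dilute-solution) steady-state estimate of the cation
# transference number via the Bruce-Vincent method, including the usual
# interfacial-resistance correction. The abstract's point is that
# concentrated-solution theory can give a *negative* t+ where this
# idealized estimate stays positive.
dV = 0.010               # applied polarization (V), hypothetical
I0, Iss = 2.4e-5, 1.1e-5 # initial and steady-state currents (A), hypothetical
R0, Rss = 120.0, 135.0   # interfacial resistances before/after (ohm)

t_plus = (Iss * (dV - I0 * R0)) / (I0 * (dV - Iss * Rss))
print(f"Bruce-Vincent t+ = {t_plus:.2f}")   # ~0.38 with these numbers
```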
ERIC Educational Resources Information Center
Akman, Özkan; Alagöz, Bülent
2018-01-01
Inquiry should be based on a cognitive approach: student-centered, question- and inquiry-based, free of memorization, and focused on high-level cognitive skills (critical-creative thinking and problem-solving), rather than on conventional teacher-centered teaching and learning based on memorization and a behavioral approach. The life quality of…
Farming Approaches for Greater Biodiversity, Livelihoods, and Food Security.
Garibaldi, Lucas A; Gemmill-Herren, Barbara; D'Annolfo, Raffaele; Graeub, Benjamin E; Cunningham, Saul A; Breeze, Tom D
2017-01-01
Scientists and policy-makers globally are calling for alternative approaches to conventional intensification of agriculture that enhance ecosystem services provided by biodiversity. The evidence reviewed here suggests that alternative approaches can achieve high crop yields and profits, but the performance of other socioeconomic indicators (as well as long-term trends) is surprisingly poorly documented. Consequently, the implementation of conventional intensification and the discussion of alternative approaches are not based on quantitative evidence of their simultaneous ecological and socioeconomic impacts across the globe. To close this knowledge gap, we propose a participatory assessment framework. Given the impacts of conventional intensification on biodiversity loss and greenhouse gas emissions, such evidence is urgently needed to direct science-policy initiatives, such as the United Nations (UN) 2030 Agenda for Sustainable Development.
Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.
2009-01-01
A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.
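A minimal joint-likelihood sketch conveys how the two data types share information. The instantaneous-rates structure below (exploitation u = F(1 - e^(-Z))/Z with Z = F + M) is standard for tag-return models, but the annual summaries, the known reporting rate, and the simple known-fate telemetry multinomial are all assumptions; the study itself fitted monthly rates with more structure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-year summaries (the study used monthly rates over 3 yrs):
lam = 0.7                                     # tag-reporting rate (assumed known)
n_tagged, n_returned = 1000, 180              # conventional tag-return data
n_tel, tel_harv, tel_natural = 60, 10, 2      # telemetry known fates
tel_alive = n_tel - tel_harv - tel_natural

def nll(params):
    F, M = np.exp(params)                     # log-parameterized to stay > 0
    Z = F + M
    u = (F / Z) * (1.0 - np.exp(-Z))          # exploitation rate
    v = (M / Z) * (1.0 - np.exp(-Z))          # natural-death rate
    # Conventional tags: binomial return probability lambda * u
    ll = (n_returned * np.log(lam * u)
          + (n_tagged - n_returned) * np.log(1.0 - lam * u))
    # Telemetry: multinomial over harvested / natural death / survived
    ll += tel_harv * np.log(u) + tel_natural * np.log(v) - tel_alive * Z
    return -ll

fit = minimize(nll, x0=np.log([0.2, 0.1]), method="Nelder-Mead")
F_hat, M_hat = np.exp(fit.x)
print(f"F = {F_hat:.3f}, M = {M_hat:.3f}")
```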
Lai, Ying-Hui; Chen, Fei; Wang, Syu-Siang; Lu, Xugang; Tsao, Yu; Lee, Chin-Hui
2017-07-01
In a cochlear implant (CI) speech processor, noise reduction (NR) is a critical component for enabling CI users to attain improved speech perception under noisy conditions. Identifying an effective NR approach has long been a key topic in CI research. Recently, a deep denoising autoencoder (DDAE) based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. It was also shown that DDAE could provide better performance than several existing NR methods in standardized objective evaluations. Following this success with normal speech, this paper further investigated the performance of DDAE-based NR to improve the intelligibility of envelope-based vocoded speech, which simulates speech signal processing in existing CI devices. We compared the performance of speech intelligibility between DDAE-based NR and conventional single-microphone NR approaches using the noise vocoder simulation. The results of both objective evaluations and listening test showed that, under the conditions of nonstationary noise distortion, DDAE-based NR yielded higher intelligibility scores than conventional NR approaches. This study confirmed that DDAE-based NR could potentially be integrated into a CI processor to provide more benefits to CI users under noisy conditions.
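A minimal sketch of the DDAE idea follows: a feed-forward denoising autoencoder mapping noisy log-power spectral frames (with temporal context) to clean centre frames. Layer sizes, feature dimensions and the stand-in random tensors are assumptions; a real system would train on STFT features of paired noisy/clean speech and resynthesize the vocoded signal from the enhanced envelopes.

```python
import torch
import torch.nn as nn

class DDAE(nn.Module):
    """Minimal deep denoising autoencoder over spectral frames (a sketch
    of the DDAE-based NR idea, not the paper's exact architecture)."""
    def __init__(self, n_freq=257, context=5, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_freq * context, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_freq),          # predict the clean centre frame
        )

    def forward(self, x):                        # x: (batch, n_freq * context)
        return self.net(x)

model = DDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random stand-in tensors (real use: log-power STFT
# features of noisy input frames paired with clean targets).
noisy = torch.randn(32, 257 * 5)
clean = torch.randn(32, 257)
opt.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
opt.step()
print(float(loss))
```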
Stringent DDI-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.
Zhou, Hufeng; Rezaei, Javad; Hugo, Willy; Gao, Shangzhi; Jin, Jingjing; Fan, Mengyuan; Yong, Chern-Han; Wozniak, Michal; Wong, Limsoon
2013-01-01
H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data provide very important information for illuminating the infection mechanism of M. tuberculosis H37Rv. However, current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce. This seriously limits the study of the interaction between this important pathogen and its host H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill in the gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches in predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs based on gold-standard intra-species PPIs and coherent informative Gene Ontology term assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some important properties of domains involved in host-pathogen PPIs. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to have more domains than proteins involved in intra-species PPIs, and these domains have more interaction partners than domains on proteins involved in intra-species PPIs. The stringent DDI-based prediction approach reported in this work provides a stringent strategy for predicting host-pathogen PPIs. It also performs better than a conventional DDI-based approach in predicting PPIs. We have predicted a small set of accurate H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies.
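The conventional DDI-based inference that the paper improves upon reduces to a simple rule: predict a host-pathogen PPI whenever a domain pair across the two proteins appears in a DDI database. The toy sketch below shows only that baseline; the protein and domain identifiers and the DDI set are fabricated for illustration, and the paper's stringent version adds domain-sequence and interface-residue checks on top.

```python
from itertools import product

# Toy inputs (assumptions): Pfam annotations per protein and a set of
# interacting domain pairs, as would come from a DDI resource.
host_domains = {"HS_P1": {"PF00069", "PF07714"}, "HS_P2": {"PF00017"}}
path_domains = {"MT_P1": {"PF00017", "PF02518"}, "MT_P2": {"PF00072"}}
ddi = {frozenset(("PF00069", "PF02518")), frozenset(("PF00017", "PF00017"))}

def predict_ppis(host, pathogen, ddi_set):
    """Conventional DDI-based inference: predict a host-pathogen PPI when
    any cross-protein domain pair is a known DDI; score by the number of
    supporting pairs."""
    hits = {}
    for (hp, hd), (pp, pd) in product(host.items(), pathogen.items()):
        support = sum(1 for a, b in product(hd, pd)
                      if frozenset((a, b)) in ddi_set)
        if support:
            hits[(hp, pp)] = support
    return hits

print(predict_ppis(host_domains, path_domains, ddi))
# {('HS_P1', 'MT_P1'): 1, ('HS_P2', 'MT_P1'): 1}
```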
Zhao, Weixiang; Davis, Cristina E.
2011-01-01
Objective: This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials: Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with radial basis function - partial least squares regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods, back-propagation neural network (BPNN) and orthogonal radial basis function network (Ortho-RBF network). Results: The diagnosis results show that: (1) both AIS-kNN and the proposed AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possible improper AIS parameters. For example, based upon the AIS memory cells of "replacement threshold = 0.3", the average misclassification ratios of the two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS presented its robustness in terms of the AIS-created memory cells, showing a smaller standard deviation of the results from the multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells as an example, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN) and those for study 2 are 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approach also yielded better diagnosis results than the two classical neural network approaches of BPNN and Ortho-RBF network. Conclusion: In summary, this paper proposed a new machine learning method for complex systems by integrating the AIS system with RBFPLS. This new method demonstrates its satisfactory effect on classification accuracy for clinical diagnosis, and also indicates its wide potential applications to other diagnosis and detection problems.
Health effects on leaders and co-workers of an art-based leadership development program.
Romanowska, Julia; Larsson, Gerry; Eriksson, Maria; Wikström, Britt-Maj; Westerlund, Hugo; Theorell, Töres
2011-01-01
There are very few evaluations of the effectiveness of leadership development programs. The purpose of the study was to examine whether an art-based leadership program may have a more beneficial effect than a conventional one on leaders' and their corresponding subordinates' mental and biological stress. Participating leaders were randomized to 2 year-long leadership programs, 1 art-based and 1 conventional, with follow-up of the leaders and their subordinates at 12 and 18 months. The art-based program built on an experimental theatre form, a collage of literary text and music, followed by writing and discussions focused on existential and ethical problems. After 18 months a pattern was clearly visible with advantage for the art-based group. In the art group (leaders and their subordinates together as well as for subordinates only) compared to the conventional group, there was a significant improvement of mental health, covert coping and performance-based self-esteem as well as significantly less winter/fall deterioration in the serum concentration of the regenerative/anabolic hormone dehydroepiandrosterone-sulfate. Our findings indicate a more beneficial long-term health effect of the art-based intervention compared to a conventional approach. Positive results for both standardized questionnaires and biological parameters strengthened the findings. The study provides a rationale for further evaluation of the effectiveness of this alternative educational approach.
A Protein Domain and Family Based Approach to Rare Variant Association Analysis.
Richardson, Tom G; Shihab, Hashem A; Rivas, Manuel A; McCarthy, Mark I; Campbell, Colin; Timpson, Nicholas J; Gaunt, Tom R
2016-01-01
It has become common practice to analyse large-scale sequencing data with statistical approaches based around the aggregation of rare variants within the same gene. We applied a novel approach to rare variant analysis by collapsing variants together using protein domain and family coordinates, regarded to be a more discrete definition of a biologically functional unit. Using Pfam definitions, we collapsed rare variants (Minor Allele Frequency ≤ 1%) together in three different ways: 1) variants within single genomic regions which map to individual protein domains; 2) variants within two individual protein domain regions which are predicted to be responsible for a protein-protein interaction; and 3) all variants within combined regions from multiple genes responsible for coding the same protein domain (i.e. protein families). A conventional collapsing analysis using gene coordinates was also undertaken for comparison. We used UK10K sequence data and investigated associations between regions of variants and lipid traits using the sequence kernel association test (SKAT). We observed no strong evidence of association between regions of variants based on Pfam domain definitions and lipid traits. Quantile-quantile plots illustrated that the overall distributions of p-values from the protein domain analyses were comparable to those of a conventional gene-based approach. Deviations from this distribution suggested that collapsing by either protein domain or gene definitions may be favourable depending on the trait analysed. We have collapsed rare variants together using protein domain and family coordinates to present an alternative approach over collapsing across conventionally used gene-based regions. Although no strong evidence of association was detected in these analyses, future studies may still find value in adopting these approaches to detect previously unidentified association signals.
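The collapsing step itself is simple to sketch. Below, variants are grouped by hypothetical Pfam domain-instance intervals and tested with a plain burden regression; note the study used SKAT rather than a burden test, and all coordinates, genotypes and phenotypes here are synthetic.

```python
import numpy as np
from scipy.stats import linregress

# Toy data (assumptions): rare-variant genotype dosages, variant positions,
# a quantitative lipid phenotype, and Pfam domain-instance intervals.
rng = np.random.default_rng(0)
n_ind, n_var = 500, 40
geno = rng.binomial(2, 0.005, size=(n_ind, n_var))      # rare dosages 0/1/2
positions = np.sort(rng.integers(0, 50_000, n_var))
pheno = rng.normal(size=n_ind)                           # e.g. an LDL z-score

domains = {"PF00001_geneA": (5_000, 15_000),             # hypothetical Pfam
           "PF00002_geneA": (30_000, 42_000)}            # instance intervals

for name, (start, end) in domains.items():
    in_dom = (positions >= start) & (positions < end)
    if not in_dom.any():
        continue
    burden = geno[:, in_dom].sum(axis=1)     # collapse: rare-allele count
    res = linregress(burden, pheno)          # simple burden test stand-in
    print(name, f"p = {res.pvalue:.3f}")     # (the paper used SKAT instead)
```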
Study of Systems and Technology for Liquid Hydrogen Production Independent of Fossil Fuels
NASA Technical Reports Server (NTRS)
Sprafka, R. J.; Escher, W. J. D.; Foster, R. W.; Tison, R. R.; Shingleton, J.; Moore, J. S.; Baker, C. R.
1983-01-01
Based on Kennedy Space Center siting and logistics requirements and the nonfossil energy resources at the Center, a number of applicable technologies and system candidates for hydrogen production were identified and characterized. A two-stage screening of these technologies in the light of specific criteria identified two leading candidates as nonfossil system approaches. Conceptual design and costing of two solar-operated, stand-alone systems, one based on photovoltaics and the other involving the power tower approach, reveal their technical feasibility as sited at KSC, and the potential for product cost competitiveness with conventional supply approaches in the 1990 to 2010 time period. Conventional water electrolysis and hydrogen liquefaction subsystems are integrated with the solar subsystems.
Pabel, Sven-Olav; Pabel, Anne-Kathrin; Schmickler, Jan; Schulz, Xenia; Wiegand, Annette
2017-09-01
The aim of this study was to evaluate if differential learning in a preclinical dental course impacted the performance of dental students in a practical exam (preparation of a gold partial crown) immediately after the training session and 20 weeks later compared to conventional learning. This controlled study was performed in a preclinical course in operative dentistry at a dental school in Germany. Third-year students were trained in preparing gold partial crowns by using either the conventional learning (n=41) or the differential learning approach (n=32). The differential learning approach consisted of 20 movement exercises with a continuous change of movement execution during the learning session, while the conventional learning approach was mainly based on repetition, a methodological series of exercises, and correction of preparations during the training phase. Practical exams were performed immediately after the training session (T1) and 20 weeks later (T2, retention test). Preparations were rated by four independent and blinded examiners. At T1, no significant difference between the performance (exam passed) of the two groups was detected (conventional learning: 54.3%, differential learning: 68.0%). At T2, significantly more students passed the exam when trained by the differential learning approach (68.8%) than by the conventional learning approach (18.9%). Interrater reliability was moderate (Kappa: 0.57, T1) or substantial (Kappa: 0.67, T2), respectively. These results suggest that a differential learning approach can increase the manual skills of dental students.
ERIC Educational Resources Information Center
Kurki-Suonio, T.; Hakola, A.
2007-01-01
In the present paper, we propose an alternative, based on constructivism, to the conventional way of teaching basic physics courses at the university level. We call this approach "coherent teaching" and the underlying philosophy of teaching science and engineering "need-based learning". We have been applying this philosophy in…
NASA Astrophysics Data System (ADS)
Zeraatpisheh, Mojtaba; Ayoubi, Shamsollah; Jafari, Azam; Finke, Peter
2017-05-01
The efficiency of different digital and conventional soil mapping approaches to produce categorical maps of soil types is determined by cost, sample size, accuracy and the selected taxonomic level. The efficiency of digital and conventional soil mapping approaches was examined in the semi-arid region of Borujen, central Iran. This research aimed to (i) compare two digital soil mapping approaches, multinomial logistic regression and random forest, with the conventional soil mapping approach at four soil taxonomic levels (order, suborder, great group and subgroup levels), (ii) validate the predicted soil maps with the same validation data set to determine the best method for producing the soil maps, and (iii) select the best soil taxonomic level by different approaches at three sample sizes (100, 80, and 60 point observations), in two scenarios with and without a geomorphology map as a spatial covariate. In most predicted maps, using both digital soil mapping approaches, the best results were obtained using the combination of terrain attributes and the geomorphology map, although differences between the scenarios with and without the geomorphology map were not significant. Employing the geomorphology map increased map purity and the Kappa index, and led to a decrease in the 'noisiness' of soil maps. Multinomial logistic regression had better performance at higher taxonomic levels (order and suborder levels), whereas random forest showed better performance at lower taxonomic levels (great group and subgroup levels). Multinomial logistic regression was less sensitive than random forest to a decrease in the number of training observations. The conventional soil mapping method produced a map with a larger minimum polygon size because of the traditional cartographic criteria used to make the 1:100,000 geological map (on which the conventional soil map was largely based). Likewise, the conventional soil map also had a larger average polygon size, resulting in a lower level of detail. Multinomial logistic regression at the order level (map purity of 0.80), random forest at the suborder (map purity of 0.72) and great group levels (map purity of 0.60), and conventional soil mapping at the subgroup level (map purity of 0.48) produced the most accurate maps in the study area. The multinomial logistic regression method was identified as the most effective approach based on a combined index of map purity, map information content, and map production cost. The combined index also showed that a smaller sample size led to a preference for the order level, while a larger sample size led to a preference for the great group level.
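The model comparison at the core of the study maps onto a few lines of scikit-learn. The sketch below uses synthetic stand-ins for the terrain attributes, geomorphology units and soil classes (a real workflow would also one-hot encode the categorical geomorphology covariate); hold-out accuracy plays the role of map purity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic covariate table: terrain attributes plus a categorical
# geomorphology unit, with soil classes as labels (all invented).
rng = np.random.default_rng(0)
n = 100                                                     # point observations
X = np.column_stack([rng.normal(size=(n, 4)),               # terrain attributes
                     rng.integers(0, 6, n)])                # geomorphology unit
y = rng.integers(0, 4, n)                                   # soil class codes

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Multinomial (softmax) logistic regression vs random forest, the two
# digital soil mapping approaches compared in the study.
mlr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)

print("MLR purity:", mlr.score(Xte, yte))
print("RF  purity:", rf.score(Xte, yte))
```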
Biolik, A; Heide, S; Lessig, R; Hachmann, V; Stoevesandt, D; Kellner, J; Jäschke, C; Watzke, S
2018-04-01
One option for improving the quality of medical post mortem examinations is through intensified training of medical students, especially in countries where such a requirement exists regardless of the area of specialisation. For this reason, new teaching and learning methods on this topic have recently been introduced. These new approaches include e-learning modules or SkillsLab stations; one way to objectify the resultant learning outcomes is by means of the OSCE process. However, despite offering several advantages, this examination format also requires considerable resources, in particular with regard to medical examiners. For this reason, many clinical disciplines have already implemented computer-based OSCE examination formats. This study investigates whether the conventional exam format for the OSCE forensic "Death Certificate" station could be replaced with a computer-based approach in future. For this study, 123 students completed the OSCE "Death Certificate" station in both a computer-based and a conventional format, half starting with the computer-based and the other half with the conventional approach in their OSCE rotation. Assignment of examination cases was random. The examination results for the two stations were compared, and both the overall results and the individual items of the exam checklist were analysed by means of inferential statistics. Following statistical analysis of examination cases of varying difficulty levels and correction for the repeated-measures effect, the results of the two examination formats appear to be comparable. Thus, in the descriptive item analysis, while there were some significant differences between the computer-based and conventional OSCE stations, these differences were not reflected in the overall results after a correction factor was applied (e.g. point deduction for assistance from the medical examiner was possible only at the conventional station). Thus, we demonstrate that the computer-based OSCE "Death Certificate" station is a cost-efficient and standardised examination format that yields results comparable to those from a conventional format exam. Moreover, the examination results also indicate the need to optimise both the test itself (adjusting the degree of difficulty of the case vignettes) and the corresponding instructional and learning methods (including, for example, the use of computer programmes to complete the death certificate in small-group formats in the SkillsLab).
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS that results from the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
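An illustrative sketch only: using a CMA-ES optimizer (here the third-party pycma package) to search for a TCM profile that maximizes a detectability objective under a fixed dose budget. The d′ model below is a toy surrogate, not the Fourier NPS/MTF machinery described in the abstract.

    import numpy as np
    import cma

    n_views = 8                      # coarse TCM profile, one weight per view group

    def detectability(tcm):
        tcm = np.abs(tcm)
        tcm = tcm / tcm.sum()                          # fixed total fluence (dose) budget
        attenuation = np.linspace(1.0, 3.0, n_views)   # toy view-dependent attenuation
        noise = np.sum(attenuation / tcm) * 1e-2       # toy noise model
        return 1.0 / noise                             # toy d': higher with lower noise

    es = cma.CMAEvolutionStrategy(np.ones(n_views), 0.3)
    es.optimize(lambda x: -detectability(x))           # CMA-ES minimizes, so negate
    best = np.abs(es.result.xbest)
    print("optimal TCM weights:", best / best.sum())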
Azadi, Hossein; Taube, Friedhelm; Taheri, Fatemeh
2017-06-05
The approach of co-existence between GM crops, conventional agriculture, and organic farming as a feasible agricultural farming system has recently been placed at the center of hot debates at the EU level and has become a source of anxiety in developing countries. The main promises of this approach are to ensure “food security” and “food safety” on the one hand, and to avoid the adventitious presence of GM crops in conventional and organic farming on the other; implementing the approach in developing countries, however, has raised concerns in many debates. Here, we discuss the main debates (“what,” “why,” “who,” “where,” “which,” and “how”) on applying this approach in developing countries and review the main considerations and tradeoffs in this regard. The paper concludes that a peaceful co-existence between GM, conventional, and organic farming is not easy but is still possible. The goal should be to implement well-established rules proportionately, efficiently and cost-effectively, in a crop-specific, farming-system-based, and biodiversity-focused manner, ending up with “codes of good agricultural practice” for co-existence.
Action change detection in video using a bilateral spatial-temporal constraint
NASA Astrophysics Data System (ADS)
Tian, Jing; Chen, Li
2016-08-01
Action change detection in video aims to detect action discontinuity in video. Silhouette-based features are desirable for action change detection. This paper studies the problem of silhouette-quality assessment. A non-reference approach, requiring no ground truth, is proposed to evaluate the quality of silhouettes by exploiting both the boundary contrast of the silhouettes in the spatial domain and the consistency of the silhouettes in the temporal domain. This contrasts with conventional approaches, which exploit either only spatial or only temporal information about the silhouettes. Experiments conducted using artificially degraded silhouettes show that the proposed approach outperforms conventional approaches, achieving more accurate quality assessment. Furthermore, experiments show that the proposed approach improves the accuracy of conventional action change detection approaches on two human action video data sets. The average runtime of the proposed approach on the Weizmann action video data set is 0.08 seconds per frame using the Matlab programming language; it is computationally efficient and has potential for real-time implementation.
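A minimal sketch of the bilateral idea in the abstract: score a silhouette by (i) image contrast across its boundary in the spatial domain and (ii) overlap with the silhouette of the neighbouring frame in the temporal domain. The equal weighting and the exact definitions here are assumptions, not the paper's formulation.

    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion

    def silhouette_quality(frame, mask, prev_mask, alpha=0.5):
        inner = mask & ~binary_erosion(mask)          # inner boundary ring
        outer = binary_dilation(mask) & ~mask         # outer boundary ring
        contrast = abs(frame[inner].mean() - frame[outer].mean()) / 255.0
        overlap = (mask & prev_mask).sum() / max((mask | prev_mask).sum(), 1)
        return alpha * contrast + (1 - alpha) * overlap

    frame = np.random.randint(0, 256, (64, 64)).astype(float)
    mask = np.zeros((64, 64), bool); mask[20:40, 20:40] = True
    prev = np.zeros_like(mask); prev[21:41, 20:40] = True
    print(silhouette_quality(frame, mask, prev))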
The National Shipbuilding Research Program, Computer Aided Process Planning for Shipyards
1986-08-01
Contents include "Factory Simulation: Approach to Integration of Computer-Based Factory Simulation with Conventional Factory Planning Techniques" and "Financial Justification of State-of-the-Art Investment: A Study Using CAPP."
Development of a Lumped Element Circuit Model for Approximation of Dielectric Barrier Discharges
2011-08-01
…species for pulsed direct current (DC) dielectric barrier discharge (DBD) plasmas. Based on experimental observations, it is assumed that nanosecond pulsed DBDs, which have been proposed… Given the fundamental differences between the novel pulsed discharge approach and the more conventional momentum-based approaches…
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities and limited simulation support. These limitations prompted the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.
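A toy sketch of the model-based constraint checking idea described above, under the assumption that the knowledge base declares each component's expected behaviour and the shell flags discrepancies between model-predicted and sensed values (the original shell was not written in Python; names here are invented).

    knowledge_base = {
        "valve1": {"state": "open", "upstream": "p1", "downstream": "p2"},
    }

    def predicted(sensors, comp):
        # An open valve should pass pressure through; a closed one should not.
        up = sensors[comp["upstream"]]
        return up if comp["state"] == "open" else 0.0

    sensors = {"p1": 50.0, "p2": 12.0}                # live sensor readings
    for name, comp in knowledge_base.items():
        expect = predicted(sensors, comp)
        actual = sensors[comp["downstream"]]
        if abs(expect - actual) > 5.0:                # constraint violated
            print(f"fault detected at {name}: expected {expect}, read {actual}")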
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches such as the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite-element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, propagation of waves in various kinds of linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only more accurate and computationally efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.
NASA Astrophysics Data System (ADS)
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
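A sketch contrasting the two decompositions discussed above: PCA (eigendecomposition of the covariance structure) versus an eigendecomposition of a double-centred pair-wise dissimilarity matrix, which is classical multidimensional scaling. Synthetic two-group "spectra" stand in for the synchronous fluorescence data.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(0.5, 1, (20, 50))])

    # Covariance-based scores (PCA via SVD of the mean-centred data).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pca_scores = Xc @ Vt[:2].T

    # Dissimilarity-based scores (classical MDS on squared Euclidean distances).
    D2 = squareform(pdist(X)) ** 2
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                              # double centring
    w, V = np.linalg.eigh(B)
    mds_scores = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))
    print(pca_scores.shape, mds_scores.shape)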
Goodwin, Cody R; Sherrod, Stacy D; Marasco, Christina C; Bachmann, Brian O; Schramm-Sapyta, Nicole; Wikswo, John P; McLean, John A
2014-07-01
A metabolic system is composed of inherently interconnected metabolic precursors, intermediates, and products. The analysis of untargeted metabolomics data has conventionally been performed through the use of comparative statistics or multivariate statistical analysis-based approaches; however, each falls short in representing the related nature of metabolic perturbations. Herein, we describe a complementary method for the analysis of large metabolite inventories using a data-driven approach based upon a self-organizing map algorithm. This workflow allows for the unsupervised clustering and subsequent prioritization of correlated features through Gestalt comparisons of metabolic heat maps. We describe this methodology in detail, including a comparison to conventional metabolomics approaches, and demonstrate the application of this method to the analysis of the metabolic repercussions of prolonged cocaine exposure in rat sera profiles.
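A sketch of the unsupervised clustering step: training a self-organizing map on a metabolite feature table and reading off the winning node for each feature, so correlated features land in neighbouring cells of the map that can then be compared as heat maps. This uses the third-party minisom package, and the data are random placeholders.

    import numpy as np
    from minisom import MiniSom

    features = np.random.rand(100, 30)        # 100 metabolites x 30 abundance measurements
    som = MiniSom(6, 6, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(features, 1000)          # unsupervised fit of the 6x6 map

    for i, f in enumerate(features[:5]):
        print("metabolite", i, "-> map cell", som.winner(f))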
Conventional vs Biomimetic Approaches to the Exploration of Mars
NASA Astrophysics Data System (ADS)
Ellery, A.
It is not usual to speak of convention in planetary exploration missions, by virtue of the innovation required for such projects. The term conventional here refers to the methodologies, tools and approaches typically adopted in engineering as applied to such missions. Presented is a "conventional" Mars rover mission in which the author was involved - ExoMars - into which are interspersed references to examples where biomimetic approaches may yield superior capabilities. Biomimetics is a relatively recent and active area of research which seeks to examine how biological systems solve the problem of survival in the natural environment. Biological organisms are autonomous entities that must survive in a hostile world, exhibiting both adaptivity and robustness. It is not then surprising that biomimetics is particularly useful when applied to the robotic elements of a Mars exploration mission. I present a number of areas in which biomimetics may yield new solutions to the problem of Mars exploration - optic flow navigation, potential field navigation, genetically-evolved neuro-controllers, legged locomotion, electric motors implementing muscular behaviour, and a biomimetic drill based on the wood wasp ovipositor. Each of these techniques offers an alternative to conventional approaches. However, the perceptual hurdles are likely to dwarf the technical hurdles in implementing many of these methods in the near future.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous studies.
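A toy illustration of encrypting secret pixels with column choices instead of pixel expansion: the classic probabilistic (2,2) scheme. Each secret pixel picks one random column; stacking the two shares (a logical OR) leaves white pixels black only half the time but black pixels always black. The paper's simulated-annealing search over optimal column-vector sets generalizes this idea and is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    secret = rng.integers(0, 2, (8, 8))            # 1 = black, 0 = white

    r = rng.integers(0, 2, secret.shape)           # one random bit per pixel
    share1 = r
    share2 = np.where(secret == 1, 1 - r, r)       # complementary iff pixel is black

    stacked = share1 | share2                      # physical stacking = OR
    print("black pixels recovered:", np.all(stacked[secret == 1] == 1))
    print("white pixel darkness:", stacked[secret == 0].mean())   # ~0.5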
Stringent DDI-based Prediction of H. sapiens-M. tuberculosis H37Rv Protein-Protein Interactions
2013-01-01
Background H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data provide very important information for illuminating the infection mechanism of M. tuberculosis H37Rv. However, current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce. This seriously limits the study of the interaction between this important pathogen and its host H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill in the gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches in predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. Results We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs based on gold standard intra-species PPIs and coherent informative Gene Ontology terms assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some important properties of domains involved in host-pathogen PPIs. We find that both host and pathogen proteins involved in host-pathogen PPIs tend to have more domains than proteins involved in intra-species PPIs, and these domains have more interaction partners than domains on proteins involved in intra-species PPIs. Conclusions The stringent DDI-based prediction approach reported in this work provides a stringent strategy for predicting host-pathogen PPIs. It also performs better than a conventional DDI-based approach in predicting PPIs. We have predicted a small set of accurate H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. PMID:24564941
Chiu, Hsin-Yi; Kang, Yi-No; Wang, Wei-Lin; Huang, Hung-Chang; Wu, Chien-Chih; Hsu, Wayne; Tong, Yiu-Shun; Wei, Po-Li
To evaluate the effectiveness of a simulation-based flipped classroom for gaining laparoscopic skills in medical students. An intervention trial. Taipei Medical University Hospital, an academic teaching hospital. Fifty-nine medical students participating in a 1-hour laparoscopic skill training session were randomly assigned to a conventional classroom (n = 29) or a flipped classroom approach (n = 30) based on their registration order. At the end of the session, instructors assessed participants' performance in laparoscopic suturing and intracorporeal knot-tying using an assessment checklist based on a modified Objective Structured Assessment of Technical Skills tool. Students in the flipped group completed more stitches (mean [M] = 0.47; standard deviation [SD] = 0.507) than those in the conventional group (M = 0.10; SD = 0.310) (mean difference: 0.37; 95% CI: 0.114-0.582; p = 0.002). Moreover, students in the flipped group also had higher stitch quality scores (M = 7.17; SD = 2.730) than those in the conventional group (M = 5.14; SD = 1.767) (mean difference = 2.03; 95% CI: 0.83-3.228; p = 0.001). Meanwhile, students in the flipped group had higher pass rates for the second throw (p < 0.001), third throw (p = 0.002), appropriate tissue reapproximation without loosening or strangulation (p < 0.001), needle cut from suture under direct visualization (p = 0.004), and needle safely removed under direct visualization (p = 0.018) than those in the conventional group. Compared with the traditional approach, a simulation-based flipped classroom approach may improve laparoscopic intracorporeal knot-tying skill acquisition in medical students. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Clinical applications of cell-based approaches in alveolar bone augmentation: a systematic review.
Shanbhag, Siddharth; Shanbhag, Vivek
2015-01-01
Cell-based approaches, utilizing adult mesenchymal stem cells (MSCs), are reported to overcome the limitations of conventional bone augmentation procedures. The study aims to systematically review the available evidence on the characteristics and clinical effectiveness of cell-based ridge augmentation, socket preservation, and sinus-floor augmentation, compared to current evidence-based methods in human adult patients. MEDLINE, EMBASE, and CENTRAL databases were searched for related literature. Both observational and experimental studies reporting outcomes of "tissue engineered" or "cell-based" augmentation in ≥5 adult patients alone, or in comparison with non-cell-based (conventional) augmentation methods, were eligible for inclusion. Primary outcome was histomorphometric analysis of new bone formation. Effectiveness of cell-based augmentation was evaluated based on outcomes of controlled studies. Twenty-seven eligible studies were identified. Of these, 15 included a control group (8 randomized controlled trials [RCTs]), and were judged to be at a moderate-to-high risk of bias. Most studies reported the combined use of cultured autologous MSCs with an osteoconductive bone substitute (BS) scaffold. Iliac bone marrow and mandibular periosteum were frequently reported sources of MSCs. In vitro culture of MSCs took between 12 days and 1.5 months. A range of autogenous, allogeneic, xenogeneic, and alloplastic scaffolds was identified. Bovine bone mineral scaffold was frequently reported with favorable outcomes, while polylactic-polyglycolic acid copolymer (PLGA) scaffold resulted in graft failure in three studies. The combination of MSCs and BS resulted in outcomes similar to autogenous bone (AB) and BS. Three RCTs and one controlled trial reported significantly greater bone formation in cell-based than conventionally grafted sites after 3 to 8 months. Based on limited controlled evidence at a moderate-to-high risk of bias, cell-based approaches are comparable, if not superior, to current evidence-based bone grafting methods, with a significant advantage of avoiding AB harvesting. Future clinical trials should additionally evaluate patient-based outcomes and the time-/cost-effectiveness of these approaches. © 2013 Wiley Periodicals, Inc.
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based, buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
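A hedged sketch of a layer-based request scheduler in the spirit described above: request the base layer whenever its buffer runs low; otherwise spend the estimated bandwidth on the highest enhancement layer whose cumulative bitrate fits and whose lower-layer dependencies are already buffered further ahead. The thresholds and bitrates are illustrative assumptions, not the paper's parameters.

    BITRATES = [1.0, 2.0, 4.0]        # cumulative Mbps for BL, BL+EL1, BL+EL1+EL2
    BL_TARGET = 10.0                  # seconds of base layer to keep buffered

    def next_request(buffers, est_bw):
        """buffers: seconds buffered per layer, base layer first."""
        if buffers[0] < BL_TARGET:
            return 0                                  # protect the base layer
        for layer in reversed(range(len(BITRATES))):
            enough_bw = BITRATES[layer] <= est_bw
            deps_ready = all(buffers[d] > buffers[layer] for d in range(layer))
            if enough_bw and deps_ready:
                return layer
        return 0

    print(next_request(buffers=[12.0, 6.0, 2.0], est_bw=4.5))   # -> 2 (top layer)
    print(next_request(buffers=[4.0, 2.0, 0.0], est_bw=4.5))    # -> 0 (base layer)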
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely used sparse regularization is adopted as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. This paper also discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. Superresolution results for a simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
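A simplified sketch of the sparse recovery step: model the scanned echo as y = A x + n, with A built from the antenna pattern, and recover x with an l1-penalized iterative scheme. The reweighted least-squares iteration below is a generic stand-in for the Bayesian-derived update in the paper, and the antenna pattern is a toy Gaussian.

    import numpy as np

    n = 64
    pattern = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)      # toy antenna pattern
    A = np.array([np.convolve(np.eye(n)[i], pattern, "same") for i in range(n)])

    x_true = np.zeros(n); x_true[[20, 24, 45]] = [1.0, 0.8, 1.2]
    y = A @ x_true + 0.01 * np.random.default_rng(0).normal(size=n)

    lam, x = 0.05, np.ones(n)
    for _ in range(50):                       # IRLS: each step is a reweighted ridge solve
        W = np.diag(lam / (np.abs(x) + 1e-6))
        x = np.linalg.solve(A.T @ A + W, A.T @ y)
    print("recovered peaks near:", np.sort(np.argsort(x)[-3:]))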
Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie
2017-11-01
While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
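A hedged sketch of the three ingredients named above, using scikit-learn's KernelPCA as a stand-in: learn a nonlinear (kernel) model of the temporal profiles, keep only a few components (low rank in feature space), and map back to image space with the built-in preimage approximation. The data-consistency step against acquired k-space is omitted, and the data are random placeholders.

    import numpy as np
    from sklearn.decomposition import KernelPCA

    frames = np.random.rand(40, 1024)        # dynamic series: time frames x voxels
    kpca = KernelPCA(n_components=5, kernel="rbf", fit_inverse_transform=True)
    low_rank_feats = kpca.fit_transform(frames)          # rank-5 model in feature space
    approx = kpca.inverse_transform(low_rank_feats)      # preimaging back to image space
    print("relative residual:", np.linalg.norm(frames - approx) / np.linalg.norm(frames))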
A Computer-Based Game That Promotes Mathematics Learning More than a Conventional Approach
ERIC Educational Resources Information Center
McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.; Forlizzi, Jodi
2017-01-01
Excitement about learning from computer-based games has been palpable in recent years and has led to the development of many educational games. However, there are relatively few sound empirical studies in the scientific literature that have shown the benefits of learning mathematics from games as opposed to more traditional approaches. The…
ERIC Educational Resources Information Center
Jungert, Tomas; Rosander, Michael
2009-01-01
The purpose of this study was to investigate the relationship between student influence, students' strategic approaches to studying and academic achievement, and to examine differences between students in a Master's programme in Engineering with conventional teaching and one based on problem-based learning in a sample of 268 students. A version of…
Barriers to the Entry of Biofield Healing Into “Mainstream” Healthcare
Sprengel, Meredith; Ives, John A.; Jonas, Wayne
2015-01-01
In this article, we describe barriers to the entry of biofield healing into mainstream contemporary science and clinical practice. We focus on obstacles that arise from the social nature of the scientific enterprise, an aspect of science highlighted by the influential work of Thomas Kuhn (1922-1996), one of the most important, and controversial, philosophers of science in the 20th century. Kuhn analyzed science and its revolutionary changes in terms of the dynamics within scientific communities. Kuhn's approach helps us understand unconventional medical theories and practices such as biofield healing. For many years, these were called “complementary and alternative medicine” (CAM). However, because most people use nonmainstream approaches in conjunction with conventional treatments, the National Institutes of Health and many practitioners now prefer “complementary and integrative medicine” (CIM), where integrative implies “bringing conventional and complementary approaches together in a coordinated way.” Biofield healing fits the integrative model well, provides a novel approach to therapeutic intervention, and is developing in a manner that can integrate with current medical science in simple ways. Yet, it still remains outside the conventional framework because of its conceptual bases, which contrast sharply with conventional assumptions regarding the nature of reality. PMID:26665046
A new window of opportunity to reject process-based biotechnology regulation
Marchant, Gary E; Stevens, Yvonne A
2015-01-01
The question of whether biotechnology regulation should be based on the process or the product has long been debated, with different jurisdictions adopting different approaches. The European Union has adopted a process-based approach, Canada has adopted a product-based approach, and the United States has implemented a hybrid system. With the recent proliferation of new methods of genetic modification, such as gene editing, process-based regulatory systems, which are premised on a binary system of transgenic and conventional approaches, will become increasingly obsolete and unsustainable. To avoid unreasonable, unfair and arbitrary results, nations that have adopted process-based approaches will need to migrate to a product-based approach that considers the novelty and risks of the individual trait, rather than the process by which that trait was produced. This commentary suggests some approaches for the design of such a product-based approach. PMID:26930116
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
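The conventional MRF reconstruction referenced above matches each voxel's measured signal evolution to the closest atom of a precomputed dictionary. A sketch of that matching step (inner-product maximization over a normalized dictionary) follows; the ML approach in the paper iterates beyond this starting point, which is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    dictionary = rng.normal(size=(500, 64))          # atoms: candidate (T1, T2) signal evolutions
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    signal = dictionary[123] + 0.05 * rng.normal(size=64)   # noisy voxel time course

    best = np.argmax(np.abs(dictionary @ signal))    # template matching
    print("matched atom:", best)                     # expected: 123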
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, has been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface from similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
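A sketch of the two-stage idea described above: estimate the current state from the most similar previously observed radar signatures (a k-nearest-neighbour retrieval step), then smooth the estimate with a moving majority vote over the most recent predictions. Feature dimensions, classes, and window sizes are invented for illustration.

    import numpy as np
    from collections import Counter, deque

    history_X = np.random.rand(200, 5)                 # past radar signatures
    history_y = np.random.choice(["dry", "wet", "icy"], 200)
    recent = deque(maxlen=5)                           # recent raw estimates

    def predict(signal, k=7):
        dists = np.linalg.norm(history_X - signal, axis=1)      # similarity function
        raw = Counter(history_y[np.argsort(dists)[:k]]).most_common(1)[0][0]
        recent.append(raw)
        return Counter(recent).most_common(1)[0][0]             # moving-window calibration

    print(predict(np.random.rand(5)))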
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
Identifying novel biomarkers in sarcoidosis using genome-based approaches
Knox, Kenneth S.; Garcia, Joe G.N.
2015-01-01
Synopsis: We briefly review conventional biomarkers used clinically to (i) support a diagnosis and (ii) monitor disease progression in patients with sarcoidosis. We describe potential new biomarkers identified by genome-wide screening and the approaches used to discover these biomarkers. PMID:26593137
Gajjar, Ketan; Ahmadzai, Abdullah A.; Valasoulis, George; Trevisan, Júlio; Founta, Christina; Nasioutziki, Maria; Loufopoulos, Aristotelis; Kyrgiou, Maria; Stasinou, Sofia Melina; Karakitsos, Petros; Paraskevaidis, Evangelos; Da Gama-Rose, Bianca; Martin-Hirsch, Pierre L.; Martin, Francis L.
2014-01-01
Background Subjective visual assessment of cervical cytology is flawed, and this can manifest itself as inter- and intra-observer variability, resulting ultimately in discordance between the grading categorisation of samples in screening and representative histology. Biospectroscopy methods have been suggested as sensor-based tools that can deliver objective assessments of cytology. However, studies to date have apparently been limited by a corresponding lack of diagnostic efficiency when samples have previously been classed using cytology screening. This raises the question as to whether categorisation of cervical cytology based on imperfect conventional screening reduces the diagnostic accuracy of biospectroscopy approaches; are the latter methods more accurate, diagnosing underlying disease? The purpose of this study was to compare the objective accuracy of infrared (IR) spectroscopy of cervical cytology samples using conventional cytology vs. histology-based categorisation. Methods Within a typical clinical setting, a total of n = 322 liquid-based cytology samples were collected immediately before biopsy. Of these, it was possible to acquire subsequent histology for n = 154. Cytology samples were categorised according to conventional screening methods and subsequently interrogated employing attenuated total reflection Fourier-transform IR (ATR-FTIR) spectroscopy. IR spectra were pre-processed and analysed using linear discriminant analysis. Dunn's test was applied to identify differences in spectra. Within the diagnostic categories, histology allowed us to determine the comparative efficiency of conventional screening vs. biospectroscopy in correctly identifying either true atypia or underlying disease. Results Conventional cytology-based screening results in poor sensitivity and specificity. IR spectra derived from cervical cytology do not appear to discriminate in a diagnostic fashion when categories are based on conventional screening. Scores plots of IR spectra exhibit marked crossover of spectral points between different cytological categories. Although significant differences between spectral bands in different categories are noted, crossover samples point to the potential for poor specificity and hamper the development of biospectroscopy as a diagnostic tool. However, when histology-based categories are used to conduct the analyses, the scores plots of IR spectra exhibit markedly better segregation. Conclusions Histology demonstrates that ATR-FTIR spectroscopy of liquid-based cytology identifies the presence of underlying atypia or disease missed in conventional cytology screening. This study points to an urgent need for a future biospectroscopy study in which categories are based on such histology; this will allow validation of the approach as a screening tool. PMID:24404130
Use of the ICRP system for the protection of marine ecosystems.
Telleria, D; Cabianca, T; Proehl, G; Kliaus, V; Brown, J; Bossio, C; Van der Wolf, J; Bonchuk, I; Nilsen, M
2015-06-01
The International Commission on Radiological Protection (ICRP) recently reinforced the international system of radiological protection, initially focused on humans, by identifying principles of environmental protection and proposing a framework for assessing impacts of ionising radiation on non-human species, based on a reference flora and fauna approach. For this purpose, ICRP developed dosimetric models for a set of Reference Animals and Plants, which are representative of flora and fauna in different environments (terrestrial, freshwater, marine), and produced criteria based on information on radiation effects, with the aim of evaluating the level of potential or actual radiological impacts, and as an input for decision making. The approach developed by ICRP for flora and fauna is consistent with the approach used to protect humans. The International Atomic Energy Agency (IAEA) includes considerations on the protection of the environment in its safety standards, and is currently developing guidelines to assess radiological impacts based on the aforementioned ICRP approach. This paper presents the method developed by IAEA, in a series of meetings with international experts, to enable assessment of the radiological impact to the marine environment in connection with the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter 1972 (London Convention 1972). This method is based on IAEA's safety standards and ICRP's recommendations, and was presented in 2013 for consideration by representatives of the contracting parties of the London Convention 1972; it was approved for inclusion in its procedures, and is in the process of being incorporated into guidelines.
Mohod, Ashish V; Subudhi, Abhijeet S; Gogate, Parag R
2017-05-01
Using sustainable feedstocks such as non-edible oils for biodiesel production can be a cost-effective approach considering the ever-growing interest in renewable energy and the problems with existing production approaches. However, due to their high free fatty acid content, non-edible oils require considerable preprocessing before the actual transesterification reaction for biodiesel production. The present work focuses on intensification of the esterification reaction used as a preprocessing step, based on acoustic and hydrodynamic cavitation, also presenting a comparison with the conventional approach. Karanja oil with an initial acid value of 14.15 mg of KOH/g of oil was used as a sustainable feedstock. The effects of operating parameters such as molar ratio, catalyst loading, temperature and type of catalyst (sulfuric acid and Amberlyst-15) on the acid value reduction have been investigated. The maximum reduction in the acid value (final acid value of 2.7 mg of KOH/g of oil) was obtained using acoustic cavitation at an optimum oil-to-methanol molar ratio of 1:5 and 2% sulfuric acid loading at ambient temperature. In the case of hydrodynamic cavitation, the acid value reduced to 4.2 mg of KOH/g of oil under optimized conditions of first-stage processing. In the second-stage esterification using hydrodynamic cavitation and the conventional approach, the final acid values were 3.6 and 3.8 mg of KOH/g of oil, respectively. Energy requirement analysis for the ultrasound and conventional approaches clearly established the superiority of the ultrasound-based approach. The present study clearly demonstrated that significant intensification benefits can be obtained, in terms of reductions in the molar ratio and operating temperature, for acoustic cavitation as compared to the conventional approach, with somewhat lower effects for hydrodynamic cavitation. Copyright © 2016 Elsevier B.V. All rights reserved.
TOPS On-Line: Automating the Construction and Maintenance of HTML Pages
NASA Technical Reports Server (NTRS)
Jones, Kennie H.
1994-01-01
After the Technology Opportunities Showcase (TOPS) in October 1993, Langley Research Center's (LaRC) Information Systems Division (ISD) accepted the challenge to preserve the investment in information assembled in the TOPS exhibits by establishing a data base. Following the lead of several people at LaRC and others around the world, the HyperText Transfer Protocol (HTTP) server and Mosaic were the obvious tools of choice for implementation. Initially, some TOPS exhibitors began the conventional approach of constructing HyperText Markup Language (HTML) pages of their exhibits as input to Mosaic. Considering the number of pages to construct, a better approach was conceived that would automate the construction of pages. This approach allowed completion of the data base construction in a shorter period of time using fewer resources than would have been possible with the conventional approach. It also provided flexibility for the maintenance and enhancement of the data base. Since that time, this approach has been used to automate construction of other HTML data bases. Through these experiences, it is concluded that the most effective use of the HTTP/Mosaic technology will require better tools and techniques for creating, maintaining and managing the HTML pages. The development and use of these tools and techniques are the subject of this document.
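A modern sketch of the page-generation idea: emit one HTML page per exhibit record from structured data instead of hand-writing each page. The record fields and template are invented; the original 1994 tooling was, of course, different.

    exhibits = [
        {"title": "Exhibit A", "body": "Description of exhibit A."},
        {"title": "Exhibit B", "body": "Description of exhibit B."},
    ]

    TEMPLATE = ("<html><head><title>{title}</title></head>"
                "<body><h1>{title}</h1><p>{body}</p></body></html>")

    for i, ex in enumerate(exhibits):
        with open(f"exhibit{i}.html", "w") as f:     # one generated page per record
            f.write(TEMPLATE.format(**ex))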
Controlling bridging and pinching with pixel-based mask for inverse lithography
NASA Astrophysics Data System (ADS)
Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan
2016-03-01
Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years, as it can produce mask output that results in process latitude and CD control in the fab that is hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain of mask pixels was suggested in the 2006 paper by Y. Granik, "Fast pixel-based mask optimization for inverse lithography". The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Namely, objective functions are suggested that express penalties for constraint violations, and their minimization with gradient descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC, and has been applied in 14 nm and 10 nm node production, in single- and multiple-patterning flows.
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
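A sketch of a matrix-based (Euclidean/chordal) average of orientations, contrasted with the arithmetic averaging of Euler angles criticized above: take the arithmetic mean of the rotation matrices and project it back onto SO(3) via the SVD. The angle values are arbitrary test data.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    rots = R.from_euler("xyz", [[10, 40, 5], [14, 44, 9], [6, 36, 1]], degrees=True)

    # Flawed approach: average the Euler-angle parameters directly.
    euler_mean = rots.as_euler("xyz", degrees=True).mean(axis=0)

    # Matrix-based approach: mean matrix, projected to the nearest rotation.
    M = rots.as_matrix().mean(axis=0)
    U, _, Vt = np.linalg.svd(M)
    Rm = U @ np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))]) @ Vt

    print("Euler-average: ", euler_mean)
    print("matrix-average:", R.from_matrix(Rm).as_euler("xyz", degrees=True))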
Hydrograph synthesis using LANDSAT remote sensing and the SCS models
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.
1976-01-01
The land cover requirements of the Soil Conservation Service (SCS) Model used for hydrograph synthesis in urban areas were modified to be LANDSAT compatible. The Curve Numbers obtained with these alternate land cover categories compare well with those obtained in published example problems using the conventional categories. Emergency spillway hydrographs and synthetic flood frequency flows computed for a 21.1 sq. mi. test area showed excellent agreement between the conventional aerial photo-based and the Landsat-based SCS approaches.
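The land cover classes feed the model through the SCS curve number (CN). A sketch of the standard SCS-CN runoff relation (English units): potential retention S = 1000/CN - 10, initial abstraction Ia = 0.2*S, and runoff depth Q = (P - 0.2*S)**2 / (P + 0.8*S) for rainfall P exceeding Ia.

    def scs_runoff(P_in, CN):
        S = 1000.0 / CN - 10.0            # potential maximum retention (inches)
        Ia = 0.2 * S                      # initial abstraction
        return 0.0 if P_in <= Ia else (P_in - Ia) ** 2 / (P_in - Ia + S)

    print(scs_runoff(P_in=3.0, CN=80))    # 1.25 inches of runoff for a 3-inch storm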
Global Optimization of Emergency Evacuation Assignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Lee; Yuan, Fang; Chin, Shih-Miao
2006-01-01
Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
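A sketch of the one-destination idea under simple assumptions: connect every real shelter to a single virtual super-destination with zero-cost links, then solve one network flow problem so evacuees are assigned routes and destinations jointly. This uses the networkx package; the toy network, capacities, and travel costs are invented.

    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("zoneA", "x1", capacity=60, weight=4)
    G.add_edge("zoneA", "x2", capacity=40, weight=6)
    G.add_edge("x1", "shelter1", capacity=50, weight=3)
    G.add_edge("x2", "shelter2", capacity=50, weight=2)
    G.add_edge("x1", "x2", capacity=30, weight=1)

    for shelter in ("shelter1", "shelter2"):
        G.add_edge(shelter, "SUPER", capacity=100, weight=0)   # virtual destination

    G.nodes["zoneA"]["demand"] = -80       # 80 evacuees leave zoneA
    G.nodes["SUPER"]["demand"] = 80        # all arrive at the virtual sink

    flow = nx.min_cost_flow(G)
    print(flow["zoneA"])                   # optimal split of evacuees over outgoing roads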
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by low-dose regions a few hundred micrometres wide. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
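A minimal sketch of the kernel-based half of such a hybrid engine: dose is approximated as the microbeam fluence pattern convolved with a scatter kernel; a Monte Carlo component would replace this near strong inhomogeneities. The beam geometry and exponential kernel are toy choices, not the paper's data.

    import numpy as np
    from scipy.signal import fftconvolve

    x = np.arange(1200)                              # position in micrometres
    fluence = (x % 400 < 50).astype(float)           # 50 um microbeams at 400 um pitch

    t = np.arange(-200, 201)
    kernel = np.exp(-np.abs(t) / 10.0)               # toy lateral scatter kernel
    kernel /= kernel.sum()

    dose = fftconvolve(fluence, kernel, mode="same")
    print("peak-to-valley dose ratio:", dose.max() / dose[200:1000].min())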
Puncturing the myths of acupuncture.
Mallory, Molly J; Do, Alexander; Bublitz, Sara E; Veleber, Susan J; Bauer, Brent A; Bhagra, Anjali
2016-09-01
Acupuncture is a widely practiced system of medicine that has been in place for thousands of years. Consumer interest and use of acupuncture are becoming increasingly popular in the United States, as it is used to treat a multitude of symptoms and disease processes as well as to maintain health and prevent illness. A growing body of evidence increasingly validates the practice of acupuncture. Further developing scientific data will play an important role in the future of acupuncture and other complementary and alternative medicines in public health. Acupuncture is commonly used concurrently with conventional medicine. Although acupuncture is embraced by consumers and medical professionals, misconceptions abound. We have explored and dispelled ten misconceptions common to the practice of acupuncture, utilizing an evidence-based approach. As the trend of merging conventional medical care with acupuncture treatment grows, it is important to develop a conceptual model of integrative medicine. Using a scientific evidence approach will create a structure from which to begin and grow confidence among conventional medical providers. Acupuncture is a safe and effective modality when performed properly by trained professionals. Educating both the consumer and medical community is important to enable appropriate and evidence-based applications of acupuncture and integration with conventional medicine for high-quality patient care.
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
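A minimal sketch of the conventional gridding step discussed above: each non-Cartesian k-space sample is spread onto neighbouring Cartesian grid points with a Kaiser-Bessel kernel (np.i0 is the zeroth-order modified Bessel function of the first kind). The kernel width and beta are common illustrative choices, not a tuned design.

    import numpy as np

    def kb_kernel(u, width=4.0, beta=8.0):
        arg = 1 - (2 * u / width) ** 2
        val = np.i0(beta * np.sqrt(np.maximum(arg, 0))) / np.i0(beta)
        return np.where(arg > 0, val, 0.0)

    grid = np.zeros(128, dtype=complex)
    k_loc, sample = 37.3, 1.0 + 0.5j                   # one non-Cartesian sample
    near = np.arange(int(k_loc) - 2, int(k_loc) + 3)   # neighbouring grid points
    grid[near] += sample * kb_kernel(near - k_loc)     # spread with KB weights
    print(np.round(grid[near], 3))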
Novais, J L; Titchener-Hooker, N J; Hoare, M
2001-10-20
Time to market, cost effectiveness, and flexibility are key issues in today's biopharmaceutical market. Bioprocessing plants based on fully disposable, presterilized, and prevalidated components appear to be an attractive alternative to conventional stainless steel plants, potentially allowing for shorter implementation times, smaller initial investments, and increased flexibility. To evaluate the economic case for such an alternative it was necessary to develop an appropriate costing model which allows an economic comparison between conventional and disposables-based engineering to be made. The production of an antibody fragment from an E. coli fermentation was used to provide a case study for both routes. The conventional bioprocessing option was costed through available models, which were then modified to account for the intrinsic differences observed in a disposables-based option. The outcome of the analysis indicates that the capital investment required for a disposables-based option is substantially reduced, at less than 60% of that for a conventional option. The disposables-based running costs were evaluated as being 70% higher than those of the conventional equivalent. Despite this higher value, the net present value (NPV) of the disposables-based plant is positive and within 25% of that for the conventional plant. Sensitivity analysis performed on key variables indicated the robustness of the economic analysis presented. In particular, a 9-month reduction in time to market arising from the adoption of a disposables-based approach results in an NPV identical to that of the conventional option. Finally, the effect of any possible loss in yield resulting from the use of disposables was also examined. This had only a limited impact on the NPV: for example, a 50% lower yield in the disposable chromatography step results in a 10% reduction of the disposable NPV. The results provide the necessary framework for the economic comparison of disposables and conventional bioprocessing technologies. Copyright 2001 John Wiley & Sons, Inc.
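A sketch of the net-present-value comparison underlying such an analysis: discount each year's cash flow by (1 + r)^t and sum. The capital and running-cost figures below are placeholders; only their relative shape (lower capital, higher running costs for the disposables route) follows the text.

    def npv(cashflows, rate=0.1):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    revenue = 30.0                                     # annual revenue, arbitrary units
    conventional = [-100.0] + [revenue - 10.0] * 10    # high capital, lower running cost
    disposable = [-60.0] + [revenue - 17.0] * 10       # <60% capital, ~70% higher running cost

    print("conventional NPV:", round(npv(conventional), 1))
    print("disposable NPV:  ", round(npv(disposable), 1))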
Protecting Biodiversity when Money Matters: Maximizing Return on Investment
Underwood, Emma C.; Shaw, M. Rebecca; Wilson, Kerrie A.; Kareiva, Peter; Klausmeyer, Kirk R.; McBride, Marissa F.; Bode, Michael; Morrison, Scott A.; Hoekstra, Jonathan M.; Possingham, Hugh P.
2008-01-01
Background Conventional wisdom identifies biodiversity hotspots as priorities for conservation investment because they capture dense concentrations of species. However, density of species does not necessarily imply conservation ‘efficiency’. Here we explicitly consider conservation efficiency in terms of species protected per dollar invested. Methodology/Principal Findings We apply a dynamic return on investment approach to a global biome and compare it with three alternate priority setting approaches and a random allocation of funding. After twenty years of acquiring habitat, the return on investment approach protects between 32% and 69% more species compared to the other priority setting approaches. To correct for potential inefficiencies of protecting the same species multiple times we account for the complementarity of species, protecting up to three times more distinct vertebrate species than alternate approaches. Conclusions/Significance Incorporating costs in a return on investment framework expands priorities to include areas not traditionally highlighted as priorities based on conventional irreplaceability and vulnerability approaches. PMID:18231601
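A toy greedy version of the return-on-investment idea, with complementarity handled by counting only newly protected species, might look as follows; the parcel data are invented, and the paper's dynamic optimisation is more sophisticated than this sketch.

```python
# Toy greedy return-on-investment selection: at each step buy the parcel
# that adds the most *unprotected* species per dollar (complementarity).
# Parcel contents and costs are invented for illustration.
parcels = {
    "A": ({"sp1", "sp2", "sp3"}, 3.0),   # (species present, cost in $M)
    "B": ({"sp2", "sp3"}, 1.0),
    "C": ({"sp4"}, 0.5),
    "D": ({"sp1", "sp4", "sp5"}, 2.0),
}
budget, protected, bought = 4.0, set(), []
while True:
    best = max(
        ((name, len(sp - protected) / cost)
         for name, (sp, cost) in parcels.items()
         if name not in bought and cost <= budget),
        key=lambda t: t[1], default=None)
    if best is None or best[1] == 0:
        break
    name = best[0]
    sp, cost = parcels[name]
    bought.append(name); protected |= sp; budget -= cost
print(bought, protected)   # e.g. ['B', 'C', 'D'] protects all 5 species
```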
The German Aortic Valve Registry (GARY): in-hospital outcome
Hamm, Christian W.; Möllmann, Helge; Holzhey, David; Beckmann, Andreas; Veit, Christof; Figulla, Hans-Reiner; Cremer, J.; Kuck, Karl-Heinz; Lange, Rüdiger; Zahn, Ralf; Sack, Stefan; Schuler, Gerhard; Walther, Thomas; Beyersdorf, Friedhelm; Böhm, Michael; Heusch, Gerd; Funkat, Anne-Kathrin; Meinertz, Thomas; Neumann, Till; Papoutsis, Konstantinos; Schneider, Steffen; Welz, Armin; Mohr, Friedrich W.
2014-01-01
Background Aortic stenosis is a frequent valvular disease, especially in elderly patients. Catheter-based valve implantation has emerged as a valuable treatment approach for those patients who are either at very high risk for conventional surgery or deemed inoperable. The German Aortic Valve Registry (GARY) provides data on conventional and catheter-based aortic procedures on an all-comers basis. Methods and results A total of 13 860 consecutive patients undergoing repair for aortic valve disease [conventional surgery and transvascular (TV) or transapical (TA) catheter-based techniques] were enrolled in this registry during 2011, and baseline, procedural, and outcome data were acquired. The registry summarizes the results of 6523 conventional aortic valve replacements without (AVR) and 3464 with concomitant coronary bypass surgery (AVR + CABG) as well as 2695 TV AVI and 1181 TA interventions (TA AVI). Patients undergoing catheter-based techniques were significantly older and had higher risk profiles. The stroke rate was low in all groups: 1.3% (AVR), 1.9% (AVR + CABG), 1.7% (TV AVI), and 2.3% (TA AVI). The in-hospital mortality was 2.1% (AVR) and 4.5% (AVR + CABG) for patients undergoing conventional surgery, and 5.1% (TV AVI) and 7.7% (TA AVI) for catheter-based techniques. Conclusion The in-hospital outcome results of this registry show that conventional surgery yields excellent results in all risk groups and that catheter-based aortic valve replacement is an alternative to conventional surgery in high-risk and elderly patients. PMID:24022003
Automated audiometry using apple iOS-based application technology.
Foulad, Allen; Bui, Peggy; Djalilian, Hamid
2013-11-01
The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Prospective diagnostic study conducted at an academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry over conventional audiometry. Apple iOS-based devices provide a platform for automated air conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.
The Resilience Assessment Framework: a common indicator for land management?
NASA Astrophysics Data System (ADS)
Cowie, Annette; Metternicht, Graciela; O'Connell, Deborah
2015-04-01
At the Rio+20 conference in June 2012, the United Nations Convention to Combat Desertification (UNCCD), the Convention on Biological Diversity (CBD), and the United Nations Framework Convention on Climate Change (UNFCCC) reinforced their mutual interests in building linkages between biodiversity conservation, sustainable land management, and climate change mitigation and adaptation. The UNCCD sees building resilience of agro-ecosystems as a common interest that could strengthen linkages between the conventions and deliver synergies in progressing the goals of each. Furthermore, enhancing resilience of productive agro-ecosystems is fundamental to food security and sustainable development, and thus aligns with the Sustainable Development Goals (SDGs). The Global Environment Facility (GEF) shares the interest of the conventions in building resilience in agro-ecosystems. Indicators of resilience are required for monitoring progress in these endeavors; application of a common indicator between the UNCCD, UNFCCC, and CBD as a measure of both land-based adaptation and ecosystem resilience could strengthen links between the conventions and increase attention to the broad benefits of improved land management. Consequently, the Scientific and Technical Advisory Panel (STAP) to the GEF commissioned the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to produce a report reviewing the conceptual basis of resilience and proposing an indicator approach that could meet the needs of the conventions and the GEF for an indicator of agro-ecosystem resilience and land-based adaptation. The paper presents a synthesis of scientific understanding of resilience in agro-ecosystems, reviews indicators that have been proposed, and, having concluded that none of the extant indicator approaches adequately assesses resilience of agro-ecosystems, proposes a new approach to the assessment of resilience. Recognizing that no single indicator of resilience is applicable to all agro-ecosystems, and that involvement of stakeholders is critical to discerning the critical variables to be assessed, the proposed framework uses an iterative participatory approach to characterise the system (considering interactions across and within scales), identify the controlling variables, and assess proximity to thresholds and adaptive capacity. The framework consists of four elements: Element A: System description; Element B: Assessing the system; Element C: Adaptive governance and management; Element D: Participatory process. Element D is a cross-cutting element, applying across Elements A to C, although Elements A and B can be applied as a desktop activity in a preliminary assessment. The results of the assessment are synthesised in "resilience action indicators" that summarise the state of the system with respect to the need to adapt or transform. The presentation will summarise the framework and the responses of expert reviewers, who identified strengths of the approach and challenges for implementation, particularly at program and national scales. The presentation will emphasise the conceptual basis for the approach and the role of scientists in testing, refining, and operationalizing the approach.
Emulating natural disturbance regimes: an emerging approach for sustainable forest management
M. North; W Keeton
2008-01-01
Sustainable forest management integrates ecological, social, and economic objectives. To achieve the former, researchers and practitioners are modifying silvicultural practices based on concepts from successional and landscape ecology to provide a broader array of ecosystem functions than is associated with conventional approaches. One...
Computational Design of a Krueger Flap Targeting Conventional Slat Aerodynamics
NASA Technical Reports Server (NTRS)
Akaydin, H. Dogus; Housman, Jeffrey A.; Kiris, Cetin C.; Bahr, Christopher J.; Hutcheson, Florence V.
2016-01-01
In this study, we demonstrate the design of a Krueger flap as a substitute for a conventional slat in a high-lift system. This notional design, with the objective of matching equivalent-mission performance on aircraft approach, was required for a comparative aeroacoustic study with computational and experimental components. We generated a family of high-lift systems with Krueger flaps based on a set of design parameters. Then, we evaluated the high-lift systems using steady 2D RANS simulations to find a good match for the conventional slat, based on total lift coefficients in free-air. Finally, we evaluated the mean aerodynamics of the high-lift systems with Krueger flap and conventional slat as they were installed in an open-jet wind tunnel flow. The surface pressures predicted with the simulations agreed well with experimental results.
A new approach to modeling aviation accidents
NASA Astrophysics Data System (ADS)
Rao, Arjun Harsha
General Aviation (GA) is a catchall term for all aircraft operations in the US that are not categorized as commercial operations or military flights. GA aircraft account for almost 97% of the US civil aviation fleet. Unfortunately, GA flights have a much higher fatal accident rate than commercial operations. Recent estimates by the Federal Aviation Administration (FAA) showed that the GA fatal accident rate has remained relatively unchanged between 2010 and 2015, with 1566 fatal accidents accounting for 2650 fatalities. Several research efforts have been directed towards better understanding the causes of GA accidents. Many of these efforts use National Transportation Safety Board (NTSB) accident reports and data. Unfortunately, while these studies easily identify the top types of accidents (e.g., inflight loss of control (LOC)), they usually cannot identify why these accidents are happening. Most NTSB narrative reports for GA accidents are very short (many are only one paragraph long), and do not contain much information on the causes (likely because the causes were not fully identified). NTSB investigators also code each accident using an event-based coding system, which should facilitate identification of patterns and trends in causation, given the high number of GA accidents each year. However, this system is susceptible to investigator interpretation and error, meaning that two investigators may code the same accident differently, or omit applicable codes. To facilitate a potentially better understanding of GA accident causation, this research develops a state-based approach to check for logical gaps or omissions in NTSB accident records, and potentially fill in the omissions. The state-based approach offers more flexibility as it moves away from the conventional event-based representation of accidents, which classifies events in accidents into several categories such as causes, contributing factors, findings, occurrences, and phase of flight. The method views aviation accidents as a set of hazardous states of a system (pilot and aircraft), and triggers that cause the system to move between hazardous states. I used the NTSB's accident coding manual (which contains nearly 4000 different codes) to develop a "dictionary" of hazardous states, triggers, and information codes. Then, I created the "grammar", or set of rules, that: (1) orders the hazardous states in each accident; and, (2) links the hazardous states using the appropriate triggers. This approach: (1) provides a more correct count of the causes for accidents in the NTSB database; and, (2) checks for gaps or omissions in NTSB accident data, and fills in some of these gaps using logic-based rules. These rules also help identify and count causes for accidents that were not discernable from previous analyses of historical accident data. I apply the model to 6200 helicopter accidents that occurred in the US between 1982 and 2015. First, I identify the states and triggers that are most likely to be associated with fatal and non-fatal accidents. The results suggest that non-fatal accidents, which account for approximately 84% of the accidents, provide valuable opportunities to learn about the causes for accidents. Next, I investigate the causes of inflight loss of control using both a conventional approach and the state-based approach. The conventional analysis provides little insight into the causal mechanism for LOC.
For instance, the top cause of LOC is "aircraft control/directional control not maintained", which does not provide any insight. In contrast, the state-based analysis showed that pilots' tendency to clip objects frequently triggered LOC (16.7% of LOC accidents)--this finding was not directly discernable from conventional analyses. Finally, I investigate the causes for improper autorotations using both a conventional approach and the state-based approach. The conventional approach uses modifiers (e.g., "improper", "misjudged") associated with "24520: Autorotation" to identify improper autorotations in the pre-2008 system. In the post-2008 system, the NTSB represents autorotation as a phase of flight, which has no modifier--making it impossible to determine if the autorotation was unsuccessful. In contrast, the state-based analysis identified 632 improper autorotation accidents, compared to 174 with a conventional analysis. Results from the state-based analysis show that not maintaining rotor RPM and improper flare were among the top reasons for improper autorotations. The presence of the "not possible" trigger in 11.6% of improper autorotations suggests that in those cases it was impossible to make an autorotative landing. Improper use of collective is the sixth most frequent trigger for improper autorotation. Correct use of collective pitch control is crucial to maintain rotor RPM during an autorotation (considering that engines are generally not operational during autorotations).
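A minimal sketch of the state-based representation described above: an accident becomes an ordered chain of hazardous states linked by triggers, with a simple "grammar" check. The state and trigger names are invented for illustration and are not actual NTSB codes.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    trigger: str        # event that moves the system between hazardous states
    to_state: str       # resulting hazardous state

# One toy helicopter accident encoded as a state chain (names invented).
accident = {
    "initial_state": "low_rotor_rpm",
    "chain": [
        Transition(trigger="improper_collective_use", to_state="improper_autorotation"),
        Transition(trigger="improper_flare", to_state="hard_landing"),
    ],
}

# A simple "grammar" rule: every transition must land in a known state,
# which is how gaps or omissions in a coded record can be flagged.
known_states = {"low_rotor_rpm", "improper_autorotation", "hard_landing"}
assert accident["initial_state"] in known_states
assert all(t.to_state in known_states for t in accident["chain"])
print("record passes the grammar check")
```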
High intensity ion beams from an atmospheric pressure inductively coupled plasma
NASA Astrophysics Data System (ADS)
Al Moussalami, S.; Chen, W.; Collings, B. A.; Douglas, D. J.
2002-02-01
This work is directed towards substantially improving the sensitivity of an inductively coupled plasma mass spectrometer (ICP-MS). Ions produced in the ICP at atmospheric pressure have been extracted with comparatively high current densities. The conventional approach to ion extraction, based on a skimmed molecular beam, has been abandoned, and a high extraction field arrangement has been adopted. Although the new approach is not optimized, current densities more than 180 times greater than that of a conventional interface have been extracted, and analyte sensitivities ~10-100× greater than those reported previously for quadrupole ICP-MS have been measured.
Expectation-Based Control of Noise and Chaos
NASA Technical Reports Server (NTRS)
Zak, Michael
2006-01-01
A proposed approach to control of noise and chaos in dynamic systems would supplement conventional methods. The approach is based on fictitious forces composed of expectations governed by Fokker-Planck or Liouville equations that describe the evolution of the probability densities of the controlled parameters. These forces would be utilized as feedback control forces that would suppress the undesired diffusion of the controlled parameters. Examples of dynamic systems in which the approach is expected to prove beneficial include spacecraft, electronic systems, and coupled lasers.
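The flavor of expectation-based feedback can be sketched with a toy Langevin simulation in which a fictitious force pulls each trajectory toward the ensemble expectation, suppressing diffusion of the controlled parameter. The gain and dynamics below are invented, and the paper's Fokker-Planck/Liouville formulation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt, sigma, k = 2000, 500, 0.01, 1.0, 4.0
x = np.zeros(n)                                # ensemble of trajectories
for _ in range(steps):
    u = -k * (x - x.mean())                    # expectation-based feedback force
    x += u * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
print("controlled spread:", x.std())           # ~ sigma/sqrt(2k) ~= 0.35
print("uncontrolled spread would be ~", np.sqrt(steps * dt) * sigma)
```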
van Barneveld, E M; Lamers, L M; van Vliet, R C; van de Ven, W P
2000-02-01
Under inadequate capitation formulae competing health insurers have an incentive for cream skimming, i.e., the selection of enrollees whom the insurer expects to be profitable. When evaluating different capitation formulae, previous studies used various indicators of incentives for cream skimming. These conventional indicators are based on all actual profits and losses or on all predictable profits and losses. For the latter type of indicators, this paper proposes, as a new approach, to ignore the small predictable profits and losses. We assume that this new approach provides a better indication of the size of the cream skimming problem than the conventional one, because an insurer has to take into account its costs of cream skimming and the (statistical) uncertainties about the net benefits of cream skimming. Both approaches are applied in theoretical and empirical analyses. The results show that, if our assumption is right, the problem of cream skimming is overestimated by the conventional ways of measuring incentives for cream skimming, especially in the case of relatively good capitation formulae.
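The proposed modification, ignoring small predictable profits and losses, can be sketched as follows; the data and the particular indicator (mean absolute predictable result, zeroing out small values) are simplifying assumptions, not the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(3)
predictable = rng.normal(0, 150, size=10_000)   # per-enrollee predictable result, $

def incentive(values, threshold=0.0):
    """Mean absolute predictable profit/loss, ignoring values below threshold."""
    kept = np.where(np.abs(values) > threshold, np.abs(values), 0.0)
    return kept.mean()

print("conventional indicator:      ", round(incentive(predictable), 1))
print("ignoring small (< $200):     ", round(incentive(predictable, 200.0), 1))
# The thresholded indicator is smaller, illustrating how the conventional
# approach can overstate the cream-skimming problem.
```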
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The algorithm is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
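A short sketch of the univariate Bernstein basis underlying such networks, verifying the two properties the abstract highlights (nonnegativity and the partition-of-unity behavior that makes the basis interpretable as fuzzy memberships):

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """All n+1 Bernstein polynomials B_{i,n}(x) evaluated on x in [0, 1]."""
    x = np.asarray(x)
    return np.stack([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

x = np.linspace(0, 1, 101)
B = bernstein_basis(4, x)
assert np.all(B >= 0)                       # nonnegativity (valid memberships)
assert np.allclose(B.sum(axis=0), 1.0)      # partition of unity on [0, 1]
print("degree-4 basis verified on", x.size, "points")
```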
We advocate an approach to reduce the anticipated increase in stormwater runoff from conventional development by demonstrating a low-impact development that incorporates hydrologic factors into an expanded land suitability analysis. This methodology was applied to a 3 hectare exp...
Human Benchmarking of Expert Systems. Literature Review
1990-01-01
effectiveness of the development procedures used in order to predict whether the application of similar approaches will likely have effective and...they used in their learning and problem solving. We will describe these approaches later. Reasoning. Reasoning usually includes inference. Because to ... in the software engineering process. For example, existing approaches to software evaluation in the military are based on a model of conventional
Andrews, Jason R; Prajapati, Krishna G; Eypper, Elizabeth; Shrestha, Poojan; Shakya, Mila; Pathak, Kamal R; Joshi, Niva; Tiwari, Priyanka; Risal, Manisha; Koirala, Samir; Karkey, Abhilasha; Dongol, Sabina; Wen, Shawn; Smith, Amy B; Maru, Duncan; Basnyat, Buddha; Baker, Stephen; Farrar, Jeremy; Ryan, Edward T; Hohmann, Elizabeth; Arjyal, Amit
2013-01-01
In many rural areas at risk for enteric fever, there are few data on Salmonella enterica serotypes Typhi (S. Typhi) and Paratyphi (S. Paratyphi) incidence, due to limited laboratory capacity for microbiologic culture. Here, we describe an approach that permits recovery of the causative agents of enteric fever in such settings. This approach involves the use of an electricity-free incubator based upon phase-change materials. We compared this against conventional blood culture for detection of typhoidal Salmonella. Three hundred and four patients with undifferentiated fever attending the outpatient and emergency departments of a public hospital in the Kathmandu Valley of Nepal were recruited. Conventional blood culture was compared against an electricity-free culture approach. Blood from 66 (21.7%) patients tested positive for a Gram-negative bacterium by at least one of the two methods. Sixty-five (21.4%) patients tested blood culture positive for S. Typhi (30; 9.9%) or S. Paratyphi A (35; 11.5%). Of the 65 individuals with culture-confirmed enteric fever, 55 (84.6%) were identified by the conventional blood culture and 60 (92.3%) were identified by the experimental method. Median time-to-positivity was 2 days for both procedures. The experimental approach was falsely positive due to probable skin contaminants in 2 of 239 individuals (0.8%). The percentages of positive and negative agreement for diagnosis of enteric fever were 90.9% (95% CI: 80.0%-97.0%) and 96.0% (92.7%-98.1%), respectively. After initial incubation, Salmonella isolates could be readily recovered from blood culture bottles maintained at room temperature for six months. A simple culture approach based upon a phase-change incubator can be used to isolate agents of enteric fever. This approach could be used as a surveillance tool to assess incidence and drug resistance of the etiologic agents of enteric fever in settings without reliable local access to electricity or diagnostic microbiology laboratories.
CNN based approach for activity recognition using a wrist-worn accelerometer.
Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R
2017-07-01
In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, feature engineering has dominated conventional methods, involving the difficult process of optimal feature selection. This problem has been mitigated by using a novel methodology based on a deep learning framework which automatically extracts the useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for recognition of three fundamental movements of the human forearm performed in daily life, where data is collected from four different subjects using a single wrist-worn accelerometer sensor. The validation of the proposed model is done under different pre-processing and noisy data conditions, evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8% as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machine.
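For concreteness, a minimal PyTorch 1-D CNN for windows of tri-axial accelerometer data is sketched below; the window length (128 samples) and layer sizes are assumptions for illustration, not the architecture of the cited study.

```python
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    """Illustrative 1-D CNN over (3 channels x 128 samples) accelerometer windows."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * 32, n_classes)   # 128 -> 64 -> 32 samples

    def forward(self, x):                 # x: (batch, 3, 128)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = ActivityCNN()
logits = model(torch.randn(8, 3, 128))    # 8 windows -> (8, 3) class scores
print(logits.shape)
```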
Cycle-time determination and process control of sequencing batch membrane bioreactors.
Krampe, J
2013-01-01
In this paper a method to determine the cycle time for sequencing batch membrane bioreactors (SBMBRs) is introduced. One of the advantages of SBMBRs is the simplicity of adapting them to varying wastewater composition. The benefit of this flexibility can only be fully utilised if the cycle times are optimised for the specific inlet load conditions. This requires either proactive and ongoing operator adjustment or active predictive instrument-based control. Determination of the cycle times for conventional sequencing batch reactor (SBR) plants is usually based on experience. Due to the higher mixed liquor suspended solids concentrations in SBMBRs and the limited experience with their application, a new approach to calculate the cycle time had to be developed. Based on results from a semi-technical pilot plant, the paper presents an approach for calculating the cycle time in relation to the influent concentration according to the Activated Sludge Model No. 1 and the German HSG (Hochschulgruppe) Approach. The approach presented in this paper considers the increased solid contents in the reactor and the resultant shortened reaction times. This allows for an exact calculation of the nitrification and denitrification cycles with a tolerance of only a few minutes. Ultimately the same approach can be used for a predictive control strategy and for conventional SBR plants.
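In the spirit of this load-based calculation, a heavily simplified nitrification-phase estimate is sketched below; the first-order rate law and all constants are assumptions, not the ASM1/HSG calibration used in the paper. It does, however, show why the higher MLSS of an SBMBR shortens the required reaction time.

```python
def nitrification_time_h(NH4_in_mg_l, V_m3, X_mlss_g_l,
                         r_nitr_mg_gMLSS_h=2.0):
    """Hours to nitrify one fill's ammonium load at the given MLSS.

    All kinetic constants are illustrative assumptions; higher MLSS raises
    the volumetric rate and shortens the aeration phase.
    """
    load_mg = NH4_in_mg_l * V_m3 * 1000.0                     # N load per fill
    rate_mg_h = r_nitr_mg_gMLSS_h * X_mlss_g_l * V_m3 * 1000.0
    return load_mg / rate_mg_h

print(nitrification_time_h(50.0, 10.0, X_mlss_g_l=4.0))    # conventional SBR: ~6.3 h
print(nitrification_time_h(50.0, 10.0, X_mlss_g_l=12.0))   # SBMBR: ~2.1 h
```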
On standardization of low symmetry crystal fields
NASA Astrophysics Data System (ADS)
Gajek, Zbigniew
2015-07-01
Standardization methods for low symmetry - orthorhombic, monoclinic and triclinic - crystal fields are formulated and discussed. Two alternative approaches are presented: the conventional one, based on the second-rank parameters, and the standardization based on the fourth-rank parameters. Mainly f-electron systems are considered, but some guidelines for d-electron systems and the spin Hamiltonian describing the zero-field splitting are given. The discussion focuses on premises for choosing the most suitable method, in particular on the inadequacy of the conventional one. A few examples from the literature illustrate this situation.
Modular reservoir concept for MEMS-based transdermal drug delivery systems
NASA Astrophysics Data System (ADS)
Cantwell, Cara T.; Wei, Pinghung; Ziaie, Babak; Rao, Masaru P.
2014-11-01
While MEMS-based transdermal drug delivery device development efforts have typically focused on tightly-integrated solutions, we propose an alternate conception based upon a novel, modular drug reservoir approach. By decoupling the drug storage functionality from the rest of the delivery system, this approach seeks to minimize cold chain storage volume, enhance compatibility with conventional pharmaceutical practices, and allow independent optimization of reservoir device design, materials, and fabrication. Herein, we report the design, fabrication, and preliminary characterization of modular reservoirs that demonstrate the virtue of this approach within the application context of transdermal insulin administration for diabetes management.
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
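A minimal Euclidean-distance (MED-style) detector of the kind evaluated here can be sketched as follows; the window length and threshold are assumptions, and the synthetic "event" is a sudden spike-like shift, the pattern the abstract says such detectors handle best.

```python
import numpy as np

def med_detect(X, window=48, thresh=3.0):
    """Flag time steps whose multivariate water-quality vector is far
    (in Euclidean distance) from the mean of a trailing baseline window.
    X: (T, d) standardized sensor series; returns boolean alarms (T,)."""
    alarms = np.zeros(len(X), dtype=bool)
    for t in range(window, len(X)):
        baseline = X[t - window:t].mean(axis=0)
        alarms[t] = np.linalg.norm(X[t] - baseline) > thresh
    return alarms

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # 4 standardized quality parameters
X[150:] += 2.5                       # sudden spike-like contamination signature
print("first alarms at t =", np.where(med_detect(X))[0][:5])
```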
Enhancing School Mathematics Culturally: A Path of Reconciliation
ERIC Educational Resources Information Center
Aikenhead, Glen S.
2017-01-01
Culturally responsive or place-based school mathematics that focuses on Indigenous students has an established presence in the research literature. This culture-based innovation represents a historical shift from conventional approaches to mathematics education. Moreover, it has demonstratively advanced the academic achievement for both Indigenous…
NASA Astrophysics Data System (ADS)
Pereira, Carina; Dighe, Manjiri; Alessio, Adam M.
2018-02-01
Various Computer Aided Diagnosis (CAD) systems have been developed that characterize thyroid nodules using the features extracted from the B-mode ultrasound images and Shear Wave Elastography images (SWE). These features, however, are not perfect predictors of malignancy. In other domains, deep learning techniques such as Convolutional Neural Networks (CNNs) have outperformed conventional feature extraction based machine learning approaches. In general, fully trained CNNs require substantial volumes of data, motivating several efforts to use transfer learning with pre-trained CNNs. In this context, we sought to compare the performance of conventional feature extraction, fully trained CNNs, and transfer learning based, pre-trained CNNs for the detection of thyroid malignancy from ultrasound images. We compared these approaches applied to a data set of 964 B-mode and SWE images from 165 patients. The data were divided into 80% training/validation and 20% testing data. The highest accuracies achieved on the testing data for the conventional feature extraction, fully trained CNN, and pre-trained CNN were 0.80, 0.75, and 0.83 respectively. In this application, classification using a pre-trained network yielded the best performance, potentially due to the relatively limited sample size and sub-optimal architecture for the fully trained CNN.
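The transfer-learning setup can be illustrated with the standard torchvision pattern (freeze a pretrained backbone, retrain the final layer); ResNet-18 is an assumed backbone here, not necessarily the network used in the study, and the snippet requires torchvision >= 0.13.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained CNN as a fixed feature extractor and train
# only a new head for a benign/malignant decision (downloads weights on
# first use). Backbone choice and hyperparameters are assumptions.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
logits = backbone(torch.randn(4, 3, 224, 224))         # 4 ultrasound tiles
print(logits.shape)
```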
Is the authoritative parenting model effective in changing oral hygiene behavior in adolescents?
Brukienė, Vilma; Aleksejūnienė, Jolanta
2012-12-01
This study examined whether the authoritative parenting model (APM) is more effective than conventional approaches for changing adolescent oral hygiene behavior. A total of 247 adolescents were recruited using a cluster random-sampling method. Subject groups were randomly allocated into an intervention group (APM-based interventions), a Control Group 1 (conventional dental education and behavior modification) or a Control Group 2 (conventional behavior modification). The results were assessed after 3 and 12 months. Oral hygiene level was assessed as percent dental plaque and the ratio of plaque percent change (RPC). At the 3-month follow-up, there were significant differences among the groups; the APM group had the largest decrease in plaque levels (24.5%), Control Group 1 showed a decrease in plaque levels of 15.4% and Control Group 2 showed an increase in plaque levels of 2.8%. At the 12-month follow-up, an improvement was observed in all groups, but there were no statistically significant differences among the groups. In the short term, the intervention based on the APM was more effective in changing adolescent oral hygiene behavior compared with the conventional approaches. The reasons for long-term positive change after discontinued interventions in control groups need to be explored in future studies.
Strategies for Implementing Cell-Free DNA Testing.
Cuckle, Howard
2016-06-01
Maternal plasma cell-free (cf) DNA testing has higher discriminatory power for aneuploidy than any conventional multi-marker screening test. Several strategies have been suggested for introducing it into clinical practice. Secondary cfDNA, restricted only to women with positive conventional screening test, is generally cost saving and minimizes the need for invasive prenatal diagnosis but leads to a small loss in detection. Primary cfDNA, replacing conventional screening or retaining the nuchal translucency scan, is not currently cost-effective for third-party payers. Contingent cfDNA, testing about 20% of women with the highest risks based on a conventional test, is the preferred approach. Copyright © 2016 Elsevier Inc. All rights reserved.
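A back-of-envelope comparison of the three strategies on a hypothetical cohort illustrates why contingent testing is attractive; all rates and unit costs below are invented, and only the structure (secondary / contingent / primary) follows the text.

```python
# Hypothetical cohort and performance figures (all invented for illustration).
N, prev = 100_000, 1 / 500                          # pregnancies, prevalence
conv_sens, conv_fpr, conv_cost = 0.85, 0.05, 30.0   # conventional screen
cf_sens, cf_cost = 0.99, 300.0                      # cfDNA test
cases = N * prev

# Secondary: cfDNA only for conventional screen-positives.
screen_pos = cases * conv_sens + (N - cases) * conv_fpr
# Contingent: cfDNA for the ~20% at highest conventional risk, assumed
# (invented figure) to capture 90% of affected pregnancies.
strategies = {
    "secondary":  (N * conv_cost + screen_pos * cf_cost, cases * conv_sens * cf_sens),
    "contingent": (N * conv_cost + 0.20 * N * cf_cost,   cases * 0.90 * cf_sens),
    "primary":    (N * cf_cost,                          cases * cf_sens),
}
for name, (cost, detected) in strategies.items():
    print(f"{name:>10}: ${cost / 1e6:5.2f}M, ~{detected:.0f}/{cases:.0f} cases detected")
```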
A New Approach to Teaching Business Writing: Writing across the Core--A Document Based Curriculum
ERIC Educational Resources Information Center
Hutchins, Teresa D.
2015-01-01
This paper describes the transition that the Anisfield School of Business of Ramapo College of New Jersey made from a conventional Writing Across the Curriculum approach to a Writing Across the Business Core approach. The impetus for the change is explained as well as the creation and design of the program. The document driven program is analyzed,…
Trailblazing Teacher Contract Agreement Adopted in Baltimore
ERIC Educational Resources Information Center
Education Digest: Essential Readings Condensed for Quick Review, 2011
2011-01-01
The Baltimore City Public Schools made national headlines late last year when the district adopted a new contract designed to take student learning and teacher professionalism to the next level. The three-year deal replaced conventional approaches to compensation--regular pay increases based on years in the system--with a new approach that gives…
The purpose of this research was to add to our knowledge of chlorine and monochloramine disinfectants, with regards to effects on the microbial communities in distribution systems. A whole metagenome-based approach using sophisticated molecular tools (e.g., next generation sequen...
Investigating Teachers' Views of Student-Centred Learning Approach
ERIC Educational Resources Information Center
Seng, Ernest Lim Kok
2014-01-01
Conventional learning is based on low levels of students' participation where students are rarely expected to ask questions or to challenge the theories of the academic. A paradigm shift in curriculum has resulted in implementing student-centred learning (SCL) approach, putting students as the centre of the learning process. This mode of…
ERIC Educational Resources Information Center
Ratelle, Catherine F.; Guay, Frederic; Larose, Simon; Senecal, Caroline
2004-01-01
The present study examined whether academic motivations, conceptualized from the stance of self-determination theory, fluctuate over time in a homogeneous or heterogeneous fashion during a school transition. Three objectives were pursued: First, motivational trajectories were studied using the conventional, homogeneous approach. Second, the…
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is one of the most flexible and effective approaches for inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same values of the variables of an illumination model to different structures manually and thus neglect the important illumination variations due to structure differences. We introduce a novel illumination design paradigm for volume rendering on the basis of topology to automate illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of an input volumetric data set. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize illuminance perception differences between structures, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that our approach is more effective in depth and shape depiction, as well as providing higher perceptual differences between structures.
Haworth, Annette; Mears, Christopher; Betts, John M; Reynolds, Hayley M; Tack, Guido; Leo, Kevin; Williams, Scott; Ebert, Martin A
2016-01-07
Treatment plans for ten patients, initially treated with a conventional approach to low dose-rate brachytherapy (LDR, 145 Gy to entire prostate), were compared with plans for the same patients created with an inverse-optimisation planning process utilising a biologically-based objective. The 'biological optimisation' considered a non-uniform distribution of tumour cell density through the prostate based on known and expected locations of the tumour. Using dose planning-objectives derived from our previous biological-model validation study, the volume of the urethra receiving 125% of the conventional prescription (145 Gy) was reduced from a median value of 64% to less than 8% whilst maintaining high values of TCP. On average, the number of planned seeds was reduced from 85 to less than 75. The robustness of plans to random seed displacements needs to be carefully considered when using contemporary seed placement techniques. We conclude that an inverse planning approach to LDR treatments, based on a biological objective, has the potential to maintain high rates of tumour control whilst minimising dose to healthy tissue. In future, the radiobiological model will be informed using multi-parametric MRI to provide a personalised medicine approach.
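The kind of biological objective such an inverse planner can maximize is illustrated below with a simple Poisson/linear-quadratic TCP over voxels with non-uniform cell density; all radiobiological constants and densities are invented, and LDR dose-rate effects are ignored in this sketch.

```python
import numpy as np

# Poisson TCP: probability that no clonogenic cell survives, given a
# per-voxel dose and a non-uniform tumour cell density. The value of
# alpha and the densities are illustrative assumptions only.
alpha = 0.15                                             # Gy^-1 effective kill
voxel_dose = np.array([140.0, 150.0, 160.0, 145.0])      # Gy delivered per voxel
cell_density = np.array([1e6, 5e7, 2e8, 1e7])            # clonogens per voxel

surviving = cell_density * np.exp(-alpha * voxel_dose)   # expected survivors
tcp = np.exp(-surviving.sum())                           # Poisson TCP
print(f"TCP = {tcp:.3f}")
```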
Fuzzy logic based robotic controller
NASA Technical Reports Server (NTRS)
Attia, F.; Upadhyaya, M.
1994-01-01
Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially when mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents a solution for the conventional robot controller which is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
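A tiny single-input fuzzy rule base with triangular memberships and centroid defuzzification conveys the idea; the membership ranges and rules below are invented for illustration, not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_velocity(error):
    """Map an end-point position error to a joint velocity command."""
    # Rule strengths: IF error is Negative / Zero / Positive ...
    w = np.array([tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)])
    v_out = np.array([-1.0, 0.0, 1.0])   # ... THEN velocity is Neg / Zero / Pos
    return float((w * v_out).sum() / w.sum())   # centroid defuzzification

for e in (-1.5, -0.25, 0.0, 0.6):
    print(f"error {e:+.2f} -> velocity {fuzzy_velocity(e):+.2f}")
```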
Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures.
Sefa Orak, Mehmet; Nasrollahi, Amir; Ozturk, Turgut; Mas, David; Ferrer, Belen; Rizzo, Piervincenzo
2018-04-18
The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling.
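Once the vision algorithm has extracted a displacement time series, the first natural frequency falls out of an FFT peak search, as sketched below; the 120-fps capture rate (above the conventional 30 Hz mentioned in the abstract) and the synthetic 12-Hz beam response are assumptions for this sketch.

```python
import numpy as np

fps, T = 120.0, 5.0                     # assumed high-frame-rate capture
t = np.arange(0, T, 1.0 / fps)
disp = np.sin(2 * np.pi * 12.0 * t) * np.exp(-0.3 * t)   # decaying vibration
disp += 0.05 * np.random.default_rng(1).normal(size=t.size)  # tracking noise

spec = np.abs(np.fft.rfft(disp * np.hanning(t.size)))    # windowed spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
print(f"estimated first natural frequency: {freqs[spec.argmax()]:.2f} Hz")
```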
Ruiz-Aracama, A; Lommen, A; Huber, M; van de Vijver, L; Hoogenboom, R
2012-01-01
The aim of this study was to apply an untargeted NMR and LC-MS-based metabolomics approach to detect potential differences between an organically and a conventionally produced feed, which caused statistically significant differences in growth, in the response to an immunological challenge and in the gene expression profiles in the small intestine of laying hens. A fractionation procedure was set up to create multiple fractions of the feed, which were subsequently analysed by NMR and UPLC-TOF/MS operating in positive mode. Comparison of the profiles revealed that the most apparent differences came from the isoflavones in the soy as well as a compound with a molecular mass of 441.202 (M + 1)⁺, which was identified as N,N'-diferuloylputrescine (DFP) and came from the corn. Whether the observed differences in effects are due to the higher levels of isoflavones and DFP is unclear, as is whether the observed differences are typical for organically or conventionally produced corn and soy. However, this study shows that this metabolomics approach is suitable for detecting potential differences between products, even in levels of compounds that would have been overlooked with a more targeted approach. As such, the method is suitable for a more systematic study of differences between conventionally and organically produced food.
Geoelectrical inference of mass transfer parameters using temporal moments
Day-Lewis, Frederick D.; Singha, Kamini
2008-01-01
We present an approach to infer mass transfer parameters based on (1) an analytical model that relates the temporal moments of mobile and bulk concentration and (2) a bicontinuum modification to Archie's law. Whereas conventional geochemical measurements preferentially sample from the mobile domain, electrical resistivity tomography (ERT) is sensitive to bulk electrical conductivity and, thus, electrolytic solute in both the mobile and immobile domains. We demonstrate the new approach, in which temporal moments of collocated mobile domain conductivity (i.e., conventional sampling) and ERT‐estimated bulk conductivity are used to calculate heterogeneous mass transfer rate and immobile porosity fractions in a series of numerical column experiments.
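Numerically, the temporal moments come straight from the breakthrough curves; the sketch below computes mean arrival times for synthetic mobile and bulk signals, while the paper's analytical mass-transfer relations and the bicontinuum Archie's-law step are not reproduced here.

```python
import numpy as np

def temporal_moment(t, c, order):
    """n-th temporal moment of a breakthrough curve c(t) (trapezoid rule)."""
    y = (t ** order) * c
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

t = np.linspace(0, 50, 501)                       # hours (synthetic)
c_mobile = np.exp(-0.5 * ((t - 10) / 2.0) ** 2)   # early, sharp pulse (fluid sample)
c_bulk = np.exp(-0.5 * ((t - 14) / 4.0) ** 2)     # delayed, dispersed pulse (ERT)

for name, c in (("mobile", c_mobile), ("bulk", c_bulk)):
    m0, m1 = temporal_moment(t, c, 0), temporal_moment(t, c, 1)
    print(f"{name}: mean arrival time = {m1 / m0:.1f} h")
# The lag of the bulk signal behind the mobile one carries the
# mass-transfer information exploited by the approach.
```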
Lin, Yu-Chun; Phua, Siew Cheng; Lin, Benjamin; Inoue, Takanari
2013-01-01
Diffusion barriers are universal solutions for cells to achieve distinct organizations, compositions, and activities within a limited space. The influence of diffusion barriers on the spatiotemporal dynamics of signaling molecules often determines cellular physiology and functions. Over the years, the passive permeability barriers in various subcellular locales have been characterized using elaborate analytical techniques. In this review, we will summarize the current state of knowledge on the various passive permeability barriers present in mammalian cells. We will conclude with a description of several conventional techniques and one new approach based on chemically-inducible diffusion trap (C-IDT) for probing permeable barriers. PMID:23731778
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
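The benefit of diagonal preconditioning can be demonstrated on a generic symmetric positive-definite problem; the sketch below is ordinary Jacobi-preconditioned conjugate gradient, not the authors' Poisson-log-likelihood objective or their sensitivity-based preconditioner.

```python
import numpy as np

def pcg(A, b, M_inv_diag, iters=100):
    """Conjugate gradient on Ax = b with a diagonal preconditioner M^-1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv_diag * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(0)
Q = rng.normal(size=(200, 200))
A = Q.T @ Q + np.diag(np.linspace(1, 1000, 200))   # ill-conditioned SPD system
b = rng.normal(size=200)
x = pcg(A, b, 1.0 / np.diag(A))                    # Jacobi preconditioner
print("residual norm:", np.linalg.norm(A @ x - b))
```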
Basu Roy, Robindra; Brandt, Nicola; Moodie, Nicolette; Motlagh, Mitra; Rasanathan, Kumanan; Seddon, James A; Detjen, Anne K; Kampmann, Beate
2016-12-08
Until recently, paediatric tuberculosis (TB) has been relatively neglected by the broader TB and the maternal and child health communities. Human rights-based approaches to children affected by TB could be powerful; however, awareness and application of such strategies is not widespread. We summarize the current challenges faced by children affected by TB, including: consideration of their family context; the limitations of preventive, diagnostic and treatment options; paucity of paediatric-specific research; failure in implementation of interventions; and stigma. We examine the articles of the Convention on the Rights of the Child (CRC) and relate them to childhood TB. Specifically, we focus on the five core principles of the CRC: children's inherent right to life and States' duties towards their survival and development; children's right to enjoyment of the highest attainable standard of health; non-discrimination; best interests of the child; and respect for the views of the child. We highlight where children's rights are violated and how a human rights-based approach should be used as a tool to help children affected by TB, particularly in light of the Sustainable Development Goals and their focus on universality and leaving no one behind. The article aims to bridge the gap between those providing paediatric TB clinical care and conducting research, and those working in the fields of human rights policy and advocacy to promote a human rights-based approach for children affected by TB based upon the Convention on the Rights of the Child.
IT: An Effective Pedagogic Tool in the Teaching of Quantitative Methods in Management.
ERIC Educational Resources Information Center
Nadkami, Sanjay M.
1998-01-01
Examines the possibility of supplementing conventional pedagogic methods with information technology-based teaching aids in the instruction of quantitative methods to undergraduate students. Considers the case for a problem-based learning approach, and discusses the role of information technology. (Author/LRW)
New stimulation pattern design to improve P300-based matrix speller performance at high flash rate
NASA Astrophysics Data System (ADS)
Polprasert, Chantri; Kukieattikool, Pratana; Demeechai, Tanee; Ritcey, James A.; Siwamogsatham, Siwaruk
2013-06-01
Objective. We propose a new stimulation pattern design for the P300-based matrix speller aimed at increasing the minimum target-to-target interval (TTI). Approach. Inspired by the simplicity and strong performance of the conventional row-column (RC) stimulation, the proposed stimulation is obtained by modifying the RC stimulation through alternating row and column flashes which are selected based on the proposed design rules. The second flash of the double-flash components is then delayed for a number of flashing instants to increase the minimum TTI. The trade-off inherited in this approach is the reduced randomness within the stimulation pattern. Main results. We test the proposed stimulation pattern and compare its performance in terms of selection accuracy, raw and practical bit rates with the conventional RC flashing paradigm over several flash rates. By increasing the minimum TTI within the stimulation sequence, the proposed stimulation has more event-related potentials that can be identified compared to that of the conventional RC stimulations, as the flash rate increases. This leads to significant performance improvement in terms of the letter selection accuracy, the raw and practical bit rates over the conventional RC stimulation. Significance. These studies demonstrate that significant performance improvement over the RC stimulation is obtained without additional testing or training samples to compensate for low P300 amplitude at high flash rate. We show that our proposed stimulation is more robust to reduced signal strength due to the increased flash rate than the RC stimulation.
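The TTI metric itself is easy to compute: given a flash sequence and a target cell, it is the smallest gap between consecutive target-containing flashes. The sketch below measures it for a randomly shuffled conventional RC paradigm on an assumed 6x6 matrix, where independent block shuffles can yield TTIs as small as one flash period, the weakness the proposed design targets.

```python
import random

random.seed(2)
flashes = []
for _ in range(10):                        # 10 stimulation repetitions
    block = [("row", i) for i in range(6)] + [("col", j) for j in range(6)]
    random.shuffle(block)                  # conventional RC randomization
    flashes.extend(block)

target_row, target_col = 2, 4              # target letter's row and column
hits = [k for k, (kind, idx) in enumerate(flashes)
        if (kind == "row" and idx == target_row)
        or (kind == "col" and idx == target_col)]
min_tti = min(b - a for a, b in zip(hits, hits[1:]))
print("minimum TTI (in flash periods):", min_tti)
```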
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-11-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for proton-proton scattering at 158 GeV are discussed.
Molecular-Based Detection Systems for Cryptosporidium Oocysts
The presentation describes on-going studies in collaboration with US EPA Region 2, 3, and the CDC on identifying sources of Cryptosporidium oocyst contamination in source waters using conventional and real-time PCR approaches.
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
3D ultrasound imaging in image-guided intervention.
Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare
2014-01-01
Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.
Financial Planning for Information Technology: Conventional Approaches Need Not Apply.
ERIC Educational Resources Information Center
Falduto, Ellen F.
1999-01-01
Rapid advances in information technology have rendered conventional approaches to planning and budgeting useless, and no single method is universally appropriate. The most successful planning efforts are consistent with the institution's overall plan, and may combine conventional, opportunistic, and entrepreneurial approaches. Chief financial…
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
Abstract. The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
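For orientation, the core channelized-Hotelling computation behind such studies can be sketched in a few lines of numpy; the array shapes are assumptions, and the paper's shuffle-based estimation and data-reduction schemes are not reproduced.

    import numpy as np

    def cho_auc(v_sig, v_noise):
        # v_sig, v_noise: (n_samples, n_channels) channelized images,
        # signal-present and signal-absent respectively.
        dv = v_sig.mean(0) - v_noise.mean(0)
        K = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noise, rowvar=False))
        w = np.linalg.solve(K, dv)               # Hotelling template
        t_sig, t_noise = v_sig @ w, v_noise @ w  # test statistics
        return (t_sig[:, None] > t_noise[None, :]).mean()  # Mann-Whitney AUC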
Vexler, Albert; Yu, Jihnhee
2018-04-13
A common statistical doctrine supported by many introductory courses and textbooks is that t-test type procedures based on normally distributed data points are anticipated to provide a standard in decision-making. In order to motivate scholars to examine this convention, we introduce a simple approach based on graphical tools of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. In this context, we propose employing a p-value-based method that takes into account the stochastic nature of p-values. We focus on the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. In the course of our study, we extend the EPV concept to the ROC curve technique. This provides expressive evaluations and visualizations of a wide spectrum of properties of testing mechanisms. We show that the conventional power characterization of tests is a partial aspect of the presented EPV/ROC technique. We hope this explanation convinces researchers of the usefulness of the EPV/ROC approach for depicting different characteristics of decision-making procedures, in light of the growing interest in correct p-value-based applications.
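As a hedged illustration of the EPV idea, the following Monte-Carlo sketch estimates the expected p-value of a one-sample t-test under a few alternatives; the test, effect sizes and sample size are arbitrary choices for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def epv_ttest(mu, n=30, sims=20000):
        # Expected p-value of the one-sample t-test when data are N(mu, 1):
        # the mean of the p-value's distribution under this alternative.
        x = rng.normal(mu, 1.0, size=(sims, n))
        return stats.ttest_1samp(x, 0.0, axis=1).pvalue.mean()

    # Smaller EPV corresponds to stronger expected evidence against H0;
    # at mu = 0 the p-value is uniform and the EPV is 0.5.
    for mu in (0.0, 0.2, 0.5):
        print(mu, epv_ttest(mu))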
Fernandez-Nogueras Jimenez, Francisco J; Segura Fernandez-Nogueras, Miguel; Jouma Katati, Majed; Arraez Sanchez, Miguel Ángel; Roda Murillo, Olga; Sánchez Montesinos, Indalecio
2015-01-01
The role of robotic surgery is well established in various specialties, such as urology and general surgery, but not in others, such as neurosurgery and otolaryngology. In the case of surgery of the skull base, it has only just emerged from an experimental phase. Our aim was to investigate possible applications of the da Vinci surgical robot in transoral skull base surgery, comparing it with the authors' experience using conventional endoscopic transnasal surgery in the same region. A transoral transpalatal approach to the nasopharynx and medial skull base was performed on 4 cryopreserved cadaver heads. We used the da Vinci robot with a 30° standard endoscope 12 mm thick, dual camera and dual illumination, Maryland forceps on the left terminal and curved scissors on the right, both 8 mm thick. Bone drilling was performed manually. For the anatomical study of this region, we used 0.5 cm axial slices from a plastinated cadaver head. Various skull base structures at different depths were reached with relative ease with the robot terminals. Transoral robotic surgery with the da Vinci system provides potential advantages over conventional endoscopic transnasal surgery in the surgical approach to this region. Copyright © 2014 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
ERIC Educational Resources Information Center
Cornu, Christophe
2016-01-01
Homophobic and transphobic bullying in schools can have a serious effect on children and young people subjected to it at a crucial moment in their lives. It is an obstacle to the right to education, which is one of the basic universal human rights enshrined in the Universal Declaration of Human Rights and various United Nations Conventions. This…
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
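The classical half of such a variational loop can be sketched in a few lines; here a toy 2 x 2 Hamiltonian stands in for the molecular problem, and numpy stands in for the photonic processor that evaluates the energy in the experiment.

    import numpy as np
    from scipy.optimize import minimize

    # Toy one-qubit Hamiltonian standing in for the He-H+ problem.
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])

    def energy(theta):
        # <psi(theta)|H|psi(theta)>: evaluated on the quantum processor in
        # the experiment, simulated classically here.
        psi = np.array([np.cos(theta[0]), np.sin(theta[0])])
        return psi @ H @ psi

    # The conventional computer's half of the loop: classical optimization
    # of the ansatz parameter.
    res = minimize(energy, x0=[0.1], method="Nelder-Mead")
    print(res.fun, np.linalg.eigvalsh(H)[0])   # variational vs exact ground energy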
A Shot Number Based Approach to Performance Analysis in Table Tennis
Yoshida, Kazuto; Yamada, Koshi
2017-01-01
Abstract. The current study proposes a novel approach that improves conventional performance analysis in table tennis by introducing the concept of frequency, i.e. the number of shots, of each shot number. The improvements over the conventional method are: better accuracy in the evaluation of players' skills and tactics, additional insights into scoring and returning skills, and ease of understanding the results with a single criterion. The performance analysis of matches played at the 2012 Summer Olympics in London was conducted using the proposed method. The results showed some effects of the shot number and gender differences in table tennis. Furthermore, comparisons were made between Chinese players and players from other countries, which shed light on the skills and tactics of the Chinese players. The present findings demonstrate that the proposed method provides useful information and has some advantages over the conventional method. PMID:28210334
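A minimal sketch of the frequency-weighted bookkeeping the proposed method rests on, using hypothetical rally data:

    from collections import Counter

    # Hypothetical coded rallies: (shot number on which the rally ended,
    # winner). Real input would come from notational match analysis.
    rallies = [(1, 'A'), (3, 'B'), (4, 'A'), (3, 'A'), (2, 'B'), (5, 'A'), (3, 'B')]

    freq = Counter(n for n, _ in rallies)          # frequency of each shot number
    wins = Counter(n for n, w in rallies if w == 'A')

    total = len(rallies)
    for n in sorted(freq):
        share = freq[n] / total                    # how often rallies end at shot n
        rate = wins[n] / freq[n]                   # player A's scoring rate there
        print(f"shot {n}: frequency {share:.2f}, scoring rate {rate:.2f}")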
Both, Anna; Franke, Gefion C; Mirwald, Nadine; Lütgehetmann, Marc; Christner, Martin; Klupp, Eva-Maria; Belmar Campos, Cristina; Büttner, Henning; Aepfelbacher, Martin; Rohde, Holger
2017-12-01
Given constantly high or even rising incidences of both colonization and infection with vancomycin-resistant enterococci (VRE), timely and accurate identification of carriers in high-risk patient populations is of evident clinical importance. In this study, a two-tier approach consisting of PCR-based screening and cultural confirmation of positive results is compared to the conventional approach solely based on culture on selective media. The two-tier strategy was highly consistent with the conventional approach, and was found to possess high sensitivity and specificity (93.1% and 100%, respectively). The introduction of the PCR-based combined VRE screening approach significantly (P<0.0001) reduced median overall time to result by 44.3 hours. The effect was found to be most pronounced in VRE negative samples. Positive vanA PCR was highly consistent with culture (PPV: 92.0%, 95% CI: 72.5-98.6%, NPV: 99.6%, 95% CI: 98.9-99.6%), thus allowing for preliminary reporting of VRE detection. In contrast, a vanB positive PCR does not allow for preliminary reporting (PPV: 58.5%, 95% CI: 44.2-71.6%, NPV: 99.8%, 95% CI: 99.2-100%). The introduction of a molecular assay for rapid detection of VRE from rectal swabs combined with cultural confirmation proved to be reliable and time saving, especially in a setting of low VRE prevalence and predominance of vanA positive strains. Copyright © 2017 Elsevier Inc. All rights reserved.
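For reference, the predictive values reduce to simple two-by-two arithmetic; the counts below are hypothetical but chosen to reproduce the reported vanA figures.

    def ppv_npv(tp, fp, fn, tn):
        # Predictive values of the screening PCR against cultural confirmation.
        return tp / (tp + fp), tn / (tn + fn)

    # Hypothetical counts chosen to reproduce the reported vanA figures
    # (PPV 92.0%, NPV 99.6%); the study's raw 2x2 table is not given here.
    ppv, npv = ppv_npv(tp=23, fp=2, fn=1, tn=250)
    print(f"PPV {ppv:.1%}, NPV {npv:.1%}")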
This article presents a toxicologically-based risk assessment strategy for identifying the individual components or fractions of a complex mixture that are associated with its toxicity. The strategy relies on conventional component-based mixtures risk approaches such as dose addi...
Lights, Camera, Lesson: Teaching Literacy through Film
ERIC Educational Resources Information Center
Lipiner, Michael
2011-01-01
This in-depth case study explores a modern approach to education: the benefits of using film, technology and other creative, non-conventional pedagogical methods in the classroom to enhance students' understanding of literature. The study explores the positive effects of introducing a variety of visual-based (and auditory-based) teaching methods…
NASA Technical Reports Server (NTRS)
Underwood, Matthew C.
2017-01-01
To provide justification for equipping a fleet of aircraft with avionics capable of supporting trajectory-based operations, significant flight testing must be accomplished. However, equipping aircraft with these avionics and the enabling technologies needed to communicate the clearances required for trajectory-based operations is costly using conventional avionics approaches. This paper describes an approach to minimize the costs and risks of flight testing these technologies in situ, discusses the test-bed platform developed, and highlights results from a proof-of-concept flight test campaign that demonstrates the feasibility and efficiency of this approach.
ERIC Educational Resources Information Center
Ramachandran, Sridhar; Pandia Vadivu, P.
2014-01-01
This study examines the effectiveness of Neurocognitive Based Concept Mapping (NBCM) on students' learning in a science course. A total of 32 grade IX of high school Central Board of Secondary Education (CBSE) students were involved in this study by pre-test and post-test measurements. They were divided into two groups: NBCM group as an…
An approach for fixed coefficient RNS-based FIR filter
NASA Astrophysics Data System (ADS)
Srinivasa Reddy, Kotha; Sahoo, Subhendu Kumar
2017-08-01
In this work, an efficient new modular multiplication method for the {2^k − 1, 2^k, 2^(k+1) − 1} moduli set is proposed to implement a residue number system (RNS)-based fixed-coefficient finite impulse response (FIR) filter. The new multiplication approach reduces the number of partial products by using a pre-loaded product block. The reduction in partial products with the proposed modular multiplication improves the clock frequency and reduces the area and power as compared with conventional modular multiplication. Further, the present approach eliminates the binary-to-residue converter circuit that is usually needed at the front end of an RNS-based system. In this work, two fixed-coefficient filter architectures with the new modular multiplication approach are proposed. The filters are implemented using the Verilog hardware description language. The United Microelectronics Corporation 90 nm technology library has been used for synthesis, and the area, power and delay results are obtained with the help of the Cadence register transfer level compiler. The power-delay product (PDP) is also considered for performance comparison among the proposed filters. One of the proposed architectures is found to improve the PDP by 60.83% as compared with the filter implemented with the conventional modular multiplier. The filters' functionality is validated with the help of Altera DSP Builder.
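A small sketch of the residue arithmetic for this moduli set (the pre-loaded product block and the output conversion of the proposed hardware are omitted; k and the operands are arbitrary):

    def moduli(k):
        # The {2^k - 1, 2^k, 2^(k+1) - 1} moduli set used by the filter.
        return (2**k - 1, 2**k, 2**(k + 1) - 1)

    def to_rns(x, ms):
        # Forward (binary-to-residue) conversion.
        return tuple(x % m for m in ms)

    def rns_mul(a, b, ms):
        # Channel-wise modular multiplication. In the proposed hardware the
        # products come from a pre-loaded product block, not a multiplier.
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, ms))

    k = 4
    ms = moduli(k)                 # (15, 16, 31): dynamic range 15*16*31 = 7440
    x, c = 97, 23                  # input sample and a fixed filter coefficient
    assert rns_mul(to_rns(x, ms), to_rns(c, ms), ms) == to_rns(x * c, ms)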
Improving hospital cost accounting with activity-based costing.
Chan, Y C
1993-01-01
In this article, activity-based costing, an approach that has proved to be an improvement over the conventional costing system in product costing, is introduced. By combining activity-based costing with standard costing, health care administrators can better plan and control the costs of health services provided while ensuring that the organization's bottom line is healthy.
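A toy sketch of the activity-based allocation step, with hypothetical activity pools and cost drivers:

    # Hypothetical activity pools with their cost drivers.
    activities = {
        "admissions":   {"cost": 120_000, "driver_total": 4_000},   # per admission
        "lab_tests":    {"cost": 300_000, "driver_total": 50_000},  # per test
        "nursing_care": {"cost": 900_000, "driver_total": 30_000},  # per care hour
    }

    def abc_cost(usage):
        # Charge a service only for the activities it actually consumes.
        return sum(a["cost"] / a["driver_total"] * usage.get(name, 0)
                   for name, a in activities.items())

    day_surgery = {"admissions": 1, "lab_tests": 4, "nursing_care": 6}
    print(abc_cost(day_surgery))   # 30 + 24 + 180 = 234 per case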
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Yang, Yi; Tang, Shaojie
2013-03-01
Under the framework of a model observer with signal and background exactly known (SKE/BKE), we investigate the detectability of differential phase contrast CT compared with that of conventional attenuation-based CT. Using the channelized Hotelling observer with the radially symmetric difference-of-Gaussians channel template, we investigate the detectability index and its variation over the dimension of the object and detector cells. The preliminary data show that differential phase contrast CT outperforms conventional attenuation-based CT significantly in the detectability index when both the object to be detected and the detector cell used for data acquisition are relatively small. However, the differential phase contrast CT's advantage in the detectability index diminishes with increasing dimension of either the object or the detector cell, and virtually disappears when the dimension of the object or detector cell reaches a threshold. It is hoped that the preliminary data reported in this paper may provide insightful understanding of differential phase contrast CT's behavior in the detectability index and its comparison with conventional attenuation-based CT.
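A numpy sketch of the channel templates and the detectability-index computation used in such comparisons; the channel parameters are common literature choices, not necessarily the paper's.

    import numpy as np

    def dog_channels(n=64, n_ch=3, sigma0=2.0, alpha=1.4, q=1.67):
        # Radially symmetric difference-of-Gaussians channel templates,
        # one column per channel, flattened to image-vector length n*n.
        y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
        r2 = x**2 + y**2
        ch = []
        for j in range(n_ch):
            s = sigma0 * alpha**j
            ch.append((np.exp(-r2 / (2 * (q * s)**2)) -
                       np.exp(-r2 / (2 * s**2))).ravel())
        return np.stack(ch, axis=1)

    def detectability(imgs_sig, imgs_noise, U):
        # SKE/BKE channelized-Hotelling detectability index d'.
        v_s, v_n = imgs_sig @ U, imgs_noise @ U       # channelize the images
        dv = v_s.mean(0) - v_n.mean(0)
        K = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
        return float(np.sqrt(dv @ np.linalg.solve(K, dv)))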
Performance Model of Intercity Ground Passenger Transportation Systems
DOT National Transportation Integrated Search
1975-08-01
A preliminary examination of the problems associated with mixed-traffic operations - conventional freight and high speed passenger trains - is presented. Approaches based upon a modest upgrading of existing signal systems are described. Potential cos...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
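Expressed as code, the argument reduces to a one-line damping of the conventional tendency; a trivial sketch with an arbitrary tendency value:

    def scale_aware_tendency(conventional_tendency, sigma):
        # AW13's unification: damp the conventionally parameterized convective
        # tendency by (1 - sigma), where sigma is the convective cloud fraction.
        return (1.0 - sigma) * conventional_tendency

    # sigma << 1 recovers the conventional limit; sigma -> 1 shuts the
    # parameterized transport off as convection becomes explicitly resolved.
    for sigma in (0.01, 0.3, 0.9):
        print(sigma, scale_aware_tendency(1.0, sigma))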
Nanocarrier-mediated drugs targeting cancer stem cells: an emerging delivery approach.
Malhi, Sarandeep; Gu, Xiaochen
2015-07-01
Cancer stem cells (CSCs) play an important role in the development of drug resistance, metastasis and recurrence. Current conventional therapies do not commonly target CSCs. Nanocarrier-based delivery systems targeting cancer cells have entered a new era of treatment, where specific targeting to CSCs may offer superior outcomes to efficient cancer therapies. This review discusses the involvement of CSCs in tumor progression and relevant mechanisms associated with CSCs resistance to conventional chemo- and radio-therapies. It highlights CSCs-targeted strategies that are either under evaluation or could be explored in the near future, with a focus on various nanocarrier-based delivery systems of drugs and nucleic acids to CSCs. Novel nanocarriers targeting CSCs are presented in a cancer-specific way to provide a current perspective on anti-CSCs therapeutics. The field of CSCs-targeted therapeutics is still emerging, with a few small molecules and macromolecules currently proving efficacy in clinical trials. However, considering the complexities of CSCs and existing delivery difficulties in conventional anticancer therapies, CSC-specific delivery systems would face tremendous technical and clinical challenges. Nanocarrier-based approaches have demonstrated significant potential in specific drug delivery and targeting; their success in CSCs-targeted drug delivery would not only significantly enhance anticancer treatment but also address current difficulties associated with cancer resistance, metastasis and recurrence.
Barss, Trevor S; Ainsley, Emily N; Claveria-Gonzalez, Francisca C; Luu, M John; Miller, Dylan J; Wiest, Matheus J; Collins, David F
2018-04-01
Neuromuscular electrical stimulation (NMES) is used to produce contractions to restore movement and reduce secondary complications for individuals experiencing motor impairment. NMES is conventionally delivered through a single pair of electrodes over a muscle belly or nerve trunk using short pulse durations and frequencies between 20 and 40 Hz (conventional NMES). Unfortunately, the benefits and widespread use of conventional NMES are limited by contraction fatigability, which is in large part because of the nonphysiological way that contractions are generated. This review provides a summary of approaches designed to reduce fatigability during NMES, by using physiological principles that help minimize fatigability of voluntary contractions. First, relevant principles of the recruitment and discharge of motor units (MUs) inherent to voluntary contractions and conventional NMES are introduced, and the main mechanisms of fatigability for each contraction type are briefly discussed. A variety of NMES approaches are then described that were designed to reduce fatigability by generating contractions that more closely mimic voluntary contractions. These approaches include altering stimulation parameters, to recruit MUs in their physiological order, and stimulating through multiple electrodes, to reduce MU discharge rates. Although each approach has unique advantages and disadvantages, approaches that minimize MU discharge rates hold the most promise for imminent translation into rehabilitation practice. The way that NMES is currently delivered limits its utility as a rehabilitative tool. Reducing fatigability by delivering NMES in ways that better mimic voluntary contractions holds promise for optimizing the benefits and widespread use of NMES-based programs. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Adult soft tissue sarcomas: conventional therapies and molecularly targeted approaches.
Mocellin, Simone; Rossi, Carlo R; Brandes, Alba; Nitti, Donato
2006-02-01
The therapeutic approach to soft tissue sarcomas (STS) has evolved over the past two decades based on the results from randomized controlled trials, which are guiding physicians in the treatment decision-making process. Despite significant improvements in the control of local disease, a significant number of patients ultimately die of recurrent/metastatic disease following radical surgery due to a lack of effective adjuvant treatments. In addition, the characteristic chemoresistance of STS has compromised the therapeutic value of conventional antineoplastic agents in cases of unresectable advanced/metastatic disease. Therefore, novel therapeutic strategies are urgently needed to improve the prognosis of patients with STS. Recent advances in STS biology are paving the way to the development of molecularly targeted therapeutic strategies, the efficacy of which relies not only on the knowledge of the molecular mechanisms underlying cancer development/progression but also on the personalization of the therapeutic regimen according to the molecular features of individual tumours. In this work, we review the state-of-the-art of conventional treatments for STS and summarize the most promising findings in the development of molecularly targeted therapeutic approaches.
Knowledge-based system V and V in the Space Station Freedom program
NASA Technical Reports Server (NTRS)
Kelley, Keith; Hamilton, David; Culbert, Chris
1992-01-01
Knowledge Based Systems (KBS's) are expected to be heavily used in the Space Station Freedom Program (SSFP). Although SSFP Verification and Validation (V&V) requirements are based on the latest state of the practice in software engineering technology, they may be insufficient for KBS's; it is widely stated that there are differences in both approach and execution between KBS V&V and conventional software V&V. In order to better understand this issue, we have surveyed and/or interviewed developers from sixty expert system projects to understand the differences and difficulties in KBS V&V. We have used these survey results to analyze the SSFP V&V requirements for conventional software and to determine which specific requirements are inappropriate for KBS V&V and why. Further work will result in a set of recommendations that can be used either as guidelines for applying conventional software V&V requirements to KBS's or as modifications to extend the existing SSFP conventional software V&V requirements to include KBS requirements. The results of this work are significant to many projects, in addition to SSFP, which will involve KBS's.
Autonomous control systems - Architecture and fundamental issues
NASA Technical Reports Server (NTRS)
Antsaklis, P. J.; Passino, K. M.; Wang, S. J.
1988-01-01
A hierarchical functional autonomous controller architecture is introduced. In particular, the architecture for the control of future space vehicles is described in detail; it is designed to ensure the autonomous operation of the control system, and it allows interaction with the pilot and crew/ground station and with the systems on board the autonomous vehicle. The fundamental issues in autonomous control system modeling and analysis are discussed. It is proposed to utilize a hybrid approach to modeling and analysis of autonomous systems. This will incorporate conventional control methods based on differential equations and techniques for the analysis of systems described with a symbolic formalism. In this way, the theory of conventional control can be fully utilized. It is stressed that autonomy is the design requirement and that intelligent control methods appear, at present, to offer some of the necessary tools to achieve autonomy. A conventional approach may evolve and replace some or all of the 'intelligent' functions. It is shown that in addition to conventional controllers, the autonomous control system incorporates planning, learning, and FDI (fault detection and identification).
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for the evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer-graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. moving base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
Low-loss curved subwavelength grating waveguide based on index engineering
NASA Astrophysics Data System (ADS)
Wang, Zheng; Xu, Xiaochuan; Fan, D. L.; Wang, Yaoguo; Chen, Ray T.
2016-03-01
Subwavelength grating (SWG) waveguides are an intriguing alternative to conventional optical waveguides due to the freedom they offer to tune important waveguide properties such as dispersion and refractive index. Devices based on SWG waveguides have demonstrated impressive performance compared to those based on conventional waveguides. However, the large loss of SWG waveguide bends jeopardizes their application in integrated photonic circuits. In this work, we propose that a predistorted refractive index distribution in SWG waveguide bends can effectively decrease mode-mismatch loss and radiation loss simultaneously, and thus significantly reduce the bend loss. Here, we achieve the predistorted refractive index distribution by using trapezoidal silicon pillars. This geometry-tuning approach is numerically optimized and experimentally demonstrated. The average insertion loss of a 5 μm SWG waveguide bend can be reduced drastically from 5.58 dB to 1.37 dB per 90° bend for quasi-TE polarization. In the future, the proposed approach can be readily adopted to enhance the performance of an array of SWG waveguide-based photonic devices.
NASA Astrophysics Data System (ADS)
Park, J.; Yoo, K.
2013-12-01
For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability at a TCE (trichloroethylene)-contaminated industrial site in Korea. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing poor correlation with hydrogeological properties. Among the data mining methods examined, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case Based Reasoning (CBR), and Decision Tree (DT), the DT showed the best accuracy and consistency. According to the subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weights for the hydrogeological parameters at the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
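A minimal scikit-learn sketch of the DT step; the features, labels and tree depth are synthetic stand-ins for the site data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-ins for the seven DRASTIC hydrogeological inputs.
    cols = ["depth", "recharge", "aquifer", "soil",
            "topography", "impact", "conductivity"]
    rng = np.random.default_rng(1)
    X = rng.random((200, len(cols)))
    y = (X[:, 0] < 0.4).astype(int)        # toy "TCE detected" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    dt = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print(dt.score(X_te, y_te))                  # predictive accuracy
    print(export_text(dt, feature_names=cols))   # the induced rules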
Megger, Dominik A; Pott, Leona L; Rosowski, Kristin; Zülch, Birgit; Tautges, Stephanie; Bracht, Thilo; Sitek, Barbara
2017-01-01
Tandem mass tags (TMT) are usually introduced at the levels of isolated proteins or peptides. Here, for the first time, we report the labeling of whole cells and a critical evaluation of its performance in comparison to conventional labeling approaches. The obtained results indicated that TMT protein labeling using intact cells is generally possible, if it is coupled to a subsequent enrichment using anti-TMT antibody. The quantitative results were similar to those obtained after labeling of isolated proteins and both were found to be slightly complementary to peptide labeling. Furthermore, when using NHS-based TMT, no specificity towards cell surface proteins was observed in the case of cell labeling. In summary, the conducted study revealed first evidence for the general possibility of TMT cell labeling and highlighted limitations of NHS-based labeling reagents. Future studies should therefore focus on the synthesis and investigation of membrane impermeable TMTs to increase specificity towards cell surface proteins.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc network. In this approach, agent nodes are selected through optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of node. After selection of agent nodes, a procedure of malicious behaviour detection is performed using fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, comparative analysis is done with conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
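As a sketch of the agent-selection step, the optimal node reliability factor can be written as a weighted combination of the features named above; the weights and feature values here are illustrative assumptions, not AB-SEP's actual formula.

    def node_reliability(node, w=(0.2, 0.2, 0.3, 0.2, 0.1)):
        # Features are assumed pre-normalized to [0, 1] with larger = better.
        feats = (node["degree_diff"], node["norm_distance"], node["energy"],
                 node["mobility"], node["hello_interval"])
        return sum(wi * fi for wi, fi in zip(w, feats))

    nodes = [
        {"id": 1, "degree_diff": 0.8, "norm_distance": 0.6, "energy": 0.9,
         "mobility": 0.2, "hello_interval": 0.7},
        {"id": 2, "degree_diff": 0.5, "norm_distance": 0.9, "energy": 0.4,
         "mobility": 0.8, "hello_interval": 0.5},
    ]
    # Pick the highest-scoring nodes as agents; detection of malicious
    # behaviour (the FBSA stage) would then run on these agents.
    agents = sorted(nodes, key=node_reliability, reverse=True)[:1]
    print([n["id"] for n in agents])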
Restoring Balance for People with Cancer Through Integrative Oncology.
Fulop, Judy A; Grimone, Ania; Victorson, David
2017-06-01
Integrative Oncology incorporates conventional and western cancer treatment approaches with the best of ancient and traditional medicine including nutrition, supplements, Qigong, herbal medicine, mind-body practices, and more. This article offers a guiding conceptual paradigm from an integrative perspective based on the principles of balance and imbalance. An integrative approach is used to help improve quality of life, enhance lifestyle choices and mitigate symptoms and side effects from conventional treatments. By supporting the patient's mind, body and spirit throughout the cancer treatment journey, the primary care physician is in a key position to work with their patient's oncologist to provide supportive care and recommendations during cancer treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
Lin, Yu-Chun; Phua, Siew Cheng; Lin, Benjamin; Inoue, Takanari
2013-08-01
Diffusion barriers are universal solutions for cells to achieve distinct organizations, compositions, and activities within a limited space. The influence of diffusion barriers on the spatiotemporal dynamics of signaling molecules often determines cellular physiology and functions. Over the years, the passive permeability barriers in various subcellular locales have been characterized using elaborate analytical techniques. In this review, we will summarize the current state of knowledge on the various passive permeability barriers present in mammalian cells. We will conclude with a description of several conventional techniques and one new approach based on chemically inducible diffusion trap (CIDT) for probing permeable barriers. Copyright © 2013 Elsevier Ltd. All rights reserved.
An ex vivo approach to botanical-drug interactions: a proof of concept study.
Wang, Xinwen; Zhu, Hao-Jie; Munoz, Juliana; Gurley, Bill J; Markowitz, John S
2015-04-02
Botanical medicines are frequently used in combination with therapeutic drugs, imposing a risk for harmful botanical-drug interactions (BDIs). Among the existing BDI evaluation methods, clinical studies are the most desirable, but due to their expense and protracted time-line for completion, conventional in vitro methodologies remain the most frequently used BDI assessment tools. However, many predictions generated from in vitro studies are inconsistent with clinical findings. Accordingly, the present study aimed to develop a novel ex vivo approach for BDI assessment and expand the safety evaluation methodology in applied ethnopharmacological research. This approach differs from conventional in vitro methods in that rather than botanical extracts or individual phytochemicals being prepared in artificial buffers, human plasma/serum collected from a limited number of subjects administered botanical supplements was utilized to assess BDIs. To validate the methodology, human plasma/serum samples collected from healthy subjects administered either milk thistle or goldenseal extracts were utilized in incubation studies to determine their potential inhibitory effects on CYP2C9 and CYP3A4/5, respectively. Silybin A and B, two principal milk thistle phytochemicals, and hydrastine and berberine, the purported active constituents in goldenseal, were evaluated in both phosphate buffer and human plasma based in vitro incubation systems. Ex vivo study results were consistent with formal clinical study findings for the effect of milk thistle on the disposition of tolbutamide, a CYP2C9 substrate, and for goldenseal's influence on the pharmacokinetics of midazolam, a widely accepted CYP3A4/5 substrate. Compared to conventional in vitro BDI methodologies of assessment, the introduction of human plasma into the in vitro study model changed the observed inhibitory effect of silybin A, silybin B and hydrastine and berberine on CYP2C9 and CYP3A4/5, respectively, results which more closely mirrored those generated in clinical study. Data from conventional buffer-based in vitro studies were less predictive than the ex vivo assessments. Thus, this novel ex vivo approach may be more effective at predicting clinically relevant BDIs than conventional in vitro methods. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Operation of High Speed Passenger Trains in Rail Freight Corridors
DOT National Transportation Integrated Search
1975-09-01
A preliminary examination of the problems associated with mixed-traffic operations - conventional freight and high speed passenger trains - is presented. Approaches based upon a modest upgrading of existing signal systems are described. Potential cos...
Andrews, Jason R.; Prajapati, Krishna G.; Eypper, Elizabeth; Shrestha, Poojan; Shakya, Mila; Pathak, Kamal R.; Joshi, Niva; Tiwari, Priyanka; Risal, Manisha; Koirala, Samir; Karkey, Abhilasha; Dongol, Sabina; Wen, Shawn; Smith, Amy B.; Maru, Duncan; Basnyat, Buddha; Baker, Stephen; Farrar, Jeremy; Ryan, Edward T.; Hohmann, Elizabeth; Arjyal, Amit
2013-01-01
Background In many rural areas at risk for enteric fever, there are few data on Salmonella enterica serotypes Typhi (S. Typhi) and Paratyphi (S. Paratyphi) incidence, due to limited laboratory capacity for microbiologic culture. Here, we describe an approach that permits recovery of the causative agents of enteric fever in such settings. This approach involves the use of an electricity-free incubator based upon phase-change materials. We compared this against conventional blood culture for detection of typhoidal Salmonella. Methodology/Principal Findings Three hundred and four patients with undifferentiated fever attending the outpatient and emergency departments of a public hospital in the Kathmandu Valley of Nepal were recruited. Conventional blood culture was compared against an electricity-free culture approach. Blood from 66 (21.7%) patients tested positive for a Gram-negative bacterium by at least one of the two methods. Sixty-five (21.4%) patients tested blood culture positive for S. Typhi (30; 9.9%) or S. Paratyphi A (35; 11.5%). Of the 65 individuals with culture-confirmed enteric fever, 55 (84.6%) were identified by the conventional blood culture and 60 (92.3%) were identified by the experimental method. Median time-to-positivity was 2 days for both procedures. The experimental approach was falsely positive due to probable skin contaminants in 2 of 239 individuals (0.8%). The percentages of positive and negative agreement for diagnosis of enteric fever were 90.9% (95% CI: 80.0%–97.0%) and 96.0% (92.7%–98.1%), respectively. After initial incubation, Salmonella isolates could be readily recovered from blood culture bottles maintained at room temperature for six months. Conclusions/Significance A simple culture approach based upon a phase-change incubator can be used to isolate agents of enteric fever. This approach could be used as a surveillance tool to assess incidence and drug resistance of the etiologic agents of enteric fever in settings without reliable local access to electricity or local diagnostic microbiology laboratories. PMID:23853696
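For reference, the two agreement percentages reduce to simple two-by-two arithmetic; the counts below are inferred for illustration from the totals reported above.

    def agreement(both_pos, exp_only, conv_only, both_neg):
        # Positive/negative percent agreement of the electricity-free
        # method against conventional blood culture.
        ppa = both_pos / (both_pos + conv_only)
        npa = both_neg / (both_neg + exp_only)
        return ppa, npa

    # Counts inferred for illustration from the figures above (50 cases
    # positive by both methods, 5 by conventional culture only, 10 by the
    # experimental method only, 239 negative by both).
    ppa, npa = agreement(both_pos=50, exp_only=10, conv_only=5, both_neg=239)
    print(f"PPA {ppa:.1%}, NPA {npa:.1%}")   # 90.9% and 96.0%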
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the segment duration and the non-stationary character of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallelism of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution, and opens a greater scope for using more complex methods.
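A toy sketch of the GA-based parameter search; the AR model order, GA settings and synthetic signal are illustrative, and the paper's filter structure and parallel implementation are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    def ar_error(coeffs, x):
        # Mean squared one-step prediction error of an AR(p) model.
        p = len(coeffs)
        pred = np.zeros(len(x) - p)
        for i, c in enumerate(coeffs):
            pred += c * x[p - 1 - i : len(x) - 1 - i]
        return np.mean((x[p:] - pred) ** 2)

    def ga_fit(x, p, pop=40, gens=60, sigma=0.1):
        # Toy GA: truncation selection plus Gaussian mutation. The fitness
        # loop over individuals is the part a parallel GA would distribute.
        P = rng.uniform(-1, 1, size=(pop, p))
        for _ in range(gens):
            fit = np.array([ar_error(ind, x) for ind in P])
            elite = P[np.argsort(fit)[: pop // 4]]
            children = elite[rng.integers(0, len(elite), pop - len(elite))]
            P = np.vstack([elite, children + rng.normal(0, sigma, children.shape)])
        return min(P, key=lambda ind: ar_error(ind, x))

    # Synthetic stand-in for a Doppler segment: a stable AR(2) process.
    x = np.zeros(2000)
    for t in range(2, len(x)):
        x[t] = 1.5 * x[t - 1] - 0.8 * x[t - 2] + rng.normal()
    print(ga_fit(x, p=2))   # should come out near [1.5, -0.8]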
The emergence of "lifestyle medicine" as a structured approach for management of chronic disease.
Egger, Garry J; Binns, Andrew F; Rossner, Stephan R
2009-02-02
Chronic diseases with a lifestyle-based aetiology currently make up a significant proportion of primary care consultations, but management often falls between the demands of public and clinical health. A modified clinical approach, based around the concept of "lifestyle medicine", helps fill the gap by adding behavioural, motivational and environmental skills to conventional medical practice. When used in a multidisciplinary setting, lifestyle medicine offers potential cost and effectiveness benefits, which are beginning to be realised.
Environment Conscious Ceramics (Ecoceramics): An Eco-Friendly Route to Advanced Ceramic Materials
NASA Technical Reports Server (NTRS)
Singh, M.
2001-01-01
Environment conscious ceramics (Ecoceramics) are a new class of materials, which can be produced with renewable natural resources (wood) or wood wastes (wood sawdust). This technology provides an eco-friendly route to advanced ceramic materials. Ecoceramics have tailorable properties and behave like ceramic materials manufactured by conventional approaches. Silicon carbide-based ecoceramics have been fabricated by reactive infiltration of carbonaceous preforms by molten silicon or silicon-refractory metal alloys. The fabrication approach, microstructure, and mechanical properties of SiC-based ecoceramics are presented.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2014-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker-specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contain mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of the individual methods. Furthermore, on the spontaneous data the ranking and standard classification are complementary, and we obtain a marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
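A hedged sketch of one standard way to train such per-emotion rankers, the pairwise transform, with scikit-learn's LinearSVC standing in for a dedicated ranking-SVM solver and synthetic data throughout:

    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_transform(X, y, groups):
        # Reduce ranking to classification: within each query (speaker),
        # difference vectors get the sign of the preference order.
        Xp, yp = [], []
        for g in np.unique(groups):
            idx = np.where(groups == g)[0]
            for i in idx:
                for j in idx:
                    if y[i] > y[j]:
                        Xp.append(X[i] - X[j]); yp.append(1)
                        Xp.append(X[j] - X[i]); yp.append(-1)
        return np.array(Xp), np.array(yp)

    # Synthetic data: 40 utterances, 8 features, 4 speakers; y marks whether
    # an utterance expresses the target emotion (one ranker per emotion).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 8))
    y = rng.integers(0, 2, 40)
    groups = np.repeat(np.arange(4), 10)

    Xp, yp = pairwise_transform(X, y, groups)
    ranker = LinearSVC().fit(Xp, yp)
    scores = X @ ranker.coef_.ravel()   # per-utterance score for this emotion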
Richter, Craig G; Thompson, William H; Bosman, Conrado A; Fries, Pascal
2015-07-01
The quantification of covariance between neuronal activities (functional connectivity) requires the observation of correlated changes and therefore multiple observations. The strength of such neuronal correlations may itself undergo moment-by-moment fluctuations, which might, e.g., lead to fluctuations in single-trial metrics such as reaction time (RT), or may co-fluctuate with the correlation between activity in other brain areas. Yet, quantifying the relation between moment-by-moment co-fluctuations in neuronal correlations is precluded by the fact that neuronal correlations are not defined per single observation. The proposed solution quantifies this relation by first calculating neuronal correlations for all leave-one-out subsamples (i.e. the jackknife replications of all observations) and then correlating these values. Because the correlation is calculated between jackknife replications, we refer to this approach as jackknife correlation (JC). First, we demonstrate the equivalence of JC to conventional correlation for simulated paired data that are defined per observation and therefore allow the calculation of conventional correlation. While the JC recovers the conventional correlation precisely, alternative approaches, like sorting-and-binning, suffer detrimental effects of the analysis parameters. We then explore the case of relating two spectral correlation metrics, like coherence, that require multiple observation epochs, where the only viable alternative analysis approaches are based on some form of epoch subdivision, which results in reduced spectral resolution and poor spectral estimators. We show that JC outperforms these approaches, particularly for short epoch lengths, without sacrificing any spectral resolution. Finally, we note that the JC can be applied to relate fluctuations in any smooth metric that is not defined on single observations. Copyright © 2015. Published by Elsevier Inc.
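The JC recipe is short enough to state in code; in this sketch the per-observation metric is simply the mean, which also demonstrates the claimed equivalence to conventional correlation.

    import numpy as np

    def jackknife_correlation(x, y):
        # Correlate the leave-one-out (jackknife) replications of two metrics.
        # Here the "metric" is the mean; in the paper it would be, e.g., the
        # coherence recomputed with each observation epoch left out.
        n = len(x)
        jx = np.array([np.delete(x, i).mean() for i in range(n)])
        jy = np.array([np.delete(y, i).mean() for i in range(n)])
        return np.corrcoef(jx, jy)[0, 1]

    rng = np.random.default_rng(0)
    a = rng.normal(size=200)
    b = 0.6 * a + rng.normal(size=200)
    # Leave-one-out means are affine in the left-out sample, so the JC
    # matches the conventional correlation exactly, as the paper shows.
    print(np.corrcoef(a, b)[0, 1], jackknife_correlation(a, b))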
Emmerton, Lynne; Rizk, Mariam F S; Bedford, Graham; Lalor, Daniel
2015-02-01
Confusion between similar drug names can cause harmful medication errors. Similar drug names can be visually differentiated using a typographical technique known as Tall Man lettering. While international conventions exist to derive Tall Man representations of drug names, no national standard has been developed in Australia. This paper describes the derivation of a risk-based, standardized approach for the use of Tall Man lettering in Australia, known as National Tall Man Lettering. A three-stage approach was applied. First, an Australian list of similar drug names was systematically compiled from the literature and clinical error reports. Secondly, drug name pairs were prioritized using a risk matrix based on the likelihood of name confusion (a four-component score) versus consensus ratings by 31 expert reviewers of the potential severity of the confusion. The mid-type Tall Man convention was then applied to derive the typography for the highest-priority drug name pairs. Of 250 pairs of confusable Australian drug names, comprising 341 discrete names, 35 pairs were identified by the matrix as an 'extreme' risk if confused. The mid-type Tall Man convention was successfully applied to the majority of the prioritized drugs; some adaptation of the convention was required. This systematic process for identification of confusable drug names and their associated risk, followed by application of a convention for Tall Man lettering, has produced a standard now endorsed for use in clinical settings in Australia. Periodic updating is recommended to accommodate new drug names and error reports. © 2014 John Wiley & Sons, Ltd.
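As an illustration of the typography step, a simplified mid-type-style marking of the distinguishing middle letters can be sketched as follows; the published convention has additional rules not reproduced here.

    def tall_man(name_a, name_b):
        # Upper-case the letters that differ between two confusable names,
        # keeping the shared prefix and suffix in lower case. This is a
        # simplified rendering of the mid-type idea, not the endorsed standard.
        a, b = name_a.lower(), name_b.lower()
        p = 0
        while p < min(len(a), len(b)) and a[p] == b[p]:
            p += 1
        s = 0
        while s < min(len(a), len(b)) - p and a[len(a) - 1 - s] == b[len(b) - 1 - s]:
            s += 1
        mark = lambda x: x[:p] + x[p:len(x) - s].upper() + x[len(x) - s:]
        return mark(a), mark(b)

    print(tall_man("hydroxyzine", "hydralazine"))  # ('hydrOXYzine', 'hydrALAzine')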
Duda, Alfred M
2003-12-29
Interlinked crises of land degradation, food security, ecosystem decline, water quality and water flow depletion stand in the way of poverty reduction and sustainable development. These crises are made worse by increased fluctuations in climatic regimes. Single-purpose international conventions address these crises in a piecemeal, sectoral fashion and may not meet their objectives without greater attention to policy, legal, and institutional reforms related to: (i) balancing competing uses of land and water resources within hydrologic units; (ii) adopting integrated approaches to management; and (iii) establishing effective governance institutions for adaptive management within transboundary basins. This paper describes this global challenge and argues that peace, stability and security are all at stake when integrated approaches are not used. The paper presents encouraging results from a decade of transboundary water projects supported by the Global Environment Facility in developing countries that test practical applications of processes for facilitating reforms related to land and water that are underpinned by science-based approaches. Case studies of using these participative processes are described that collectively assist in the transition to integrated management. A new imperative for incorporating interlinkages among food, water, and environment security at the basin level is identified.
NASIS data base management system: IBM 360 TSS implementation. Volume 1: Installation standards
NASA Technical Reports Server (NTRS)
1973-01-01
The installation standards for the NASIS data base management system are presented. The standard approach to preparing systems documentation and the program design and coding rules and conventions are outlined. Included are instructions for preparing all major specifications and suggestions for improving the quality and efficiency of the programming task.
USDA-ARS's Scientific Manuscript database
Molecular detection of bacterial pathogens based on LAMP methods is a faster and simpler approach than conventional culture methods. Although different LAMP-based methods for pathogenic bacterial detection are available, a systematic comparison of these different LAMP assays has not been performed. ...
Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms
ERIC Educational Resources Information Center
Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy
2005-01-01
Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…
Gassner, C; Karlsson, R; Lipsmeier, F; Moelleken, J
2018-05-30
Previously we have introduced two SPR-based assay principles (dual-binding assay and bridging assay), which allow the determination of two out of three possible interaction parameters for bispecific molecules within one assay setup: the two individual interactions with each target, and/or one simultaneous/overall interaction, which potentially reflects the inter-dependency of both individual binding events. However, activity and similarity are determined by comparing report points over a concentration range, which also mirrors the way data are generated by conventional ELISA-based methods. So far, binding kinetics have not been specifically considered in generic approaches for activity assessment. Here, we introduce an improved slope-ratio model which, together with a sensorgram-comparison-based similarity assessment, allows the development of a detailed, USP-conformal ligand binding assay using only a single sample concentration. We compare this novel analysis method to the usual concentration-range approach for both SPR-based assay principles and discuss its impact on data quality and increased sample throughput. Copyright © 2018 Elsevier B.V. All rights reserved.
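For orientation, the classical slope-ratio potency calculation that the improved model builds on can be sketched as follows; the response values are hypothetical, and the paper's single-concentration, kinetics-aware variant is not reproduced.

    import numpy as np

    def slope_ratio_potency(conc, resp_ref, resp_test):
        # Classical slope-ratio model: fit both dose-response lines with a
        # common intercept; relative potency is the ratio of fitted slopes.
        n = len(conc)
        X = np.zeros((2 * n, 3))    # columns: intercept, slope_ref, slope_test
        X[:, 0] = 1.0
        X[:n, 1] = conc
        X[n:, 2] = conc
        y = np.concatenate([resp_ref, resp_test])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[2] / beta[1]

    conc = np.array([0.5, 1.0, 2.0, 4.0])
    ref  = np.array([1.1, 2.0, 4.1, 8.0])   # hypothetical report points
    test = np.array([0.9, 1.6, 3.2, 6.4])   # roughly 80%-active test sample
    print(slope_ratio_potency(conc, ref, test))   # ~0.8 relative potency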
Problem based learning approaches to the technology education of physical therapy students.
Castro-Sánchez, Adelaida M; Aguilar-Ferrándiz, María Encarnación M E; Matarán-Peñarrocha, Guillermo A Ga; Iglesias-Alonso, Alberto A; Fernández-Fernández, Maria Jesus M J; Moreno-Lorenzo, Carmen C
2012-01-01
Problem-Based Learning (PBL) is a whole-curriculum concept. This study aimed to compare learning preferences and strategies between physical therapy students taught by PBL and those receiving conventional lectures on massage therapy, trauma physical therapy, and electrotherapy, hydrotherapy, and thermotherapy. This quasi-experimental study included 182 male and female students on physical therapy diploma courses at three universities in Andalusia (Spain). The Canfield Learning Skills Inventory (CLSI) was used to assess learning strategies and the Approaches to Study Skills Inventory for Students (ASSIST) to analyze study preferences. At the end of the academic year 2009/10, physical therapy students taught by PBL considered the most important learning strategies to be group work, study organization, relationship of ideas, and academic results. In comparison to conventionally taught counterparts, they considered that PBL reduced lack of purpose, memorizing without relating, the law of minimum effort, and fear of failure. Among these PBL students, the most highly rated study preferences were: organization of course tasks, cordial interaction with the teacher, learning by reading and images, and direct hands-on experience. For these physical therapy students, PBL facilitates learning strategies and study preferences in comparison to conventional teaching.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection of a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.
Boeker, Martin; Andel, Peter; Vach, Werner; Frankenschmidt, Alexander
2013-01-01
Background When compared with more traditional instructional methods, Game-based e-learning (GbEl) promises a higher motivation of learners by presenting contents in an interactive, rule-based and competitive way. Most recent systematic reviews and meta-analysis of studies on Game-based learning and GbEl in the medical professions have shown limited effects of these instructional methods. Objectives To compare the effectiveness on the learning outcome of a Game-based e-learning (GbEl) instruction with a conventional script-based instruction in the teaching of phase contrast microscopy urinalysis under routine training conditions of undergraduate medical students. Methods A randomized controlled trial was conducted with 145 medical students in their third year of training in the Department of Urology at the University Medical Center Freiburg, Germany. 82 subjects were allocated for training with an educational adventure-game (GbEl group) and 69 subjects for conventional training with a written script-based approach (script group). Learning outcome was measured with a 34 item single choice test. Students' attitudes were collected by a questionnaire regarding fun with the training, motivation to continue the training and self-assessment of acquired knowledge. Results The students in the GbEl group achieved significantly better results in the cognitive knowledge test than the students in the script group: the mean score was 28.6 for the GbEl group and 26.0 for the script group of a total of 34.0 points with a Cohen's d effect size of 0.71 (ITT analysis). Attitudes towards the recent learning experience were significantly more positive with GbEl. Students reported to have more fun while learning with the game when compared to the script-based approach. Conclusions Game-based e-learning is more effective than a script-based approach for the training of urinalysis in regard to cognitive learning outcome and has a high positive motivational impact on learning. Game-based e-learning can be used as an effective teaching method for self-instruction. PMID:24349257
Harmonic wavelet packet transform for on-line system health diagnosis
NASA Astrophysics Data System (ADS)
Yan, Ruqiang; Gao, Robert X.
2004-07-01
This paper presents a new approach to on-line health diagnosis of mechanical systems, based on the wavelet packet transform. Specifically, signals acquired from vibration sensors are decomposed into sub-bands by means of the discrete harmonic wavelet packet transform (DHWPT). Based on the Fisher linear discriminant criterion, features in the selected sub-bands are then used as inputs to three classifiers (one Nearest Neighbor rule-based and two Neural Network-based) for system health condition assessment. Experimental results have confirmed that, compared to the conventional approach in which statistical parameters of the raw signals are used, the presented approach enables a higher signal-to-noise ratio and more effective and intelligent use of the sensory information, leading to more accurate system health diagnosis.
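To make the sub-band feature extraction concrete, here is a minimal Python sketch of the same pipeline under stated assumptions: pywt has no harmonic wavelet, so a standard Daubechies wavelet packet stands in for the DHWPT, and the signals and class labels are synthetic illustrations rather than the paper's data.

```python
# A minimal sketch of sub-band feature extraction for health diagnosis,
# with a Daubechies wavelet packet standing in for the harmonic wavelet
# packet used in the paper (pywt has no harmonic wavelet).
import numpy as np
import pywt

def subband_energies(signal, level=3, wavelet="db4"):
    """Decompose a vibration signal and return the energy of each sub-band."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")       # sub-bands, low to high
    return np.array([np.sum(node.data ** 2) for node in nodes])

def fisher_score(feat_a, feat_b):
    """Fisher linear discriminant criterion for one scalar feature,
    computed from samples of two health conditions."""
    return (feat_a.mean() - feat_b.mean()) ** 2 / (feat_a.var() + feat_b.var() + 1e-12)

# Toy usage: synthetic healthy vs. faulty vibration signatures.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
healthy = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
faulty = [np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
          + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
E_h = np.array([subband_energies(s) for s in healthy])
E_f = np.array([subband_energies(s) for s in faulty])
scores = [fisher_score(E_h[:, k], E_f[:, k]) for k in range(E_h.shape[1])]
print("most discriminative sub-band:", int(np.argmax(scores)))
```

The Fisher score ranks sub-bands by class separability, mirroring the paper's criterion for selecting classifier inputs.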
A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, Horst; Laurischkat, Roman; Zhu Junhong
One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its compensation method.
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluations have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of a clinician, and human error cannot be neglected. On the other hand, the objective or automatic approach is noninvasive. Automatically developed systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features such as the linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCC was developed, and three different voice disorder databases were used in this study. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database. The detection rate for the intra-database experiments ranges from 72% to 95%, and that for the inter-database experiments from 47% to 82%. The results conclude that conventional speech features are not correlated with voice quality, and hence are not reliable in pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
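As an illustration of the conventional feature set discussed above, the following sketch computes a fixed-length MFCC summary per utterance, assuming librosa is available; the file name is hypothetical, and any classifier could consume the resulting vector.

```python
# A minimal sketch of MFCC-based features for voice-pathology screening.
import numpy as np
import librosa

def mfcc_features(wav_path, n_mfcc=13):
    """Return the per-utterance mean and std of MFCCs, a common
    fixed-length summary fed to a conventional classifier."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# feats = mfcc_features("sample_voice.wav")  # hypothetical file name
```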
de Almeida, Marcos E; Koru, Ozgur; Steurer, Francis; Herwaldt, Barbara L; da Silva, Alexandre J
2017-01-01
Leishmaniasis in humans is caused by Leishmania spp. in the subgenera Leishmania and Viannia. Species identification often has clinical relevance. Until recently, our laboratory relied on conventional PCR amplification of the internal transcribed spacer 2 (ITS2) region (ITS2-PCR) followed by sequencing analysis of the PCR product to differentiate Leishmania spp. Here we describe a novel real-time quantitative PCR (qPCR) approach based on SYBR green technology (LSG-qPCR), which uses genus-specific primers that target the ITS1 region and amplify DNA from at least 10 Leishmania spp., followed by analysis of the melting temperature (Tm) of the amplicons on qPCR platforms (the Mx3000P qPCR system [Stratagene-Agilent] and the 7500 real-time PCR system [ABI Life Technologies]). We initially evaluated the assay by testing reference Leishmania isolates and comparing the results with those from the conventional ITS2-PCR approach. Then we compared the results from the real-time and conventional molecular approaches for clinical specimens from 1,051 patients submitted to the reference laboratory of the Centers for Disease Control and Prevention for Leishmania diagnostic testing. Specimens from 477 patients tested positive for Leishmania spp. with the LSG-qPCR assay, specimens from 465 of these 477 patients also tested positive with the conventional ITS2-PCR approach, and specimens from 10 of these 465 patients had positive results because of retesting prompted by LSG-qPCR positivity. On the basis of the Tm values of the LSG-qPCR amplicons from reference and clinical specimens, we were able to differentiate four groups of Leishmania parasites: the Viannia subgenus in aggregate; the Leishmania (Leishmania) donovani complex in aggregate; the species L. (L.) tropica; and the species L. (L.) mexicana, L. (L.) amazonensis, L. (L.) major, and L. (L.) aethiopica in aggregate. Copyright © 2016 American Society for Microbiology.
Cost effectiveness of conventional versus LANDSAT land use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas containing more than 10 square miles.
On the BV formalism of open superstring field theory in the large Hilbert space
NASA Astrophysics Data System (ADS)
Matsunaga, Hiroaki; Nomura, Mitsuru
2018-05-01
We construct several BV master actions for open superstring field theory in the large Hilbert space. First, we show that a naive use of the conventional BV approach breaks down at the third order of the antifield number expansion, although it enables us to define a simple "string antibracket" taking the Darboux form as spacetime antibrackets. This fact implies that in the large Hilbert space, "string fields-antifields" should be reassembled to obtain master actions in a simple manner. We determine the assembly of the string antifields on the basis of Berkovits' constrained BV approach, and give solutions to the master equation defined by Dirac antibrackets on the constrained string field-antifield space. It is expected that partial gauge-fixing enables us to relate superstring field theories based on the large and small Hilbert spaces directly: reassembling string fields-antifields is rather natural from this point of view. Finally, inspired by these results, we revisit the conventional BV approach and construct a BV master action based on the minimal set of string fields-antifields.
Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui
2018-01-20
We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal to noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: a noise classifier (NC) and a deep denoising autoencoder (DDAE), thus termed (NC + DDAE). In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) a normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of the objective evaluation and listening test indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion for the key speech envelope information, thus improving speech recognition more effectively for Mandarin CI recipients. The results suggest that the proposed deep learning-based NR approach can potentially be integrated into existing CI signal processors to overcome the degradation of speech perception caused by noise.
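A minimal sketch of the DDAE module is given below, assuming PyTorch; the layer sizes, spectral dimension, and training data are illustrative stand-ins, and the paper's NC module (which selects among noise-specific models) is omitted.

```python
# A minimal sketch of a deep denoising autoencoder (DDAE) for speech
# enhancement, mapping noisy log-magnitude spectral frames to clean ones.
# All sizes and the training data are illustrative, not the paper's.
import torch
import torch.nn as nn

class DDAE(nn.Module):
    def __init__(self, n_bins=257, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins),          # enhanced spectrum
        )

    def forward(self, noisy):
        return self.net(noisy)

model = DDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One toy training step on random tensors standing in for
# (noisy, clean) log-spectral frame pairs.
noisy = torch.randn(32, 257)
clean = torch.randn(32, 257)
opt.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
opt.step()
```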
Faye, Alexandrine; Jacquin-Courtois, Sophie; Osiurak, François
2018-03-01
The purpose of this study was to deepen our understanding of the cognitive bases of human tool use based on the technical reasoning hypothesis (i.e., the reasoning-based approach). This approach assumes that tool use is supported by the ability to reason about an object's physical properties (e.g., length, weight, strength, etc.) to perform mechanical actions (e.g., lever). In this framework, an important issue is to understand whether left-brain-damaged (LBD) individuals with tool-use deficits are still able to estimate the physical object's properties necessary to use the tool. Eleven LBD patients and 12 control participants performed 3 original experimental tasks: Use-Length (visual evaluation of the length of a stick to bring down a target), Visual-Length (to visually compare objects of different lengths) and Addition-Length (to visually compare added lengths). Participants were also tested on conventional tasks: Familiar Tool Use and Mechanical Problem-Solving (novel tools). LBD patients had more difficulties than controls on both conventional tasks. No significant differences were observed for the 3 experimental tasks. These results extend the reasoning-based approach, stressing that it might not be the representation of length that is impaired in LBD patients, but rather the ability to generate mechanical actions based on physical object properties. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An engineering approach to the use of expert systems technology in avionics applications
NASA Technical Reports Server (NTRS)
Duke, E. L.; Regenie, V. A.; Brazee, M.; Brumbaugh, R. W.
1986-01-01
The concept of using a knowledge compiler to transform the knowledge base and inference mechanism of an expert system into a conventional program is presented. The need to accommodate real-time systems requirements in applications such as embedded avionics is outlined. Expert systems are reviewed, along with a brief comparison between expert systems and conventional programs. Avionics applications of expert systems are discussed before the application of the proposed concept to example systems using forward and backward chaining.
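The rule-based inference that such a knowledge compiler would translate into conventional code can be illustrated with a minimal forward-chaining sketch; the rules and facts below are hypothetical avionics examples, not from the paper.

```python
# A minimal forward-chaining sketch of the kind of rule-based inference
# an expert-system "knowledge compiler" would translate into conventional
# code. Rules and facts are hypothetical avionics examples.
rules = [
    ({"hydraulic_pressure_low", "pump_on"}, "pump_fault"),
    ({"pump_fault"}, "switch_to_backup_pump"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are satisfied until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Both rules fire in sequence, adding the two inferred facts.
print(forward_chain({"hydraulic_pressure_low", "pump_on"}, rules))
```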
2013-02-05
could be a promising catalyst for PEM fuel cells. Introduction: Proton exchange membrane fuel cells (PEMFCs) have found wide potential... Unfortunately, due to their high cost and low lifespan, wide-scale commercialization of PEMFCs has been greatly impeded and much effort has been made to... lower its cost as well as to improve its durability over time. In an attempt to alleviate the high cost associated with conventional PEMFC catalysts
Linear energy transfer incorporated intensity modulated proton therapy optimization
NASA Astrophysics Data System (ADS)
Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe
2018-01-01
The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) include only dose-based objectives in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only a pronounced reduction of LET values in critical organs, such as the brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion, there was a need to trade off the acceptability of dose and LET distributions. Our conclusion is that the inclusion of LET-dependent criteria in the IMPT optimization could lead to similar dose distributions as the conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.
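To illustrate the idea of adding LET terms to the cost function, here is a simplified Python sketch; the influence matrices are random stand-ins, the dose-averaged LET is computed directly rather than via the paper's variable reformulation, and the weighting scheme is an assumption.

```python
# A simplified sketch of LET-incorporated IMPT optimization, assuming
# precomputed per-beamlet dose (D) and dose*LET (DL) influence matrices;
# all data are random stand-ins and the objective is illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_vox, n_beamlets = 200, 50
D = rng.random((n_vox, n_beamlets)) * 0.1       # dose influence
DL = D * rng.random((n_vox, n_beamlets)) * 5.0  # dose-weighted LET influence
target = np.zeros(n_vox, dtype=bool)
target[:80] = True                              # first 80 voxels = tumor
d_prescribed = 2.0

def objective(w, let_weight=0.1):
    d = D @ w
    # dose-averaged LET per voxel (guard against division by zero)
    let = (DL @ w) / np.maximum(d, 1e-6)
    dose_term = np.mean((d[target] - d_prescribed) ** 2) + np.mean(d[~target] ** 2)
    # reward high LET in the target, penalize high LET in normal tissue
    let_term = -np.mean(let[target]) + np.mean(let[~target])
    return dose_term + let_weight * let_term

w0 = np.full(n_beamlets, 1.0)
res = minimize(objective, w0, bounds=[(0, None)] * n_beamlets, method="L-BFGS-B")
print("optimized objective:", res.fun)
```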
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
Genome-based approaches to develop vaccines against bacterial pathogens.
Serruto, Davide; Serino, Laura; Masignani, Vega; Pizza, Mariagrazia
2009-05-26
Bacterial infectious diseases remain the single most important threat to health worldwide. Although conventional vaccinology approaches were successful in conferring protection against several diseases, they failed to provide efficacious solutions against many others. The advent of whole-genome sequencing changed the way to think about vaccine development, enabling the targeting of possible vaccine candidates starting from the genomic information of a single bacterial isolate, with a process named reverse vaccinology. As the genomic era progressed, reverse vaccinology has evolved with a pan-genome approach and multi-strain genome analysis became fundamental for the design of universal vaccines. This review describes the applications of genome-based approaches in the development of new vaccines against bacterial pathogens.
NASA Astrophysics Data System (ADS)
Milic, Vladimir; Kasac, Josip; Novakovic, Branko
2015-10-01
This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
Zhou, Hufeng; Gao, Shangzhi; Nguyen, Nam Ninh; Fan, Mengyuan; Jin, Jingjing; Liu, Bing; Zhao, Liang; Xiong, Geng; Tan, Min; Li, Shijun; Wong, Limsoon
2014-04-08
H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result, and clearly show that our approach has better performance in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs which might be useful for many related studies. Based on our analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent homology-based approach, we have discovered several interesting properties which are reported here for the first time. We find that both host proteins and pathogen proteins involved in the host-pathogen PPIs tend to be hubs in their own intra-species PPI network. Also, both host and pathogen proteins involved in host-pathogen PPIs tend to have longer primary sequences, tend to have more domains, tend to be more hydrophilic, etc. The protein domains from both host and pathogen proteins involved in host-pathogen PPIs tend to have lower charge and tend to be more hydrophilic. Our stringent homology-based prediction approach provides a better strategy for predicting PPIs between eukaryotic hosts and prokaryotic pathogens than a conventional homology-based approach. The properties we have observed from the predicted H. sapiens-M. tuberculosis H37Rv PPI network are useful for understanding inter-species host-pathogen PPI networks and provide novel insights for host-pathogen interaction studies.
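The interolog-style transfer underlying homology-based PPI prediction can be sketched in a few lines; the identifiers, identity scores, and stringency threshold below are hypothetical, and real pipelines would derive the homology maps from BLAST searches.

```python
# A minimal sketch of homology-based (interolog) host-pathogen PPI
# transfer: if host protein h maps to template protein a, pathogen
# protein p maps to template protein b, and (a, b) interact in the
# template set, predict (h, p). All identifiers are hypothetical.
template_ppis = {("A1", "B7"), ("A3", "B2")}

# protein -> list of (template homolog, sequence identity)
host_homologs = {"HUMAN_P53": [("A1", 0.62)], "HUMAN_EGFR": [("A3", 0.35)]}
pathogen_homologs = {"MTB_ESAT6": [("B7", 0.55)], "MTB_KATG": [("B2", 0.28)]}

def predict_ppis(min_identity=0.4):
    """Stringent variant: require both homology mappings to pass an
    identity threshold before transferring the template interaction."""
    predictions = set()
    for h, h_hits in host_homologs.items():
        for p, p_hits in pathogen_homologs.items():
            for a, ident_a in h_hits:
                for b, ident_b in p_hits:
                    if (a, b) in template_ppis and min(ident_a, ident_b) >= min_identity:
                        predictions.add((h, p))
    return predictions

print(predict_ppis())  # {('HUMAN_P53', 'MTB_ESAT6')}
```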
Bio-barcode gel assay for microRNA
NASA Astrophysics Data System (ADS)
Lee, Hyojin; Park, Jeong-Eun; Nam, Jwa-Min
2014-02-01
MicroRNA has been identified as a potential biomarker because microRNA expression levels are correlated with various cancers. Its detection at low concentrations would be highly beneficial for cancer diagnosis. Here, we develop a new type of DNA-modified gold nanoparticle-based bio-barcode assay that uses a conventional gel electrophoresis platform and potassium cyanide chemistry, and show that this assay can detect microRNA at aM levels without enzymatic amplification. It is also shown that single-base-mismatched microRNA can be differentiated from perfectly matched microRNA and that the multiplexed detection of various combinations of microRNA sequences is possible with this approach. Finally, differently expressed microRNA levels are selectively detected from cancer cells using the bio-barcode gel assay, and the results are compared with conventional polymerase chain reaction-based results. The method and results shown herein pave the way for practical use of conventional gel electrophoresis for detecting biomolecules of interest even at aM levels without polymerase chain reaction amplification.
Empirical projection-based basis-component decomposition method
NASA Astrophysics Data System (ADS)
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
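A toy version of the projection-domain decomposition problem can clarify the setup: with three energy bins and three basis materials, the line integrals are recovered from measured counts, here by the Poisson maximum-likelihood route the paper compares against. All spectra and attenuation curves below are synthetic stand-ins.

```python
# A minimal sketch of projection-domain basis decomposition for a
# photon-counting detector with three energy bins and three basis
# materials; spectra and attenuation functions are synthetic.
import numpy as np
from scipy.optimize import minimize

E = np.arange(20, 120)                          # keV grid
f = np.stack([                                  # basis attenuation vs energy
    (E / 60.0) ** -3,                           # "photoelectric"-like
    np.full_like(E, 0.2, dtype=float),          # "Compton"-like
    0.5 * (E >= 50),                            # K-edge contrast (toy edge)
])
S = np.stack([                                  # bin sensitivity x tube spectrum
    np.exp(-0.5 * ((E - 40) / 10.0) ** 2),
    np.exp(-0.5 * ((E - 65) / 10.0) ** 2),
    np.exp(-0.5 * ((E - 90) / 10.0) ** 2),
]) * 1e5

def expected_counts(a):
    return (S * np.exp(-(a @ f))).sum(axis=1)

a_true = np.array([1.5, 2.0, 0.3])              # true basis line integrals
counts = np.random.default_rng(2).poisson(expected_counts(a_true))

def neg_log_likelihood(a):
    lam = expected_counts(a)
    return np.sum(lam - counts * np.log(lam))

a_hat = minimize(neg_log_likelihood, x0=np.ones(3), method="Nelder-Mead").x
print("true:", a_true, "estimated:", a_hat.round(3))
```

The empirical approach the paper proposes replaces this per-ray iterative fit with a direct parameterization calibrated from prior spectral knowledge, which is where the speed advantage comes from.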
NASA Astrophysics Data System (ADS)
Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati
2018-04-01
The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset has been selected by matching the present-day condition to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
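The core of the similarity-based ensemble can be sketched as analog selection followed by a least-squares fit; the data below are random stand-ins and the Euclidean similarity metric is an assumption, not necessarily the paper's choice.

```python
# A minimal sketch of the similarity-based multi-model ensemble (SMME)
# idea: pick the archived days most similar to today's model forecasts
# and fit the ensemble weights on those analogs only.
import numpy as np

rng = np.random.default_rng(3)
archive_fcst = rng.random((500, 4)) * 20       # 4 GCM forecasts, 500 past days
archive_obs = archive_fcst @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 1, 500)
today_fcst = rng.random(4) * 20

def smme_predict(today, fcst, obs, n_analogs=50):
    dist = np.linalg.norm(fcst - today, axis=1)
    idx = np.argsort(dist)[:n_analogs]         # most similar archived days
    X = np.column_stack([fcst[idx], np.ones(n_analogs)])
    coef, *_ = np.linalg.lstsq(X, obs[idx], rcond=None)
    return np.append(today, 1.0) @ coef

print("SMME rainfall forecast:", smme_predict(today_fcst, archive_fcst, archive_obs))
```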
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.
2014-01-01
Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
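A minimal sketch of the subspace idea follows, with the full k-space least-squares problem reduced to a denoising projection for brevity: the temporal subspace comes from an SVD of a toy exponential dictionary, and parameters are read off by pattern matching.

```python
# A minimal sketch of subspace-constrained MRF reconstruction: estimate a
# temporal subspace from the dictionary by SVD, project noisy voxel time
# courses onto it, then match to the dictionary. Dictionary entries are
# toy exponentials; the projection stands in for the full inverse problem.
import numpy as np

rng = np.random.default_rng(4)
T = 300                                        # time points in the sequence
t1_grid = np.linspace(100, 2000, 200)          # candidate T1 values (ms)
t = np.arange(1, T + 1, dtype=float) * 10.0    # toy time axis (ms)
D = np.exp(-t[None, :] / t1_grid[:, None])     # dictionary (atoms x time)
D /= np.linalg.norm(D, axis=1, keepdims=True)

rank = 5
_, _, Vt = np.linalg.svd(D, full_matrices=False)
Phi = Vt[:rank]                                # temporal subspace (rank x T)

x_true = D[120]                                # a voxel with known T1
x_noisy = x_true + 0.05 * rng.standard_normal(T)
x_proj = Phi.T @ (Phi @ x_noisy)               # subspace-constrained estimate

match = np.argmax(D @ x_proj)                  # dictionary pattern matching
print("true T1:", t1_grid[120], "estimated T1:", t1_grid[match])
```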
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
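The graph-optimization step can be illustrated with a toy pose graph reduced to 1-D poses along a corridor; real systems optimize 2-D/3-D poses with solvers such as g2o, but the least-squares structure is the same. All measurements below are invented.

```python
# A minimal sketch of pose-graph optimization in 1-D: odometry edges
# chain successive poses, one camera-derived edge constrains the span,
# and a gauge constraint pins the first pose.
import numpy as np

n = 6                                           # poses x_0 .. x_5 (1-D)
odom = np.array([1.1, 0.9, 1.2, 1.0, 0.8])      # noisy relative motions
visual = 4.7                                    # camera-derived constraint x_5 - x_0

# Build the linear system: each edge contributes one row of A x = b.
rows, b = [], []
for i, u in enumerate(odom):                    # odometry edges
    r = np.zeros(n); r[i + 1], r[i] = 1, -1
    rows.append(r); b.append(u)
r = np.zeros(n); r[-1], r[0] = 1, -1            # loop/visual edge
rows.append(r); b.append(visual)
r = np.zeros(n); r[0] = 1                       # gauge: fix x_0 = 0
rows.append(r); b.append(0.0)

A, b = np.array(rows), np.array(b)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("optimized poses:", x.round(3))
```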
ERIC Educational Resources Information Center
Berg, Ronan M. G.; Plovsing, Ronni R.; Damgaard, Morten
2012-01-01
Quiz-based and collaborative teaching strategies have previously been found to be efficient for improving meaningful learning of physiology during lectures. These approaches have, however, not been investigated during laboratory exercises. In the present study, we compared the impact of solving quizzes individually and in groups with…
ERIC Educational Resources Information Center
Huang, Hsiu-Mei; Rauch, Ulrich; Liaw, Shu-Sheng
2010-01-01
The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the…
Children Have the Right to Have Rights
ERIC Educational Resources Information Center
Brandao, Caius
2007-01-01
The United Nations Convention on the Rights of the Child (CRC) has forged a fundamental shift of paradigm in program and public policy design. Whereas in most countries the needs-based approach has historically guided services and policies for children, the CRC sets out a new perspective based on the human rights of all children. This perspective…
SVM and PCA Based Learning Feature Classification Approaches for E-Learning System
ERIC Educational Resources Information Center
Khamparia, Aditya; Pandey, Babita
2018-01-01
E-learning and online education has made great improvements in the recent past. It has shifted the teaching paradigm from conventional classroom learning to dynamic web based learning. Due to this, dynamic learning material has been delivered to learners, instead of static content, according to their skills, needs and preferences. In this…
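The PCA-plus-SVM pattern the title refers to is a standard pipeline; a minimal scikit-learn sketch with random stand-in learner features follows.

```python
# A minimal sketch of the PCA + SVM classification pattern applied to
# learner features; the data are random stand-ins for learner-behaviour
# feature vectors and hypothetical learning-style labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.random((120, 30))                   # 120 learners, 30 raw features
y = rng.integers(0, 3, 120)                 # 3 hypothetical learning styles

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X[:100], y[:100])
print("held-out accuracy:", clf.score(X[100:], y[100:]))
```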
Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging
Guo, Qijia; Wang, Jie; Chang, Tianying
2017-01-01
The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083
Damage free integration of ultralow-k dielectrics by template replacement approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, L.; De Gendt, S.; Department of Chemistry, Katholieke Universiteit Leuven, 3000 Leuven
2015-08-31
Cu/low-k integration by the conventional damascene approach is becoming increasingly difficult as critical dimensions scale down. An alternative integration scheme is studied based on the replacement of a sacrificial template by an ultralow-k dielectric. A metal structure is first formed by patterning a template material. After template removal, a k = 2.31 spin-on type of porous low-k dielectric is deposited onto the patterned metal lines. The chemical and electrical properties of spin-on dielectrics are studied on blanket wafers, indicating that during hard bake, most porogen is removed within a few minutes, but 120 min are required to achieve the lowest k-value. The effective dielectric constant of the gap-fill low-k is investigated on a 45 nm ½ pitch Meander-Fork structure, leading to k_eff below 2.4. The proposed approach solves the two major challenges in the conventional Cu/low-k damascene integration approach: low-k plasma damage and metal penetration during barrier deposition on porous materials.
Why children's rights are central to international child health.
Waterston, T; Goldhagen, J
2007-02-01
The UN Convention on the Rights of the Child provides a framework for improving children's lives around the world. It covers both individual child health practice and public health, and provides a unique and child-centred approach to paediatric problems. The Convention applies to most child health problems and the articles are grouped into protection, provision and participation. Examples of the first are the right to protection from abuse, from economic exploitation and from illicit drugs. We examine one particular problem in each of these categories, specifically child labour, services for children with a disability and violence against children. The role of the paediatrician in applying a children's rights approach is discussed. Children's rights are increasingly being accepted around the world, but far more rhetoric is still paid to their value than genuine enforcement. Paediatricians can make a difference to the status of children worldwide by adopting a rights-based approach.
NASA Astrophysics Data System (ADS)
Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2018-01-01
Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging applications, forming narrower Doppler filter groups with lower side lobe levels than the FFT. Regrettably, the IAA is iterative, and requires matrix multiplication and inversion when forming the covariance matrix and its inverse and traversing the WLS estimate for each sampling point, resulting in notably high (cubic-time) computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method, taking advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication operation, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian property of the echo autocorrelation matrix R to achieve its fast solution and uses the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and operational efficiency.
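Two of the paper's computational ingredients, FFT-based covariance formation and Levinson-type Toeplitz solves, can be sketched as follows; scipy's Levinson-based solve_toeplitz stands in for the Gohberg-Semencul/Toeplitz-vector machinery, and the data are random stand-ins for one azimuth echo.

```python
# A minimal sketch of the two tricks behind the fast IAA: (1) estimate the
# first column of the Toeplitz covariance R with the FFT instead of matrix
# products, and (2) solve systems in R by Levinson recursion (O(N^2))
# rather than forming and inverting R (O(N^3)).
import numpy as np
from scipy.linalg import solve_toeplitz        # Levinson-Durbin based solver

rng = np.random.default_rng(6)
N = 256
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # echo samples

# FFT-based (biased) autocorrelation estimate: O(N log N).
Y = np.fft.fft(y, 2 * N)
r = np.fft.ifft(np.abs(Y) ** 2)[:N] / N       # first column of Toeplitz R
r[0] = r[0].real + 1e-3                       # diagonal loading for stability

# WLS filter for one Doppler bin: w = R^{-1} a / (a^H R^{-1} a).
f = 0.1                                        # normalized Doppler frequency
a = np.exp(2j * np.pi * f * np.arange(N))      # steering vector
Rinv_a = solve_toeplitz((r, r.conj()), a)      # Levinson solve, no explicit R
w = Rinv_a / (a.conj() @ Rinv_a)
print("spectral estimate at f:", w.conj() @ y)
```

In the full IAA these WLS estimates are recomputed for every Doppler bin and iterated until the spectrum converges; the sketch shows a single bin of a single iteration.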
Anorexia: Highlights in Traditional Persian medicine and conventional medicine
Nimrouzi, Majid; Zarshenas, Mohammad Mehdi
2018-01-01
Objective: Anorexia and impaired appetite (dysorexia) are common symptoms with varying causes, and often need no serious medical intervention. Anorexia nervosa (AN) is a chronic psychiatric disease with a high mortality rate. In Traditional Persian Medicine (TPM), anorexia is a condition in which anorexic patients lose appetite due to dystemperament. This review aims to discuss the common points of the traditional and conventional approaches rather than introducing Persian medical recommendations suitable for present-day use. Materials and Methods: For this purpose, Avicenna's Canon of Medicine, the main TPM resources and important databases were reviewed using the related keywords. Results: Despite complex hormonal explanations, the etiology of AN in the conventional approach is not completely understood. In the TPM approach, the etiology and recommended interventions are thoroughly defined based on humoral pathophysiology. In the TPM approach, disease states are regarded as the result of imbalances in organs’ temperament and humors. In anorexia with simple dystemperament, the physician should attempt to balance the temperament using foods and medicaments which have the opposite quality of temperament. Lifestyle, spiritual (neuro-psychological) diseases and gastrointestinal worms are other causes of reduced appetite. Also, medicines and foods with warm temperaments (such as pea soup and mustard) are useful for these patients (cold temperament). Conclusion: Although the pathophysiology of AN in TPM differs from conventional views, the TPM criteria for treating this disorder are similar to those of current medicine. Recommendations for spiritual support and a healthy lifestyle are common to both views. Simple safe interventions recommended by TPM may be considered as alternative medical modalities after being confirmed by well-designed clinical trials. PMID:29387569
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the human resource (HR) requirements of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative study was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR were estimated by using the evidence-based adjusted service target approach for health workforce planning; then, comparisons between existing and estimated HRs were made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times) and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.
Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz
2017-01-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even when examining weakly absorbing specimens. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach, and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome the experimental challenges posed by this method, which would enable mainstream or even clinical application of PCCT.
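For context, the conventional phase-stepping retrieval that the reverse projection method replaces can be written in a few lines: each pixel's stepping curve is sinusoidal, and the differential phase is the argument of its first Fourier coefficient. The values below are synthetic.

```python
# A minimal sketch of conventional phase-stepping retrieval in grating
# interferometry for a single pixel, with synthetic Poisson-noisy counts.
import numpy as np

steps = 8
k = np.arange(steps)
a, b, phi_true = 100.0, 30.0, 0.7             # mean, modulation, phase
counts = np.random.default_rng(7).poisson(
    a + b * np.cos(2 * np.pi * k / steps + phi_true))

c = np.fft.fft(counts)
phi = np.angle(c[1])                           # retrieved differential phase
visibility = 2 * np.abs(c[1]) / np.abs(c[0])   # fringe visibility b/a
print("true phase:", phi_true, "retrieved:", phi, "visibility:", visibility)
```

It is exactly this per-pixel fit that becomes unstable at very low photon counts, which is the regime where the reverse projection method is reported to perform better.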
Klantsataya, Elizaveta; Jia, Peipei; Ebendorff-Heidepriem, Heike; Monro, Tanya M.; François, Alexandre
2016-01-01
Surface Plasmon Resonance (SPR) fiber sensor research has grown since the first demonstration over 20 years ago into a rich and diverse field with a wide range of optical fiber architectures, plasmonic coatings, and excitation and interrogation methods. Yet, the large diversity of SPR fiber sensor designs has made it difficult to understand the advantages of each approach. Here, we review SPR fiber sensor architectures, covering the latest developments from optical fiber geometries to plasmonic coatings. By developing a systematic approach to fiber-based SPR designs, we identify and discuss future research opportunities based on a performance comparison of the different approaches for sensing applications. PMID:28025532
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with Viterbi's algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
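The one-dimensional dynamic-programming solution mentioned above can be sketched directly; the 3-tap blur filter is an illustrative stand-in for a real visual model, and no printer-distortion model is included.

```python
# A minimal sketch of 1-D least-squares halftoning by Viterbi-style
# dynamic programming: choose binary pixels so the blurred ("perceived")
# binary row best matches the blurred gray-scale row.
import itertools
import numpy as np

w = np.array([0.25, 0.5, 0.25])                 # toy causal visual (blur) filter
L = len(w)

def halftone_row(gray):
    """Binary row minimizing sum_n ((w*h)[n] - (w*gray)[n])^2."""
    target = np.convolve(gray, w)[: len(gray)]  # perceived gray-scale row
    cost = {(0,) * (L - 1): 0.0}                # state = last L-1 bits; zero padding
    back = []
    for n in range(len(gray)):
        new_cost, new_back = {}, {}
        for s, c in cost.items():
            for bit in (0, 1):
                window = np.array(s + (bit,), dtype=float)   # h[n-2], h[n-1], h[n]
                c_new = c + (w @ window[::-1] - target[n]) ** 2
                s_new = s[1:] + (bit,)
                if c_new < new_cost.get(s_new, np.inf):
                    new_cost[s_new], new_back[s_new] = c_new, (s, bit)
        cost = new_cost
        back.append(new_back)
    s = min(cost, key=cost.get)                 # best terminal state
    bits = []
    for table in reversed(back):                # trace back the optimal path
        s, bit = table[s]
        bits.append(bit)
    return np.array(bits[::-1])

print(halftone_row(np.full(12, 0.5)))           # roughly alternating for mid-gray
```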
Pani, Silvia; Saifuddin, Sarene C; Ferreira, Filipa I M; Henthorn, Nicholas; Seller, Paul; Sellin, Paul J; Stratmann, Philipp; Veale, Matthew C; Wilson, Matthew D; Cernik, Robert J
2017-09-01
Contrast-enhanced digital mammography (CEDM) is an alternative to conventional X-ray mammography for imaging dense breasts. However, conventional approaches to CEDM require a double exposure of the patient, implying a higher dose and a risk of incorrect image registration due to motion artifacts. A novel approach is presented, based on hyperspectral imaging, where a detector combining positional and high-resolution spectral information (in this case based on Cadmium Telluride) is used. This allows simultaneous acquisition of the two images required for CEDM. The approach was tested on a custom breast-equivalent phantom containing iodinated contrast agent (Niopam 150®). Two algorithms were used to obtain images of the contrast agent distribution: K-edge subtraction (KES), providing images of the distribution of the contrast agent with the background structures removed, and a dual-energy (DE) algorithm, providing an iodine-equivalent image and a water-equivalent image. The high energy resolution of the detector allowed the selection of two close-by energies, maximising the signal in KES images and enhancing the visibility of details with a low surface concentration of contrast agent. DE performed consistently better than KES in terms of the contrast-to-noise ratio of the details; moreover, it allowed a correct reconstruction of the surface concentration of the contrast agent in the iodine image. Comparison with CEDM with a conventional detector proved the superior performance of hyperspectral CEDM in terms of the image quality/dose tradeoff.
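Reduced to a single pixel with toy attenuation values around the iodine K-edge (about 33 keV), the two algorithms compare as follows; the KES estimate carries a small bias because the background does not cancel exactly, while DE solves for both materials.

```python
# A minimal single-pixel sketch of the two decomposition algorithms:
# K-edge subtraction (KES) and dual-energy (DE). Attenuation values are
# toy numbers bracketing the iodine K-edge, not measured coefficients.
import numpy as np

mu_iodine = np.array([6.0, 30.0])     # cm^2/g just below / above the K-edge
mu_water = np.array([0.30, 0.29])     # cm^2/g, nearly flat across the edge
rho_t_iodine, rho_t_water = 0.02, 5.0 # true area densities (g/cm^2)

# log-transmission measured in the two energy windows
p = mu_iodine * rho_t_iodine + mu_water * rho_t_water

# KES: the difference of log signals isolates the K-edge material,
# up to a small residual from the (nearly but not exactly flat) background.
iodine_kes = (p[1] - p[0]) / (mu_iodine[1] - mu_iodine[0])

# DE: solve the full 2x2 linear system for both materials exactly.
A = np.column_stack([mu_iodine, mu_water])
iodine_de, water_de = np.linalg.solve(A, p)
print("KES iodine:", iodine_kes, "DE iodine:", iodine_de, "DE water:", water_de)
```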
RNA interference in the clinic: challenges and future directions
Pecot, Chad V.; Calin, George A.; Coleman, Robert L.; Lopez-Berestein, Gabriel; Sood, Anil K.
2011-01-01
Inherent difficulties with blocking many desirable targets using conventional approaches have prompted many to consider using RNA interference (RNAi) as a therapeutic approach. Although exploitation of RNAi has immense potential as a cancer therapeutic, many physiological obstacles stand in the way of successful and efficient delivery. This Review explores current challenges to the development of synthetic RNAi-based therapies and considers new approaches to circumvent biological barriers, to avoid intolerable side effects and to achieve controlled and sustained release. PMID:21160526
NASA Astrophysics Data System (ADS)
Ishii, Hiroyuki; Kobayashi, Nobuhiko; Hirose, Kenji
2017-01-01
We present a wave-packet dynamical approach to charge transport using maximally localized Wannier functions based on density functional theory including van der Waals interactions. We apply it to the transport properties of pentacene and rubrene single crystals and show the temperature-dependent crossover from bandlike to thermally activated behavior as a function of the magnitude of external static disorder. We compare the results with those obtained by the conventional band and hopping models and with experiments.
Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.
Griffith, Daniel A; Peres-Neto, Pedro R
2006-10-01
Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider explicitly spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
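A minimal sketch of the topology-based construction follows; the rook-adjacency lattice, the number of retained eigenvectors, and the toy response are assumptions for illustration. Eigenvectors of the doubly centred connectivity matrix serve directly as spatial predictors in an ordinary regression.

import numpy as np

def moran_eigenvectors(C, k=3):
    # Eigenvectors of M C M with M = I - 11'/n; those with large positive
    # eigenvalues represent strong positive spatial autocorrelation.
    n = C.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(M @ C @ M)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]

# Binary rook adjacency on a 5x5 lattice.
n_side = 5
n = n_side * n_side
C = np.zeros((n, n))
for i in range(n_side):
    for j in range(n_side):
        p = i * n_side + j
        for di, dj in ((1, 0), (0, 1)):
            if i + di < n_side and j + dj < n_side:
                q = (i + di) * n_side + (j + dj)
                C[p, q] = C[q, p] = 1.0

E = moran_eigenvectors(C)
rng = np.random.default_rng(0)
y = 2.0 * E[:, 0] + rng.normal(0, 0.1, n)     # toy spatially structured response
X = np.column_stack([np.ones(n), E])          # eigenvectors as extra predictors
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))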
Wilkinson, Jeffrey S; Barake, Walid; Smith, Chris; Thakrar, Amar; Johri, Amer M
2016-08-01
Advances in ultrasonographic technology have allowed for hand-held cardiac ultrasonography (HHCU) units that fit into a physician's laboratory coat. Recently, studies to educate internal medicine residents have shown promise. The optimal duration and methodology for teaching HHCU skills has not been established. Over a 1-year period, internal medicine residents were recruited during their cardiology ward rotation into a single-centre nonblinded randomized trial. The 2 condensed teaching strategies were (1) a conventional ward-based program and (2) a technology-driven simulation-based strategy. Outcomes were evaluated by (1) an objective structured clinical examination (OSCE) to evaluate interpretation ability (assessing both type I and type II error rates) and (2) demonstration of HHCU skills graded by 2 level III echocardiographers. Twenty-four internal medicine residents were randomized. After teaching, the conventional teaching group had a significant absolute increase in the ability to make a singular correct diagnosis (20%; P < 0.001). In the technology arm, making a singular correct diagnosis increased 24% from baseline (P = 0.001). Interpretation skill was not significantly different between groups. The false-positive rate increased by an absolute 14% and 17% in the conventional and technology groups, respectively (P = 0.079 and P = 0.008). Our findings suggest that HHCU interpretation skills improve after either a conventional ward-based or a technology-driven approach. However, our study emphasizes the important limitations of both teaching programs, because we detected a trend toward an increase in the false-positive rate after both approaches. This suggests that a short duration of training may not be sufficient for HHCU to be performed in a safe manner. Copyright © 2016 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually hampers such nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
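For readers unfamiliar with the model class, a generic discrete-time Hammerstein-Wiener structure can be sketched in a few lines. The nonlinearities, the first-order linear block, and the step input below are illustrative assumptions, not the engine representation identified in the paper.

import numpy as np

def hammerstein_wiener(u, a=0.9, b=0.1,
                       f=lambda v: np.tanh(v),         # input nonlinearity
                       g=lambda w: w + 0.2 * w ** 3):  # output nonlinearity
    # Static nonlinearity -> linear dynamics -> static nonlinearity:
    # x[t] = a*x[t-1] + b*f(u[t]),  y[t] = g(x[t]).
    x, y = 0.0, []
    for ut in u:
        x = a * x + b * f(ut)
        y.append(g(x))
    return np.array(y)

t = np.arange(200)
u = np.where(t < 100, 0.5, 2.0)     # illustrative step in the demand input
y = hammerstein_wiener(u)
print(y[95:105].round(4))           # response around the step

The sandwich structure (static nonlinearity, linear dynamics, static nonlinearity) is what makes many control designs awkward when an inversion of f or g is required, which is the operation the proposed minimum-variance law avoids.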
An Approach to V&V of Embedded Adaptive Systems
NASA Technical Reports Server (NTRS)
Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth
2004-01-01
Rigorous Verification and Validation (V&V) techniques are essential for high-assurance systems. Lately, the performance of some of these systems has been enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Networks (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description, which detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov stability theory, detect unstable learning behavior in neural networks. Case studies based on a high-fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.
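As a rough illustration of the novelty-detection component: with a Gaussian kernel, Support Vector Data Description is equivalent to the one-class SVM, so scikit-learn's OneClassSVM can stand in for it here. The synthetic data, kernel choice, and nu value are assumptions for demonstration only.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=(500, 3))   # data from the nominal regime
novel = rng.normal(4.0, 0.5, size=(10, 3))      # abnormal data patterns

# nu upper-bounds the fraction of training points treated as outliers.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(nominal)
print(detector.predict(nominal).mean())   # close to +1: mostly accepted
print(detector.predict(novel))            # -1 flags novel (abnormal) patterns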
Social Image Tag Ranking by Two-View Learning
NASA Astrophysics Data System (ADS)
Zhuang, Jinfeng; Hoi, Steven C. H.
Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users can be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. In order to solve this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike conventional learning approaches, which usually assume parametric models, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results showed that the proposed method can be more effective than conventional approaches.
Feedback loops and temporal misalignment in component-based hydrologic modeling
NASA Astrophysics Data System (ADS)
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
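The temporal-misalignment issue can be reproduced with a toy two-store exchange. The sketch below is an assumption-laden simplification, not OpenMI code: a water store and a sediment store exchange mass across their interface, the sediment component advances on a coarser time step, and the water component linearly interpolates the sediment boundary value between exchanges.

import numpy as np

def run_coupled(T=100.0, dt_water=1.0, dt_sed=5.0, k=0.05):
    c_w, c_s_prev, c_s_next = 1.0, 0.0, 0.0    # initial concentrations
    t_last = 0.0
    for step in range(int(T / dt_water)):
        t = step * dt_water                    # exact float steps assumed
        if t % dt_sed == 0:                    # sediment component advances
            c_s_prev = c_s_next
            c_s_next += k * (c_w - c_s_next) * dt_sed
            t_last = t
        # boundary value seen by the water component, interpolated in time
        frac = (t - t_last) / dt_sed
        c_s_interp = (1 - frac) * c_s_prev + frac * c_s_next
        c_w -= k * (c_w - c_s_interp) * dt_water
    return c_w, c_s_next, c_w + c_s_next       # total mass should stay 1.0

print(run_coupled())

The printed total exposes the mass balance error introduced by the interpolation scheme, which is the quantity the study uses to compare schemes for temporally misaligned components.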
A High Performance Impedance-based Platform for Evaporation Rate Detection.
Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng
2016-10-17
This paper describes a novel impedance-based platform for detecting the evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant at various concentrations in solution were conducted for comparison purposes. The conventional weight-loss approach is the most straightforward, but time-consuming, measurement technique for evaporation rate detection. A clear disadvantage is that a large sample volume is required and multiple sample tests cannot be conducted at the same time. For the first time in the literature, an electrical impedance sensing chip is successfully applied to real-time evaporation investigation in a time-sharing, continuous, and automatic manner. Moreover, as little as 0.5 ml of test sample is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity, fast-response impedance sensing system is found to outperform the conventional weight-loss approach in terms of evaporation rate detection.
Ceramic matrix composite behavior -- Computational simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.
Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics-based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics-based equations at the slice level. The main advantage of this technique is that it can provide much greater detail in the response of composite behavior compared to a conventional micromechanics-based analysis while still maintaining very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and to illustrate them with select examples of laminated as well as woven composites.
Weak wide-band signal detection method based on small-scale periodic state of Duffing oscillator
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Xiao-peng; Li, Ping; Hao, Xin-hong
2018-03-01
The conventional Duffing oscillator weak signal detection method, which is based on a strong reference signal, has inherent deficiencies. To address these issues, the characteristics of the Duffing oscillator's phase trajectory in a small-scale periodic state are analyzed by introducing the theory of stopping oscillation systems. Based on this approach, a novel Duffing oscillator weak wide-band signal detection method is proposed. In this novel method, the reference signal is discarded, and the to-be-detected signal is directly used as the driving force. By calculating the cosine function of a phase space angle, a single Duffing oscillator can be used for weak wide-band signal detection instead of an array of uncoupled Duffing oscillators. Simulation results indicate that, compared with the conventional Duffing oscillator detection method, this approach performs better across frequency detection intervals and reduces the signal-to-noise ratio detection threshold, while improving the real-time performance of the system. Project supported by the National Natural Science Foundation of China (Grant No. 61673066).
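The detection principle can be sketched with the standard Holmes-Duffing form. The damping, drive amplitudes, and the simplified phase-angle statistic below are illustrative assumptions rather than the paper's exact criterion; the drive term stands in for the to-be-detected signal used directly as the driving force.

import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, s, gamma, k=0.5, omega=1.0):
    x, v = s
    drive = gamma * np.cos(omega * t)      # received signal as driving force
    return [v, -k * v + x - x ** 3 + drive]

def phase_angle_cosine(gamma):
    sol = solve_ivp(duffing, (0, 300), [0.0, 0.0], args=(gamma,),
                    t_eval=np.linspace(200, 300, 2000), rtol=1e-8)
    x, v = sol.y
    # cosine of the phase-space angle, averaged over the late trajectory
    return np.mean(x / np.sqrt(x ** 2 + v ** 2 + 1e-12))

# Weak drives keep the trajectory on a small-scale periodic orbit around one
# equilibrium (|mean cosine| near 1); strong drives cause large-scale motion.
for gamma in (0.02, 0.1, 0.8):
    print(gamma, round(phase_angle_cosine(gamma), 3))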
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics that do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task improved at least threefold compared with conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
Kainz, Hans; Lloyd, David G; Walsh, Henry P J; Carty, Christopher P
2016-05-01
In motion analysis, pelvis angles are conventionally calculated as the rotations between the pelvis and laboratory reference frame. This approach assumes that the participant's motion is along the anterior-posterior laboratory reference frame axis. When this assumption is violated, interpretation of pelvis angles becomes problematic. In this paper a new approach for calculating pelvis angles based on the rotations between the pelvis and an instantaneous progression reference frame is introduced. At every time-point, the tangent to the trajectory of the midpoint of the pelvis projected into the horizontal plane of the laboratory reference frame was used to define the anterior-posterior axis of the instantaneous progression reference frame. This new approach combined with the rotation-obliquity-tilt rotation sequence was compared to the conventional approach using the rotation-obliquity-tilt and tilt-obliquity-rotation sequences. Four different movement tasks performed by eight healthy adults were analysed. The instantaneous progression reference frame approach was the only approach that showed reliable and anatomically meaningful results for all analysed movement tasks (mean root-mean-square-differences below 5°, differences in pelvis angles at pre-defined gait events below 10°). Both rotation sequences combined with the conventional approach led to unreliable results as soon as the participant's motion was not along the anterior-posterior laboratory axis (mean root-mean-square-differences up to 30°, differences in pelvis angles at pre-defined gait events up to 45°). The instantaneous progression reference frame approach enables the gait analysis community to analyse pelvis angles for movements that do not follow the anterior-posterior axis of the laboratory reference frame. Copyright © 2016 Elsevier B.V. All rights reserved.
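A compact sketch of the instantaneous progression reference frame follows. The quarter-circle walking path and the yaw-only treatment are simplifying assumptions; the full method applies the rotation-obliquity-tilt sequence to all three pelvis angles.

import numpy as np

def progression_frame_rotation(midpoint_xy, pelvis_yaw):
    # midpoint_xy: (T, 2) pelvis-midpoint trajectory projected on the floor.
    # pelvis_yaw: (T,) pelvis axial rotation in the laboratory frame (rad).
    tangent = np.gradient(midpoint_xy, axis=0)           # trajectory tangent
    heading = np.arctan2(tangent[:, 1], tangent[:, 0])   # instantaneous AP axis
    rel = pelvis_yaw - heading
    return np.arctan2(np.sin(rel), np.cos(rel))          # wrap to [-pi, pi]

# Walking a quarter circle while the pelvis tracks the path direction:
T = 200
theta = np.linspace(0, np.pi / 2, T)
midpoint = np.column_stack([np.cos(theta), np.sin(theta)])
lab_yaw = theta + np.pi / 2                              # faces along the path
rel = np.degrees(progression_frame_rotation(midpoint, lab_yaw))
print(rel.round(2)[:5])   # ~0 deg throughout, while lab-frame yaw drifts 90 deg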
Rotating electrical machines: Poynting flow
NASA Astrophysics Data System (ADS)
Donaghy-Spargo, C.
2017-09-01
This paper presents a complementary approach to the traditional Lorentz and Faraday approaches that are typically adopted in the classroom when teaching the fundamentals of electrical machines—motors and generators. The approach adopted is based upon the Poynting vector, which illustrates the ‘flow’ of electromagnetic energy. It is shown through simple vector analysis that the energy-flux density flow approach can provide insight into the operation of electrical machines and it is also shown that the results are in agreement with conventional Maxwell stress-based theory. The advantage of this approach is its complementary completion of the physical picture regarding the electromechanical energy conversion process—it is also a means of maintaining student interest in this subject and as an unconventional application of the Poynting vector during normal study of electromagnetism.
[Psychiatric Rehabilitation - From the Linear Continuum Approach Towards Supported Inclusion].
Richter, Dirk; Hertig, Res; Hoffmann, Holger
2016-11-01
Background: For many decades, psychiatric rehabilitation in the German-speaking countries has followed a conventional linear continuum approach. Methods: Recent developments in important fields related to psychiatric rehabilitation (the UN Convention on the Rights of Persons with Disabilities, rehabilitation theory, empirical research) are reviewed. Results: Common to all developments in the reviewed fields are the principles of choice, autonomy and social inclusion. These principles contradict the conventional linear continuum approach. Conclusions: The linear continuum approach of psychiatric rehabilitation should be replaced by the "supported inclusion" approach. © Georg Thieme Verlag KG Stuttgart · New York.
Yoo, Seunghwan; Song, Ho Young; Lee, Junghoon; Jang, Cheol-Yong; Jeong, Hakgeun
2012-11-20
In this article, we introduce a simple fabrication method for SiO(2)-based thin diffractive optical elements (DOEs) that uses the conventional processes widely used in the semiconductor industry. Photolithography and an inductively coupled plasma etching technique are easy and cost-effective methods for fabricating subnanometer-scale, thin DOEs with a refractive index of 1.45, based on SiO(2). After fabricating the DOEs, we confirmed the shape of the output light emitted from a laser diode light source and applied the elements to a light-emitting diode (LED) module. The results represent a new approach to mass-producing DOEs and realizing a high-brightness LED module.
Exoskeleton for Soldier Enhancement Systems Feasibility Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jansen, J.F.
2000-09-28
The development of a successful exoskeleton for human performance augmentation (EHPA) will require a multi-disciplinary systems approach based upon sound biomechanics, power generation and actuation systems, controls technology, and operator interfaces. The ability to integrate key components into a system that enhances performance without impeding operator mobility is essential. The purpose of this study and report is to address the feasibility of building a fieldable EHPA. Previous efforts, while demonstrating progress and enhancing knowledge, have not approached the level required for a fully functional, fieldable system. It is doubtless that the technologies required for a successful exoskeleton have advanced, some of them significantly. The question to be addressed in this report is whether they have advanced to the point of making a system feasible in the next three to five years. In this study, the key technologies required to successfully build an exoskeleton have been examined. The primary focus has been on the key technologies of power sources, actuators, and controls. Power sources, including internal combustion engines, fuel cells, batteries, super capacitors, and hybrid sources, have been investigated and compared with respect to the exoskeleton application. Both conventional and non-conventional actuator technologies that could impact EHPA have been assessed. In addition to the current state of the art of actuators, the potential for near-term improvements using non-conventional actuators has also been addressed. Control strategies, their implications for the design approach, and the exoskeleton-to-soldier interface have also been investigated. In addition to these key subsystems and technologies, this report addresses technical concepts and issues relating to an integrated design. A recommended approach, based on the results of the study, is also presented.
Douglas, Ivor S
2017-05-01
Diagnosis of pulmonary infection, including hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP), in the critically ill patient remains a common and therapeutically challenging diagnosis with significant attributable morbidity, mortality, and cost. Current clinical approaches to surveillance, early detection, and conventional culture-based microbiology are inadequate for optimal targeted antibiotic treatment and stewardship. Efforts to enhance diagnosis of HAP and VAP and the impact of these novel approaches on rational antimicrobial selection and stewardship are the focus of recent studies reviewed here. Recent consensus guidelines for diagnosis and management of HAP and VAP are relatively silent on the potential role of novel rapid microbiological techniques and rely heavily on conventional culture of noninvasively obtained samples (including endotracheal aspirates). Novel rapid microbiological diagnostics, including nucleic acid amplification, mass spectrometry, and fluorescence microscopy-based technologies, are promising approaches for the future. Exhaled breath biomarkers, including measurement of volatile organic compounds (VOCs), represent a future approach. Further validation of novel diagnostic technology platforms will be required to evaluate their utility for enhancing diagnosis and guiding treatment of pulmonary infections in the critically ill. However, the integration of novel diagnostics for rapid microbial identification, resistance phenotyping, and antibiotic sensitivity testing into usual care practice could significantly transform the care of patients and potentially inform improved targeted antimicrobial selection, de-escalation, and stewardship.
Ahn, Si-Nae; Yoo, Eun-Young; Jung, Min-Ye; Park, Hae-Yean; Lee, Ji-Yeon; Choi, Yoo-Im
2017-01-01
The Cognitive Orientation to daily Occupational Performance (CO-OP) approach is a cognitive strategy-based intervention in occupational therapy. The aim was to investigate the effects of the CO-OP approach on occupational performance in individuals with hemiparetic stroke. This study was designed as a 5-week, randomized, single-blind trial. Forty-three participants with a diagnosis of first stroke were enrolled in this study. The participants were randomly assigned to the experimental group (n = 20) or the control group (n = 23). The experimental group received the CO-OP approach while the control group received conventional occupational therapy based on occupational performance components. This study measured the Canadian Occupational Performance Measure (COPM) and the Performance Quality Rating Scale (PQRS). Outcome measurements were performed at baseline and post-intervention. After training, COPM and PQRS scores on the trained task were significantly higher in the experimental group than in the control group. In addition, scores on the non-trained task were significantly higher in the experimental group than in the control group on the COPM and the PQRS. This study suggests that the CO-OP approach has beneficial effects on occupational performance in individuals with hemiparetic stroke, as well as positive effects on generalization and transfer of acquired skills.
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood-susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied in a pilot study of Gucheng County, central China, which was heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
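The probabilistic OWA combination can be sketched as follows. The Dirichlet weight sampling, the risk-averse order weights, and the random criterion maps are illustrative assumptions, not the study's calibrated inputs.

import numpy as np

def owa(values, order_weights):
    # Sort criterion scores per cell (descending), then apply order weights,
    # which encode the analyst's risk attitude.
    srt = np.sort(values, axis=-1)[..., ::-1]
    return srt @ order_weights

rng = np.random.default_rng(42)
n_cells, n_criteria, n_draws = 1000, 6, 500
scores = rng.random((n_cells, n_criteria))     # standardized criterion maps

# Risk-averse order weights emphasize the worst (lowest) sorted scores.
order_w = np.array([0.05, 0.05, 0.10, 0.15, 0.25, 0.40])

sus = np.empty((n_draws, n_cells))
for d in range(n_draws):
    w = rng.dirichlet(np.ones(n_criteria))     # uncertain criteria weights
    sus[d] = owa(scores * (n_criteria * w), order_w)

mean_map = sus.mean(axis=0)                    # ensemble susceptibility
std_map = sus.std(axis=0)                      # per-cell uncertainty
print(mean_map[:3].round(3), std_map[:3].round(3))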
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with an accuracy increase of approximately 3.5% across all 11 subjects, and is more effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rodionov, S. N.; Martin, J. H.
1999-07-01
A novel approach to climate forecasting on an interannual time scale is described. The approach is based on concepts and techniques from artificial intelligence and expert systems. The suitability of this approach to climate diagnostics and forecasting problems and its advantages compared with conventional forecasting techniques are discussed. The article highlights some practical aspects of the development of climatic expert systems (CESs) and describes an implementation of such a system for the North Atlantic (CESNA). Particular attention is paid to the content of CESNA's knowledge base and those conditions that make climatic forecasts one to several years in advance possible. A detailed evaluation of the quality of the experimental real-time forecasts made by CESNA for the winters of 1995-1996, 1996-1997 and 1997-1998 is presented.
Parameterising User Uptake in Economic Evaluations: The role of discrete choice experiments.
Terris-Prestholt, Fern; Quaife, Matthew; Vickerman, Peter
2016-02-01
Model-based economic evaluations of new interventions have shown that user behaviour (uptake) is a critical driver of overall impact achieved. However, early economic evaluations, prior to introduction, often rely on assumed levels of uptake based on expert opinion or uptake of similar interventions. In addition to the likely uncertainty surrounding these uptake assumptions, they also do not allow for uptake to be a function of product, intervention, or user characteristics. This letter proposes using uptake projections from discrete choice experiments (DCE) to better parameterize uptake and substitution in cost-effectiveness models. A simple impact model is developed and illustrated using an example from the HIV prevention field in South Africa. Comparison between the conventional approach and the DCE-based approach shows that, in our example, DCE-based impact predictions varied by up to 50% from conventional estimates and provided far more nuanced projections. In the absence of observed uptake data and to model the effect of variations in intervention characteristics, DCE-based uptake predictions are likely to greatly improve models parameterizing uptake solely based on expert opinion. This is particularly important for global and national level decision making around introducing new and probably more expensive interventions, particularly where resources are most constrained. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
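A minimal sketch of the proposed chain, from DCE coefficients through uptake to impact, is given below; the attribute coefficients, products, and epidemiological constants are invented for illustration.

import numpy as np

# Hypothetical DCE-estimated utility coefficients per attribute:
BETA = {"effective": 1.2, "cost_per_use": -0.8, "discreet": 0.5}

def uptake_shares(products):
    # Multinomial logit shares across the products plus an opt-out option.
    utils = [sum(BETA[a] * v for a, v in p.items()) for p in products]
    expu = np.exp(np.array(utils + [0.0]))     # opt-out utility fixed at 0
    return expu / expu.sum()

def infections_averted(products, population=1e6, risk=0.02, efficacy=0.6):
    shares = uptake_shares(products)[:-1]      # drop the opt-out share
    return population * risk * efficacy * shares.sum()

existing = {"effective": 0.7, "cost_per_use": 0.2, "discreet": 0.0}
new_product = {"effective": 0.9, "cost_per_use": 0.5, "discreet": 1.0}
print(uptake_shares([existing, new_product]).round(3))
print(infections_averted([existing, new_product]))

Because shares are computed jointly over all products and an opt-out, introducing a new product automatically redistributes demand, which is how substitution enters the impact model rather than being assumed.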
ERIC Educational Resources Information Center
Fairweather, John R.; Hunt, Lesley M.; Rosin, Chris J.; Campbell, Hugh R.
2009-01-01
Within the political economy of agriculture and agrofood literatures there are examples of approaches that reject simple dichotomies between alternatives and the mainstream. In line with such approaches, we challenge the assumption that alternative agriculture, and its attendant improved environmental practices, alternative management styles, less…
Keystroke dynamics in the pre-touchscreen era
Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A.
2013-01-01
Biometric authentication seeks to measure an individual’s unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches, for example, their substantial infrastructure costs and intrusive nature make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals’ typing signatures. Such variables may include measurement of the period between key presses and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis of the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts. PMID:24391568
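A minimal sketch of the timing features and a naive statistical verifier follows; the synthetic passphrase timings and the z-score acceptance rule are assumptions for illustration, not a production protocol.

import numpy as np

def features(events):
    # events: list of (key, press_time, release_time) in seconds. Features
    # are dwell times (hold per key) and flight times (press-to-press gaps).
    dwell = np.array([r - p for _, p, r in events])
    press = np.array([p for _, p, _ in events])
    return np.concatenate([dwell, np.diff(press)])

def matches_template(sample, mean, std, z_max=3.0):
    # Accept if every feature lies within z_max template standard deviations.
    z = np.abs(sample - mean) / (std + 1e-9)
    return bool(np.all(z < z_max))

# Enrollment: repeated typings of an 8-key passphrase by the genuine user.
rng = np.random.default_rng(7)
enroll = np.stack([features([("a", t, t + rng.normal(0.09, 0.01))
                             for t in np.cumsum(rng.normal(0.18, 0.02, 8))])
                   for _ in range(20)])
mu, sd = enroll.mean(axis=0), enroll.std(axis=0)
probe = enroll[0] + rng.normal(0, 0.005, enroll.shape[1])
print(matches_template(probe, mu, sd))   # True for a genuine-like probe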
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, and also considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Method Chan-Vese (C-V) model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, Otsu's algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
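The NBR part of the difference-image construction can be sketched directly; the band reflectances and the 0.44 cutoff below are illustrative (the cutoff is a commonly cited dNBR high-severity threshold, not necessarily the one used here).

import numpy as np

def nbr(nir, swir):
    # Normalized Burn Ratio from near-infrared and shortwave-infrared bands.
    return (nir - swir) / (nir + swir + 1e-9)

def burn_difference_image(nir_pre, swir_pre, nir_post, swir_post):
    # dNBR = NBR_pre - NBR_post; large positive values indicate burn scars.
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

rng = np.random.default_rng(3)
nir_pre = rng.uniform(0.3, 0.5, (64, 64))
swir_pre = rng.uniform(0.1, 0.2, (64, 64))
nir_post, swir_post = nir_pre.copy(), swir_pre.copy()
nir_post[20:40, 20:40] = 0.1      # simulated burn: NIR drops,
swir_post[20:40, 20:40] = 0.4     # SWIR rises
dnbr = burn_difference_image(nir_pre, swir_pre, nir_post, swir_post)
print((dnbr > 0.44).sum(), "candidate burn pixels")

In the paper's pipeline this difference image, fused with CVA and NDVI, is what the modified C-V level-set curve then segments.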
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
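Mean-tail-dose itself can be sketched in a few lines; the synthetic dose vector is an assumption, and the functions below only evaluate the statistic rather than set up the full treatment-plan optimization.

import numpy as np

def upper_mean_tail_dose(dose, volume_fraction):
    # Average dose over the hottest volume_fraction of voxels. As a
    # CVaR-type statistic it is convex in the dose vector, unlike
    # dose-at-volume, which is what makes it usable as a convex objective.
    n = len(dose)
    k = max(1, int(round(volume_fraction * n)))
    return np.partition(dose, n - k)[n - k:].mean()

def lower_mean_tail_dose(dose, volume_fraction):
    # Average dose over the coldest volume_fraction of voxels (concave).
    k = max(1, int(round(volume_fraction * len(dose))))
    return np.partition(dose, k - 1)[:k].mean()

rng = np.random.default_rng(0)
oar_dose = rng.gamma(4.0, 5.0, 10000)   # synthetic organ-at-risk doses (Gy)
# Bounding the hottest 2% from above also constrains the D2% dose-at-volume,
# since the mean of the hottest tail is never below its minimum:
print(upper_mean_tail_dose(oar_dose, 0.02))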
NASA Astrophysics Data System (ADS)
Gu, C.; Li, J.; Toksoz, M. N.
2013-12-01
Induced seismicity occurs both in conventional oil/gas fields due to production and water injection and in unconventional oil/gas fields due to hydraulic fracturing. Source mechanisms of these induced earthquakes are of great importance for understanding their causes and the physics of the seismic processes in reservoirs. Previous research on the analysis of induced seismic events in conventional oil/gas fields assumed a double couple (DC) source mechanism. However, recent studies have shown a non-negligible percentage of a non-double-couple (non-DC) component of the source moment tensor in hydraulic fracturing events (Šílený et al., 2009; Warpinski and Du, 2010; Song and Toksöz, 2011). In this study, we determine the full moment tensor of induced seismicity data in a conventional oil/gas field and of hydrofrac events in an unconventional oil/gas field. Song and Toksöz (2011) developed a full-waveform-based complete moment tensor inversion method to investigate non-DC source mechanisms. We apply this approach to induced seismicity data from a conventional gas field in Oman. In addition, this approach is also applied to hydrofrac microseismicity data monitored by downhole geophones in four wells in the US. We compare the source mechanisms of induced seismicity in the two different types of gas fields and explain the differences in terms of physical processes.
NASIS data base management system - IBM 360/370 OS MVT implementation. 1: Installation standards
NASA Technical Reports Server (NTRS)
1973-01-01
The installation standards for the NASA Aerospace Safety Information System (NASIS) data base management system are presented. The standard approach to preparing systems documentation and the program design and coding rules and conventions are outlined. Included are instructions for preparing all major specifications and suggestions for improving the quality and efficiency of the programming task.
Qiu, Zeyuan
2009-11-01
A science-based geographic information system (GIS) approach is presented to target critical source areas in watersheds for conservation buffer placement. Critical source areas are the intersection of hydrologically sensitive areas and pollutant source areas in watersheds. Hydrologically sensitive areas are areas that actively generate runoff in the watershed and are derived using a modified topographic index approach based on variable source area hydrology. Pollutant source areas are the areas in watersheds that are actively and intensively used for such activities as agricultural production. The method is applied to the Neshanic River watershed in Hunterdon County, New Jersey. The capacity of the topographic index in predicting the spatial pattern of runoff generation and the runoff contribution to stream flow in the watershed is evaluated. A simple cost-effectiveness assessment is conducted to compare the conservation buffer placement scenario based on this GIS method to conventional riparian buffer scenarios for placing conservation buffers in agricultural lands in the watershed. The results show that the topographic index reasonably predicts the runoff generation in the watershed. The GIS-based conservation buffer scenario appears to be more cost-effective than the conventional riparian buffer scenarios.
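The hydrologically-sensitive-area step can be sketched with the classic topographic index; the synthetic terrain arrays, the 80th-percentile cutoff, and the simplified specific-area computation are illustrative assumptions, not the paper's modified index.

import numpy as np

def topographic_index(upslope_cells, slope_rad, cell_size=30.0):
    # TI = ln(a / tan(beta)), with a the specific upslope area per unit
    # contour width; high TI marks cells prone to saturation and runoff.
    a = upslope_cells * cell_size ** 2 / cell_size
    return np.log(a / (np.tan(slope_rad) + 1e-6))

rng = np.random.default_rng(5)
upslope = rng.integers(1, 500, (100, 100)).astype(float)
slope = np.deg2rad(rng.uniform(0.5, 15.0, (100, 100)))
agricultural = rng.random((100, 100)) < 0.3       # pollutant source areas

ti = topographic_index(upslope, slope)
hydro_sensitive = ti > np.percentile(ti, 80)      # runoff-generating cells
critical_source = hydro_sensitive & agricultural  # intersection: buffer targets
print(critical_source.sum(), "candidate cells for conservation buffers")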
Stirling, Andy
2008-04-01
This paper examines apparent tensions between "science-based," "precautionary," and "participatory" approaches to decision making on risk. Partly by reference to insights currently emerging in evolutionary studies, the present paper looks for ways to reconcile some of the contradictions. First, I argue that technological evolution is a much more plural and open-ended process than is conventionally supposed. Risk politics is thus implicitly as much about social choice of technological pathways as narrow issues of safety. Second, it is shown how conventional "science-based" risk assessment techniques address only limited aspects of incomplete knowledge in complex, dynamic, evolutionary processes. Together, these understandings open the door to more sophisticated, comprehensive, rational, and robust decision-making processes. Despite their own limitations, it is found that precautionary and participatory approaches help to address these needs. A concrete framework is outlined through which the synergies can be more effectively harnessed. By this means, we can hope simultaneously to improve scientific rigor and democratic legitimacy in risk governance.
Butler, Helen; Bowes, Glenn; Drew, Sarah; Glover, Sara; Godfrey, Celia; Patton, George; Trafford, Lea; Bond, Lyndal
2010-03-01
Schools and school systems are increasingly asked to use evidence-based strategies to promote the health and well-being of students. The dissemination of school-based health promotion research, however, offers particular challenges to conventional approaches to dissemination. Schools and education systems are multifaceted organizations that sit within constantly shifting broader contexts. This article argues that health promotion dissemination needs to be rethought for school communities as complex systems and that this requires understanding and harnessing the dynamic ecology of the sociopolitical context. In developing this argument, the authors draw on their experience of the dissemination process of a multilevel school-based intervention in a complex educational context. Building on this experience, they argue for the need to move beyond conventional dissemination strategies to a focus on active partnerships between developers and users of school-based intervention research and offer a conceptual tool for planning dissemination.
Simulating effects of microtopography on wetland specific yield and hydroperiod
Summer, David M.; Wang, Xixi
2011-01-01
Specific yield and hydroperiod have proven to be useful parameters in hydrologic analysis of wetlands. Specific yield is a critical parameter to quantitatively relate hydrologic fluxes (e.g., rainfall, evapotranspiration, and runoff) and water level changes. Hydroperiod measures the temporal variability and frequency of land-surface inundation. Conventionally, hydrologic analyses used these concepts without considering the effects of land surface microtopography and assumed a smoothly-varying land surface. However, these microtopographic effects could result in small-scale variations in land surface inundation and water depth above or below the land surface, which in turn affect ecologic and hydrologic processes of wetlands. The objective of this chapter is to develop a physically-based approach for estimating specific yield and hydroperiod that enables the consideration of microtopographic features of wetlands, and to illustrate the approach at sites in the Florida Everglades. The results indicate that the physically-based approach can better capture the variations of specific yield with water level, in particular when the water level falls between the minimum and maximum land surface elevations. The suggested approach for hydroperiod computation predicted that the wetlands might be completely dry or completely wet much less frequently than suggested by the conventional approach neglecting microtopography. One reasonable generalization may be that the hydroperiod approaches presented in this chapter can be a more accurate prediction tool for water resources management to meet the specific hydroperiod threshold as required by a species of plant or animal of interest.
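The microtopographic effect on both parameters can be sketched as follows; the synthetic hummocky surface, soil specific yield, and water-level record are assumptions for illustration, not the chapter's Everglades data.

import numpy as np

def effective_specific_yield(surface_elev, water_level, sy_soil=0.25):
    # Inundated cells respond like open water (specific yield ~1); cells with
    # the water table below ground respond with the soil specific yield. The
    # areal mix varies as the water level crosses the microtopography.
    f = (surface_elev < water_level).mean()
    return f * 1.0 + (1 - f) * sy_soil

def hydroperiod(surface_elev, water_levels):
    # Fraction of time each cell is inundated over the water-level record.
    return np.mean(water_levels[:, None, None] > surface_elev[None], axis=0)

rng = np.random.default_rng(11)
dem = rng.normal(0.0, 0.10, (50, 50))    # hummocky surface elevations (m)
levels = 0.05 * np.sin(np.linspace(0, 2 * np.pi, 365)) \
         + rng.normal(0, 0.02, 365)      # one year of daily water levels
for wl in (-0.15, 0.0, 0.15):
    print(wl, round(effective_specific_yield(dem, wl), 3))
hp = hydroperiod(dem, levels)
print("fully wet cells:", (hp == 1.0).mean(), " fully dry:", (hp == 0.0).mean())

The last line echoes the chapter's finding: once microtopography is represented, the wetland is rarely completely wet or completely dry at any one time.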
Extra-dimensional Demons: A method for incorporating missing tissue in deformable image registration
Nithiananthan, Sajendra; Schafer, Sebastian; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Reh, Douglas D.; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: A deformable registration method capable of accounting for missing tissue (e.g., excision) is reported for application in cone-beam CT (CBCT)-guided surgical procedures. Excisions are identified by a segmentation step performed simultaneous to the registration process. Tissue excision is explicitly modeled by increasing the dimensionality of the deformation field to allow motion beyond the dimensionality of the image. The accuracy of the model is tested in phantom, simulations, and cadaver models. Methods: A variant of the Demons deformable registration algorithm is modified to include excision segmentation and modeling. Segmentation is performed iteratively during the registration process, with initial implementation using a threshold-based approach to identify voxels corresponding to “tissue” in the moving image and “air” in the fixed image. With each iteration of the Demons process, every voxel is assigned a probability of excision. Excisions are modeled explicitly during registration by increasing the dimensionality of the deformation field so that both deformations and excisions can be accounted for by in- and out-of-volume deformations, respectively. The out-of-volume (i.e., fourth) component of the deformation field at each voxel carries a magnitude proportional to the excision probability computed in the excision segmentation step. The registration accuracy of the proposed “extra-dimensional” Demons (XDD) and conventional Demons methods was tested in the presence of missing tissue in phantom models, simulations investigating the effect of excision size on registration accuracy, and cadaver studies emulating realistic deformations and tissue excisions imparted in CBCT-guided endoscopic skull base surgery. Results: Phantom experiments showed the normalized mutual information (NMI) in regions local to the excision to improve from 1.10 for the conventional Demons approach to 1.16 for XDD, and qualitative examination of the resulting images revealed major differences: the conventional Demons approach imparted unrealistic distortions in areas around tissue excision, whereas XDD provided accurate “ejection” of voxels within the excision site and maintained the registration accuracy throughout the rest of the image. Registration accuracy in areas far from the excision site (e.g., > ∼5 mm) was identical for the two approaches. Quantitation of the effect was consistent in analysis of NMI, normalized cross-correlation (NCC), target registration error (TRE), and accuracy of voxels ejected from the volume (true-positive and false-positive analysis). The registration accuracy for conventional Demons was found to degrade steeply as a function of excision size, whereas XDD was robust in this regard. Cadaver studies involving realistic excision of the clivus, vidian canal, and ethmoid sinuses demonstrated similar results, with unrealistic distortion of anatomy imparted by conventional Demons and accurate ejection and deformation for XDD. Conclusions: Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate “ejection” of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. 
The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance. PMID:22957637
3D printing technology used in severe hip deformity.
Wang, Shanshan; Wang, Li; Liu, Yan; Ren, Yongfang; Jiang, Li; Li, Yan; Zhou, Hao; Chen, Jie; Jia, Wenxiao; Li, Hui
2017-09-01
This study was designed to assess the use of a 3D printing technique in total hip arthroplasty (THA) for severe hip deformities, where new and improved approaches are needed. THAs were performed from January 2015 to December 2016. Bioprosthesis artificial hip joints were used in both conventional and 3D printing hip arthroplasties. A total of 74 patients (57 cases undergoing conventional hip replacements and 17 undergoing 3D printing hip replacements) were followed-up for an average of 24 months. The average age of the patients was 62.7 years. Clinical data between the patients treated with different approaches were compared. Results showed that the time to postoperative weight bearing and the Harris scores of the patients in the 3D printing group were better than those for patients in the conventional hip replacement group. Unfortunately, the postoperative infection and loosening rates were higher in the 3D printing group. However, there were no significant differences in femoral neck anteversion, neck shaft, acetabular or sharp angles between ipsilateral and contralateral sides in the 3D printing group (P>0.05). The femoral neck anteversion angle was significantly different between the two sides in the conventional hip replacement group (P<0.05). Based on these results, we suggest that the 3D printing approach provides a better short-term curative effect that is more consistent with the physiological structure and anatomical characteristics of the patient, and we anticipate that its use will help improve the lives of many patients.
NASA Astrophysics Data System (ADS)
Arumugam, S.; Ramakrishna, P.; Sangavi, S.
2018-02-01
Improvements in solar heating technology are gaining attention, especially for solar parabolic collectors. Solar heating in conventional parabolic collectors is achieved by concentrating radiation on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce the convection losses and also to improve the aperture area, we designed a tube with a cavity. This study compares the performance of the conventional tube and the cavity-model tube. The performance formulae for the cavity model were derived from those of the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The cavity-model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a worthwhile design consideration.
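The abstract does not reproduce the derived formulae; as an illustration only, the standard Hottel-Whillier-Bliss efficiency relation (written here in its basic flat-plate form, while the paper derives analogous expressions for the parabolic cavity receiver) shows how a lower overall loss coefficient U_L, the cavity model's reported advantage, feeds into efficiency. All numerical inputs below are assumed placeholders.

```python
# eta = F_R * (tau * alpha) - F_R * U_L * (T_in - T_amb) / G_T
def collector_efficiency(F_R, tau_alpha, U_L, T_in, T_amb, G_T):
    """Instantaneous collector efficiency (dimensionless)."""
    return F_R * tau_alpha - F_R * U_L * (T_in - T_amb) / G_T

# Hypothetical inputs: same optics, lower loss coefficient for the cavity tube.
eta_conventional = collector_efficiency(F_R=0.85, tau_alpha=0.80, U_L=6.0,
                                        T_in=350.0, T_amb=300.0, G_T=800.0)
eta_cavity = collector_efficiency(F_R=0.85, tau_alpha=0.80, U_L=4.5,
                                  T_in=350.0, T_amb=300.0, G_T=800.0)
print(f"conventional: {eta_conventional:.3f}, cavity: {eta_cavity:.3f}")
```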
Conventionalism, structuralism and neo-Kantianism in Poincaré's philosophy of science
NASA Astrophysics Data System (ADS)
Ivanova, Milena
2015-11-01
Poincaré is well known for his conventionalism and structuralism. However, the relationship between these two theses and their place in Poincaré's epistemology of science remain puzzling. In this paper I show the scope of Poincaré's conventionalism and its position in Poincaré's hierarchical approach to scientific theories. I argue that for Poincaré scientific knowledge is relational and made possible by synthetic a priori, empirical and conventional elements, which, however, are not chosen arbitrarily. By examining his geometric conventionalism, his hierarchical account of science and defence of continuity in theory change, I argue that Poincaré defends a complex structuralist position based on synthetic a priori and conventional elements, the mind-dependence of which precludes epistemic access to mind-independent structures. The object of mathematical theories is not to reveal to us the real nature of things; that would be an unreasonable claim. Their only object is to coordinate the physical laws with which physical experiments make us acquainted, the enunciation of which, without the aid of mathematics, we should be unable to effect. (Poincaré, 2001, 117)
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
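A minimal sketch of the two rules described above, under an assumed convex total-cost model (fixed cost plus a per-subject cost that rises with n, e.g., because recruitment gets harder); the rules themselves are as stated in the abstract: minimize cost(n)/n, or minimize cost(n)/sqrt(n). All cost parameters are hypothetical.

```python
import numpy as np

def total_cost(n, c_fixed=50_000.0, c_subject=500.0, c_scale=0.05):
    # assumed convex cost model: fixed cost + linear + rising per-subject cost
    return c_fixed + c_subject * n + c_scale * n ** 2

n = np.arange(1, 5001)
rule1 = n[np.argmin(total_cost(n) / n)]            # minimize average cost/subject
rule2 = n[np.argmin(total_cost(n) / np.sqrt(n))]   # minimize cost per sqrt(n)
print(f"rule 1 (cost/n): n = {rule1}; rule 2 (cost/sqrt(n)): n = {rule2}")
```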
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the tolerable biological level for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation, at the present time, is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of an exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals: toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e., BHI > 1) can be determined by iterative simulation.
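A minimal sketch of the BHI arithmetic described above, with hypothetical biomarker concentrations standing in for PBTK-simulated values; the BEI numbers are placeholders, not actual exposure indices.

```python
# BHI = sum_i C_i / BEI_i over mixture components, by analogy with the
# conventional hazard index. Concentrations would come from the
# interaction-aware PBTK simulation; here they are invented numbers.
def biological_hazard_index(biomarker_conc, bei):
    return sum(biomarker_conc[k] / bei[k] for k in biomarker_conc)

conc = {"toluene": 0.4, "m-xylene": 0.9, "ethylbenzene": 0.6}   # mg/L, hypothetical
bei = {"toluene": 1.0, "m-xylene": 1.5, "ethylbenzene": 1.2}    # mg/L, hypothetical
bhi = biological_hazard_index(conc, bei)
print(f"BHI = {bhi:.2f} ({'acceptable' if bhi <= 1 else 'exceeds unity'})")
```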
Kwon, Yeondae; Natori, Yukikazu
2017-01-01
The proportion of the elderly population in most countries worldwide is increasing dramatically. Therefore, social interest in the fields of health, longevity, and anti-aging has been increasing as well. However, the basic research results obtained from a reductionist approach in biology and a bioinformatic approach in genome science have limited usefulness for generating insights on future health, longevity, and anti-aging-related research on a case-by-case basis. We propose a new approach that uses our literature mining technique and bioinformatics, which leads to a better perspective on research trends by providing an expanded knowledge base to work from. We demonstrate that our approach provides useful information that deepens insights on future trends in ways that differ from conventionally obtained data, and this methodology is already paving the way for a new field in aging-related research based on literature mining. One compelling example of this is how our new approach can be a useful tool in drug repositioning. PMID:28817730
Formal hardware verification of digital circuits
NASA Technical Reports Server (NTRS)
Joyce, J.; Seger, C.-J.
1991-01-01
The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.
DETECTION OF BIFIDOBACTERIA IN DIFFERENT ANIMAL FECES USING SPECIES-SPECIFIC 16S rDNA ASSAYS
The use of bifidobacteria in microbial water quality studies has been sporadic, in part due to the limitations associated with conventional approaches used to isolate and enumerate these bacteria. Recently, a number of culture independent, PCR-based assays have been suggested for...
Chen, Tzu-Ping; Yen-Chu; Wu, Yi-Cheng; Yeh, Chi-Ju; Liu, Chien-Ying; Hsieh, Ming-Ju; Yuan, Hsu-Chia; Ko, Po-Jen; Liu, Yun-Hen
2015-12-01
Transumbilical single-port surgery has been associated with less postoperative pain and offers better cosmetic outcomes than conventional 3-port laparoscopic surgery. This study compares the safety and efficacy of transumbilical thoracoscopy and conventional thoracoscopy for lung wedge resection. The animals (n = 16) were randomly assigned to the transumbilical thoracoscopic approach group (n = 8) or conventional thoracoscopic approach group (n = 8). Transumbilical lung resection was performed via an umbilical incision and a diaphragmatic incision. In the conventional thoracoscopic group, lung resection was completed through a thoracic incision. For both procedures, we compared the surgical outcomes, for example, operating time and operative complications; physiologic parameters, for example, respiratory rate and body temperature; inflammatory parameters, for example, white blood cell count; and pulmonary parameters, for example, arterial blood gas levels. The animals were euthanized 2 weeks after the surgery for gross and histologic evaluations. The lung wedge resection was successfully performed in all animals. There was no significant difference in the mean operating times or complications between the transumbilical and the conventional thoracoscopic approach groups. With regard to the physiologic impact of the surgeries, the transumbilical approach was associated with significant elevations in body temperature on postoperative day 1, when compared with the standard thoracoscopic approach. This study suggests that both approaches for performing lung wedge resection were comparable in efficacy and postoperative complications. © The Author(s) 2014.
Effects of feedstock characteristics on microwave-assisted pyrolysis - A review.
Zhang, Yaning; Chen, Paul; Liu, Shiyu; Peng, Peng; Min, Min; Cheng, Yanling; Anderson, Erik; Zhou, Nan; Fan, Liangliang; Liu, Chenghui; Chen, Guo; Liu, Yuhuan; Lei, Hanwu; Li, Bingxi; Ruan, Roger
2017-04-01
Microwave-assisted pyrolysis is an important approach to obtain bio-oil from biomass. Similar to conventional electrical heating pyrolysis, microwave-assisted pyrolysis is significantly affected by feedstock characteristics. However, microwave heating has its unique features which strongly depend on the physical and chemical properties of biomass feedstock. In this review, the relationships among heating, bio-oil yield, and feedstock particle size, moisture content, inorganics, and organics in microwave-assisted pyrolysis are discussed and compared with those in conventional electrical heating pyrolysis. The quantitative analysis of data reported in the literature showed a strong contrast between the conventional processes and microwave-based processes. Microwave-assisted pyrolysis is a relatively new process with limited research compared with conventional electrical heating pyrolysis. The lack of understanding of some observed results warrants further in-depth fundamental research. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yong; Department of Materials Science and Engineering, University of Tennessee, Knoxville, TN 37996; Liu Fengxiao
Cemented carbides with a functionally graded structure have significantly improved mechanical properties and lifetimes in cutting, drilling and molding. In this work, WC-6 wt.% Co cemented carbides with a three-layer graded structure (surface layer rich in WC, mid layer rich in Co and the inner part of the average composition) were prepared by carburizing pre-sintered η-phase-containing cemented carbides. The three-point bending fatigue tests based on the total-life approach were conducted on both WC-6 wt.% Co functionally graded cemented carbides (FGCC) and conventional WC-6 wt.% Co cemented carbides. The functionally graded cemented carbide shows a slightly higher fatigue limit (≈100 MPa) than the conventional ones under the present testing conditions. However, the fatigue crack nucleation behavior of FGCC is different from that of the conventional ones. The crack nucleates preferentially along the Co-gradient and perpendicular to the tension surface in FGCC, while parallel to the tension surface in conventional cemented carbides.
Stringent homology-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions
2014-01-01
Background H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. Results We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result, and clearly show that our approach has better performance in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs which might be useful for many related studies. Based on our analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent homology-based approach, we have discovered several interesting properties which are reported here for the first time. We find that both host proteins and pathogen proteins involved in the host-pathogen PPIs tend to be hubs in their own intra-species PPI network. Also, both host and pathogen proteins involved in host-pathogen PPIs tend to have longer primary sequences, tend to have more domains, tend to be more hydrophilic, etc. In addition, the protein domains from both host and pathogen proteins involved in host-pathogen PPIs tend to have lower charge and tend to be more hydrophilic. Conclusions Our stringent homology-based prediction approach provides a better strategy for predicting PPIs between eukaryotic hosts and prokaryotic pathogens than a conventional homology-based approach. The properties we have observed from the predicted H. sapiens-M. tuberculosis H37Rv PPI network are useful for understanding inter-species host-pathogen PPI networks and provide novel insights for host-pathogen interaction studies. Reviewers This article was reviewed by Michael Gromiha, Narayanaswamy Srinivasan and Thomas Dandekar. PMID:24708540
A parallel algorithm for generation and assembly of finite element stiffness and mass matrices
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Carmona, E. A.; Nguyen, D. T.; Baddourah, M. A.
1991-01-01
A new algorithm is proposed for parallel generation and assembly of the finite element stiffness and mass matrices. The proposed assembly algorithm is based on a node-by-node approach rather than the more conventional element-by-element approach. The new algorithm's generality and computation speed-up when using multiple processors are demonstrated for several practical applications on multi-processor Cray Y-MP and Cray 2 supercomputers.
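A minimal sketch of the node-by-node idea for 1D two-node bar elements with unit stiffness: each global matrix row is assembled independently from the elements incident on that node, so the per-row loop can run in parallel without write conflicts, unlike element-by-element assembly where elements scatter into shared rows. The mesh and stiffness values are illustrative assumptions.

```python
import numpy as np

def assemble_row(node, elements, k=1.0):
    """Build one row of the global stiffness matrix for a 1D bar mesh.

    Each element (a, b) contributes k*[[1, -1], [-1, 1]] to nodes a and b;
    only the elements touching `node` affect this row.
    """
    n_nodes = max(max(e) for e in elements) + 1
    row = np.zeros(n_nodes)
    for a, b in elements:
        if node in (a, b):
            other = b if node == a else a
            row[node] += k        # diagonal contribution
            row[other] -= k       # off-diagonal contribution
    return row

elements = [(0, 1), (1, 2), (2, 3)]   # 3-element bar mesh
# This comprehension is embarrassingly parallel: one independent task per row.
K = np.array([assemble_row(i, elements) for i in range(4)])
```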
Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.
Fang, Cheng; Xiao, Zhiyan
2016-01-01
Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods and two alternative approaches have been exploited. One associates with computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for the conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR, and in particular, covers its application in kinase studies.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
The web-based virtual tour has become a desirable and demanded application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth, and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots of conventional photos.
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken with the objective of checking the effectiveness of spectral similarity measures for developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, classification of pixels is obtained by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Though these algorithms are readily used, little emphasis has been placed on the use of various spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here a successful attempt has been made to develop precise crop spectra based on spectral similarity. Because the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data; hence, in contrast to the conventional method, data precision was evaluated explicitly in the present work. The effectiveness of the developed precise field spectra was evaluated by spectral discrimination measures, which yielded higher discrimination values compared to spectra developed conventionally. Overall classification accuracy is 51.89% for the image classified by field spectra selected conventionally and 75.47% for the image classified by field spectra selected precisely based on spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for images classified using conventional and precise field spectra, respectively. The considerably higher classification accuracy, KHAT, and Z values show the promise of a new approach for field spectra selection based on spectral similarity measures.
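As an illustration of screening field spectra by spectral similarity, the sketch below uses one common measure, the spectral angle, to reject outlying spectra before averaging into a reference; the angle threshold and synthetic spectra are assumptions, not the study's actual measures or data.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two reflectance spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def precise_reference(spectra, max_angle=0.05):
    """Average only the spectra close (in angle) to the mean spectrum."""
    mean = spectra.mean(axis=0)
    keep = [s for s in spectra if spectral_angle(s, mean) <= max_angle]
    return np.mean(keep, axis=0)

# Synthetic field spectra: a base crop signature plus noise, with one outlier.
rng = np.random.default_rng(0)
base = np.linspace(0.2, 0.8, 200)                 # 200 hypothetical bands
field = base + 0.01 * rng.standard_normal((30, 200))
field[0] += 0.3                                   # contaminated measurement
reference = precise_reference(field)              # outlier gets rejected
```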
NASA Astrophysics Data System (ADS)
Debenjak, Andrej; Boškoski, Pavle; Musizza, Bojan; Petrovčič, Janko; Juričić, Đani
2014-05-01
This paper proposes an approach to the estimation of PEM fuel cell impedance that utilizes a pseudo-random binary sequence as a perturbation signal and the continuous wavelet transform with a Morlet mother wavelet. With this approach, the impedance characteristic in the frequency band from 0.1 Hz to 500 Hz is identified in 60 seconds, approximately five times faster than with the conventional single-sine approach. The proposed approach was experimentally evaluated on a single PEM fuel cell of a larger fuel cell stack. The quality of the results remains at the same level as that of the single-sine approach.
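A toy sketch of the measurement idea on a synthetic cell response: perturb with a PRBS, then take the ratio of Morlet-CWT coefficients of voltage and current as the impedance at each wavelet scale. The discrete-time "cell" filter and all constants are placeholders, and the sketch uses scipy.signal's cwt/morlet2 interface (deprecated and later removed in recent SciPy releases).

```python
import numpy as np
from scipy.signal import max_len_seq, cwt, morlet2, lfilter

fs = 2000.0                                   # sampling rate [Hz], assumed
prbs = 2.0 * max_len_seq(12)[0] - 1.0         # +/-1 pseudo-random binary sequence
i_cell = 0.1 * prbs                           # injected current perturbation [A]
# Hypothetical cell dynamics: a crude first-order discrete filter stands in
# for the electrochemical impedance being measured.
b, a = [0.15, -0.10], [1.0, -0.95]
v_cell = lfilter(b, a, i_cell)                # "measured" voltage response [V]

w0 = 6.0                                      # Morlet center frequency
freqs = np.logspace(-1, np.log10(500), 50)    # 0.1 Hz ... 500 Hz
widths = w0 * fs / (2 * np.pi * freqs)        # scale <-> frequency mapping
Wv = cwt(v_cell, lambda M, s: morlet2(M, s, w=w0), widths)
Wi = cwt(i_cell, lambda M, s: morlet2(M, s, w=w0), widths)
# Impedance spectrum: time-averaged wavelet cross-spectrum over auto-spectrum.
Z = (Wv * np.conj(Wi)).mean(axis=1) / (np.abs(Wi) ** 2).mean(axis=1)
```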
Hot Carrier-Based Near-Field Thermophotovoltaic Energy Conversion.
St-Gelais, Raphael; Bhatt, Gaurang Ravindra; Zhu, Linxiao; Fan, Shanhui; Lipson, Michal
2017-03-28
Near-field thermophotovoltaics (NFTPV) is a promising approach for direct conversion of heat to electrical power. This technology relies on the drastic enhancement of radiative heat transfer (compared to conventional blackbody radiation) that occurs when objects at different temperatures are brought to deep subwavelength distances (typically <100 nm) from each other. Achieving such radiative heat transfer between a hot object and a photovoltaic (PV) cell could allow direct conversion of heat to electricity with a greater efficiency than using current solid-state technologies (e.g., thermoelectric generators). One of the main challenges in the development of this technology, however, is its incompatibility with conventional silicon PV cells. Thermal radiation is weak at frequencies larger than the ∼1.1 eV bandgap of silicon, such that PV cells with lower excitation energies (typically 0.4-0.6 eV) are required for NFTPV. Using low bandgap III-V semiconductors to circumvent this limitation, as proposed in most theoretical works, is challenging and therefore has never been achieved experimentally. In this work, we show that hot carrier PV cells based on Schottky junctions between silicon and metallic films could provide an attractive solution for achieving high efficiency NFTPV electricity generation. Hot carrier science is currently an important field of research and several approaches are investigated for increasing the quantum efficiency (QE) of hot carrier generation beyond conventional Fowler model predictions. If the Fowler limit can indeed be overcome, we show that hot carrier-based NFTPV systems, after optimization of their thermal radiation spectrum, could allow electricity generation with up to 10-30% conversion efficiencies and 10-500 W/cm² generated power densities (at 900-1500 K temperatures). We also discuss how the unique properties of thermal radiation in the extreme near-field are especially well suited for investigating recently proposed approaches for high QE hot carrier junctions. We therefore expect our work to be of interest for the field of hot carrier science and, by relying solely on conventional thin film materials, to provide a path for the experimental demonstration of NFTPV energy conversion.
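For reference, the conventional Fowler estimate mentioned above can be sketched as follows: the quantum yield of internal photoemission over a Schottky barrier scales as (E - phi_B)^2 / E for photon energies E above the barrier phi_B, and is zero below it. The prefactor and barrier height here are assumed placeholders.

```python
def fowler_quantum_yield(e_photon_ev, phi_b_ev=0.5, c_f=0.02):
    """Fowler-model internal photoemission yield (dimensionless)."""
    if e_photon_ev <= phi_b_ev:
        return 0.0                                  # below-barrier photons: no emission
    return c_f * (e_photon_ev - phi_b_ev) ** 2 / e_photon_ev

for e in (0.4, 0.6, 0.8, 1.0):                      # thermal-photon energies (eV)
    print(f"{e:.1f} eV -> QE = {fowler_quantum_yield(e):.4f}")
```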
Intuitive color-based visualization of multimedia content as large graphs
NASA Astrophysics Data System (ADS)
Delest, Maylis; Don, Anthony; Benois-Pineau, Jenny
2004-06-01
Data visualization techniques are penetrating in various technological areas. In the field of multimedia such as information search and retrieval in multimedia archives, or digital media production and post-production, data visualization methodologies based on large graphs give an exciting alternative to conventional storyboard visualization. In this paper we develop a new approach to visualization of multimedia (video) documents based both on large graph clustering and preliminary video segmenting and indexing.
Model based control of dynamic atomic force microscope.
Lee, Chibum; Salapaka, Srinivasa M
2015-04-01
A model-based robust control approach is proposed that significantly improves imaging bandwidth for dynamic-mode atomic force microscopy. A model for cantilever oscillation amplitude and phase dynamics is derived and used for the control design. In particular, the control design is based on a linearized model and robust H∞ control theory. This design yields a significant improvement over conventional proportional-integral designs, as verified by experiments.
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
Improving the execution time and reducing the numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear conjugate gradient method is also investigated. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method, for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
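A minimal sketch of the scatterplot-and-fit step on synthetic data: random patches yield (mean, variance) pairs, and a line var = a*mean + b is fitted, as for Poissonian-Gaussian noise. Note this naive version uses a nearly flat synthetic image so that patch variance is dominated by noise; the paper's actual contribution, omitted here, is the mixture-of-Gaussians analysis that makes the fit valid for genuinely heterogeneous patches. The true (a, b) values are assumptions for the synthetic test.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.1, 0.9, 512), (512, 1))   # smooth synthetic image
a_true, b_true = 0.01, 1e-4                              # var = a*mean + b
noisy = clean + rng.normal(0.0, np.sqrt(a_true * clean + b_true))

pairs = []
for _ in range(2000):
    y, x = rng.integers(0, 512 - 32, size=2)
    patch = noisy[y:y + 32, x:x + 32]                    # random 32x32 patch
    pairs.append((patch.mean(), patch.var(ddof=1)))      # (mean, variance) pair
mean, var = np.array(pairs).T
a_hat, b_hat = np.polyfit(mean, var, 1)                  # fitted noise model
print(f"estimated a={a_hat:.4f} (true {a_true}), b={b_hat:.2e} (true {b_true})")
```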
A thermodynamic approach to the 'mitosis/apoptosis' ratio in cancer
NASA Astrophysics Data System (ADS)
Lucia, Umberto; Ponzetto, Antonio; Deisboeck, Thomas S.
2015-10-01
Cancer can be considered as an open, complex, (bio-thermo)dynamic and self-organizing system. Consequently, an entropy generation approach has been employed to analyze its mitosis/apoptosis ratio. Specifically, a novel thermodynamic anticancer strategy is suggested, based on the variation of entropy generation caused by the application of external fields, for example, electromagnetic fields, for therapeutic purposes. Ultimately, this innovative approach could support conventional therapies, particularly for inoperable tumors or advanced stages of cancer, when a larger tumor burden is diagnosed and therapeutic options are often limited.
Individualized Integrative Cancer Care in Anthroposophic Medicine
Kienle, Gunver S.; Mussler, Milena; Fuchs, Dieter; Kiene, Helmut
2016-01-01
Background. Cancer patients widely seek integrative oncology which embraces a wide variety of treatments and system approaches. Objective. To investigate the concepts, therapeutic goals, procedures, and working conditions of integrative oncology doctors in the field of anthroposophic medicine. Methods. This qualitative study was based on in-depth interviews with 35 highly experienced doctors working in hospitals and office-based practices in Germany and other countries. Structured qualitative content analysis was applied to examine the data. Results. The doctors integrated conventional and holistic cancer concepts. Their treatments aimed at both tumor and symptom control and at strengthening the patient on different levels: living with the disease, overcoming the disease, enabling emotional and cognitive development, and addressing spiritual or transcendental issues according to the patient’s wishes and initiatives. Therapeutic procedures were conventional anticancer and symptom-relieving treatments, herbal and mineral remedies, mistletoe therapy, art therapies, massages and other external applications, nutrition and lifestyle advice, psychological support, and multiple forms of empowerment. The approach emphasised good patient-doctor relationships and sufficient time for patient encounters and decision-making. Individualization appeared in several dimensions and was interwoven with standards and mindlines. The doctors often worked in teams and cooperated with other cancer care–related specialists. Conclusion. Integrative cancer care pursues an individualized and patient-centered approach, encompassing conventional and multimodal complementary interventions, and addressing, along with physical and functional needs, the emotional and spiritual needs of patients. This seems to be important for tumor and symptom control, and addresses major challenges and important goals of modern cancer care. PMID:27151589
Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.
Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D
2018-01-01
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional term frequency approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
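As an illustration of the conventional baseline the CNN was compared against, a term-frequency vector representation feeding a linear classifier might look like the sketch below; the report texts and ICD-O-3 labels are hypothetical placeholders, not the study's corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical miniature corpus of pathology report snippets with labels.
reports = [
    "infiltrating duct carcinoma of the breast, upper outer quadrant",
    "adenocarcinoma identified in the left upper lobe of the lung",
    "lobular carcinoma in situ of the breast",
    "squamous cell carcinoma, right lung, lower lobe",
]
codes = ["C50.4", "C34.1", "C50.9", "C34.3"]   # placeholder topography codes

# Term-frequency vectors + a linear classifier: the conventional baseline.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reports, codes)
print(model.predict(["ductal carcinoma in the breast"]))
```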
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique for reconstructing damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the differing nature of the lost image content. In this paper, we address this issue and propose using classification to integrate state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.
A hybrid algorithm for clustering of time series data based on affinity search technique.
Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A; Shaygan, Mohammad Amin; Jalali, Alireza
2014-01-01
Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets.
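A minimal sketch of the shape-based k-Medoids merge step (the second stage); the initial time-similarity subclustering is omitted, and a z-normalized Euclidean distance stands in for the paper's shape measure. Distance choice, cluster count, and data are simplified assumptions.

```python
import numpy as np

def shape_distance(a, b):
    """Shape dissimilarity: Euclidean distance between z-normalized series."""
    za = (a - a.mean()) / (a.std() + 1e-12)
    zb = (b - b.mean()) / (b.std() + 1e-12)
    return np.linalg.norm(za - zb)

def kmedoids(D, k, iters=50, seed=0):
    """Plain k-Medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)        # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]   # most central member
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1), medoids

# Toy stand-ins for subcluster prototypes produced by the first stage.
series = np.random.default_rng(1).standard_normal((40, 100))
D = np.array([[shape_distance(a, b) for b in series] for a in series])
labels, medoids = kmedoids(D, k=4)
```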
The value of innovation under value-based pricing.
Moreno, Santiago G; Ray, Joshua A
2016-01-01
The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the 'value of innovation' reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incidence patients and off-patent prices are identified as two key missing features preventing the conventional CEA approach from capturing 1) benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incidence population and off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of value of innovation to be appropriated by the manufacturer (>15%?) and payer (<85%?), in order to then identify the drug price that satisfies this condition. We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. The proposed approach derives a price that allows manufacturers to capture an agreed share of this value, thereby incentivizing innovation, while supporting health-care systems to pursue dynamic allocative efficiency. However, the long-term sustainability of health-care systems must be assessed before this proposal is adopted by policy makers.
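The 15% figure invites a back-of-envelope illustration. The sketch below, with entirely hypothetical inputs, shows the lifetime-value bookkeeping: value accrues for every year the drug remains in clinical practice, on and off patent, and a negotiated manufacturer share replaces the smaller share captured implicitly under conventional CEA pricing.

```python
# All inputs are invented for illustration; they are not the paper's data.
annual_value = 10_000_000.0          # health value created per year of use ($)
years_on_patent, years_off_patent = 10, 15

# Total lifetime value of innovation: benefit to present and future patients.
total_value = annual_value * (years_on_patent + years_off_patent)

manufacturer_share = 0.40            # negotiated share (> the ~15% status quo)
manufacturer_value = manufacturer_share * total_value
# Back out the branded revenue per patent year consistent with that share.
branded_revenue_per_year = manufacturer_value / years_on_patent
print(f"total lifetime value: ${total_value:,.0f}; "
      f"implied branded revenue/yr: ${branded_revenue_per_year:,.0f}")
```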
Flexible inorganic light emitting diodes based on semiconductor nanowires
Guan, Nan; Dai, Xing; Babichev, Andrey V.; Julien, François H.
2017-01-01
The fabrication technologies and the performance of flexible nanowire light emitting diodes (LEDs) are reviewed. We first introduce the existing approaches for flexible LED fabrication, which are dominated by organic technologies, and we briefly discuss the increasing research effort on flexible inorganic LEDs achieved by micro-structuring and transfer of conventional thin films. Then, flexible nanowire-based LEDs are presented and two main fabrication technologies are discussed: direct growth on a flexible substrate and nanowire membrane formation and transfer. The performance of blue, green, white and bi-color flexible LEDs fabricated following the transfer approach is discussed in more detail. PMID:29568439
Small molecule-induced cellular fate reprogramming: promising road leading to Rome.
Li, Xiang; Xu, Jun; Deng, Hongkui
2018-05-29
Cellular fate reprogramming holds great promise to generate functional cell types for replenishing new cells and restoring functional loss. Inspired by transcription factor-induced reprogramming, the field of cellular reprogramming has greatly advanced and developed into divergent streams of reprogramming approaches. Remarkably, increasing studies have shown the power and advantages of small molecule-based approaches for cellular fate reprogramming, which could overcome the limitations of conventional transgenic-based reprogramming. In this concise review, we discuss these findings and highlight the future potentiality with particular focus on this new trend of chemical reprogramming. Copyright © 2018 Elsevier Ltd. All rights reserved.
Conventional nuclear strategy and the American doctrine of counterforce
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, C.P.
Debate over nuclear weapons still lingers and one cause of this trend, as suggested by this thesis, is the rise of conventional nuclear strategy or, in other words, the attempt by the US government to apply through the counterforce doctrine a conventional weapons strategy in an age of nuclear weapons. That debate is analyzed, as well as the thinking underlying conventional nuclear strategy, to explain why conventionalization has become popular in US nuclear weapons policies. A feature of the American nuclear debate has been the unresolved tension between two approaches to nuclear strategy, namely the apocalyptic approach and the conventional approach. The confrontation between these camps has resulted over the years in a gradual but steady erosion of the strategic consensus to the point where, under the Reagan administration, the conventional camp appears to have emerged as a clear winner from the nuclear debate. The attractiveness of conventional nuclear strategy can be attributed to the influence and working of an American style of nuclear strategy, i.e., a specific approach to the phenomena of nuclear weapons. The author concludes that the conventional and official strategic view that nuclear problems can be solved by technological progress may, in fact, contribute to worsening rather than improving the thermonuclear condition of the world.
Kato, Takehito; Oinuma, Chihiro; Otsuka, Munechika; Hagiwara, Naoki
2017-01-10
The photoactive layer of a typical organic thin-film bulk-heterojunction (BHJ) solar cell commonly uses fullerene derivatives as the electron-accepting material. However, fullerene derivatives are air-sensitive; therefore, air-stable material is needed as an alternative. In the present study, we propose and describe the properties of Ti-alkoxide as an alternative electron-accepting material to fullerene derivatives to create highly air-stable BHJ solar cells. It is well-known that controlling the morphology in the photoactive layer, which is constructed with fullerene derivatives as the electron acceptor, is important for obtaining a high overall efficiency through the solvent method. The conventional solvent method is useful for high-solubility materials, such as fullerene derivatives. However, for Ti-alkoxides, the conventional solvent method is insufficient, because they only dissolve in specific solvents. Here, we demonstrate a new approach to morphology control that uses the molecular bulkiness of Ti-alkoxides without the conventional solvent method. That is, this method is one approach to obtain highly efficient, air-stable, organic-inorganic bulk-heterojunction solar cells.
The Relative Merits of PBL (Problem-Based Learning) in University Education
ERIC Educational Resources Information Center
Benson, Steve
2012-01-01
In Australia, academic workloads are increasing, and university funding is decreasing. Academics and university managers are engaging in risk-averse behavior and tending to focus on customer satisfaction and student retention, potentially at the expense of academic standards. Conventional approaches to pedagogy minimize adverse student feedback,…
Appendix B: Rapid development approaches for system engineering and design
NASA Technical Reports Server (NTRS)
1993-01-01
Conventional processes often produce systems which are obsolete before they are fielded. This paper explores some of the reasons for this, and provides a vision of how we can do better. This vision is based on our explorations in improved processes and system/software engineering tools.
The German Passive: Analysis and Teaching Technique.
ERIC Educational Resources Information Center
Griffen, T. D.
1981-01-01
Proposes an analysis of German passive based upon internal structure rather than translation conventions from Latin and Greek. Claims that this approach leads to a description of the perfect participle as an adjectival complement, which eliminates the classification of a passive voice for German and simplifies the learning task. (MES)
Novel Features for Brain-Computer Interfaces
Woon, W. L.; Cichocki, A.
2007-01-01
While conventional approaches of BCI feature extraction are based on the power spectrum, we have tried using nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991
Language Loss and the Crisis of Cognition: Between Socio- and Psycholinguistics.
ERIC Educational Resources Information Center
Kenny, K. Dallas
A non-structural model is proposed for quantifying and analyzing the dynamics of language attrition, particularly among immigrants in a second language environment, based on examination of disfluencies (hesitations, errors, and repairs). The first chapter discusses limitations of the conventional synchronic textual approach to analyzing language…
Limiting the Limits on Domains: A Commentary on Fowler and Heteronomy.
ERIC Educational Resources Information Center
Turiel, Elliot; Smetana, Judith G.
1998-01-01
Defends domain theory approach to children's moral development based on limitations of Piaget's original theory. Argues that Fowler's characterization of domain theory research omits important features and studies. Maintains that distinctions between morality and convention cannot be reduced to differences in perceptible harm and punishment; it is…
Cognitive issues in head-up displays
NASA Technical Reports Server (NTRS)
Fischer, E.; Haines, R. F.
1980-01-01
The ability of pilots to recognize and act upon unexpected information, presented in either the outside world or in a head-up display (HUD), was evaluated. Eight commercial airline pilots flew 18 approaches with a flightpath-type HUD and 13 approaches with conventional instruments in a fixed-base 727 simulator. The approaches were flown under conditions of low visibility, turbulence, and wind shear. Vertical and lateral flight performance was measured for five cognitive variables: an unexpected obstacle on runway; vertical and lateral boresight-type offset of the HUD; lateral ILS beam bend-type offset; and no anomaly. Mean response time to the runway obstacle was longer with HUD than without it (4.13 vs 1.75 sec.), and two of the pilots did not see the obstacle at all with the HUD. None of the offsets caused any deterioration in lateral flight performance, but all caused some change in vertical tracking; all offsets seemed to magnify the environmental effects. In all conditions, both vertical and lateral tracking was better with the HUD than with the conventional instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.
Outdoor sensor-based operation of autonomous robots has proved to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
Roethlisberger, Dieter; Mahler, Hanns-Christian; Altenburger, Ulrike; Pappenberger, Astrid
2017-02-01
Parenteral products should aim toward being isotonic and euhydric (physiological pH). Yet, due to other considerations, this goal is often not reasonable or doable. There are no clear allowable ranges related to pH and osmolality, and thus, the objective of this review was to provide a better understanding of acceptable formulation pH, buffer strength, and osmolality taking into account the administration route (i.e., intramuscular, intravenous, subcutaneous) and administration technique (i.e., bolus, push, infusion). This evaluation was based on 3 different approaches: conventional, experimental, and parametric. The conventional way of defining formulation limits was based on standard pH and osmolality ranges. Experimental determination of titratable acidity or in vitro hemolysis testing provided additional drug product information. Finally, the parametric approach was based on the calculation of theoretical values such as (1) the maximal volume of injection which cannot shift the blood's pH or its molarity out of the physiological range and (2) a dilution ratio at the injection site, verifying that threshold values are not exceeded. The combination of all 3 approaches can support the definition of acceptable pH, buffer strength, and osmolality of formulations and thus may reduce the risk of failure during preclinical and clinical development. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
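A minimal sketch of the parametric idea, reduced to ideal-mixing arithmetic for osmolality (the pH case additionally needs buffer-capacity data): find the largest bolus volume whose mixture with blood stays under a threshold. All numbers are assumed placeholders, not clinical recommendations.

```python
def mixed_osmolality(v_inj_ml, osm_inj, v_blood_ml=5000.0, osm_blood=290.0):
    """Osmolality (mOsm/kg) after ideal mixing of injectate with blood."""
    return (v_inj_ml * osm_inj + v_blood_ml * osm_blood) / (v_inj_ml + v_blood_ml)

def max_bolus_volume(osm_inj, osm_limit=310.0, step_ml=1.0, cap_ml=1000.0):
    """Largest bolus volume (mL) keeping the mixture below the limit."""
    v = step_ml
    while v <= cap_ml and mixed_osmolality(v, osm_inj) <= osm_limit:
        v += step_ml
    return v - step_ml

print(max_bolus_volume(osm_inj=900.0))   # hypothetical hypertonic formulation
```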
Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme
NASA Astrophysics Data System (ADS)
Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi
We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques including the initial threshold and binary morphological operations from the valley-emphasized image; 3) further segmentation of the bone regions by using the iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface using the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. Average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (in-plane resolution of the CT data).
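A minimal sketch of steps 1) and 2) above, assuming a bottom-hat-style valley construction and placeholder thresholds; the iterative classification and surface-reconstruction steps 3)-7) are omitted.

```python
import numpy as np
from scipy import ndimage as ndi

def valley_emphasized(volume, size=3):
    # One common bottom-hat construction (an assumption here): valleys are
    # grayscale closing minus the original; subtracting them deepens valleys.
    valleys = ndi.grey_closing(volume, size=size) - volume
    return volume - valleys

def initial_bone_mask(volume, threshold=200.0):
    v = valley_emphasized(volume.astype(float))
    mask = v > threshold                              # initial threshold
    mask = ndi.binary_opening(mask, iterations=2)     # remove small specks
    mask = ndi.binary_closing(mask, iterations=2)     # close small gaps
    return mask

ct = np.random.uniform(-1000, 1500, size=(64, 64, 64))   # placeholder "CT"
bone = initial_bone_mask(ct)
```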
Preeti, Bajaj; Ashish, Ahuja; Shriram, Gosavi
2013-12-01
As the "Science of Medicine" is getting advanced day-by-day, need for better pedagogies & learning techniques are imperative. Problem Based Learning (PBL) is an effective way of delivering medical education in a coherent, integrated & focused manner. It has several advantages over conventional and age-old teaching methods of routine. It is based on principles of adult learning theory, including student's motivation, encouragement to set goals, think critically about decision making in day-to-day operations. Above all these, it stimulates challenge acceptance and learning curiosity among students and creates pragmatic educational program. To measure the effectiveness of the "Problem Based Learning" as compared to conventional theory/didactic lectures based learning. The study was conducted on 72 medical students from Dayanand Medical College & Hospital, Ludhiana. Two modules of problem based sessions designed and delivered. Pre & Post-test score's scientific statistical analysis was done. Student feed-back received based on questionnaire in the five-point Likert scale format. Significant improvement in overall performance observed. Feedback revealed majority agreement that "Problem-based learning" helped them create interest (88.8 %), better understanding (86%) & promotes self-directed subject learning (91.6 %). Substantial improvement in the post-test scores clearly reveals acceptance of PBL over conventional learning. PBL ensures better practical learning, ability to create interest, subject understanding. It is a modern-day educational strategy, an effective tool to objectively improve the knowledge acquisition in Medical Teaching.
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm×250(W) mm×500(D) mm.
NASA Astrophysics Data System (ADS)
Wang, Chen; Zhang, Qichang; Wang, Wei
2017-07-01
This work presents models and experiments of an impact-driven, frequency up-converted wideband piezoelectric vibration energy harvester with a quintuple-well potential induced by the combined effect of magnetic nonlinearity and mechanical piecewise-linearity. Analysis shows that the interwell motions during the coupled vibration period make it possible to increase electrical power output in comparison with conventional frequency up-conversion technology. In addition, the quintuple-well potential with shallower potential wells can extend the harvester's operating bandwidth to lower frequencies. Experiments demonstrate that our proposed approach can boost the measured power of the energy harvester by as much as 35 times, while its lower cut-off frequency is half that of a conventional counterpart. These results show that our proposed approach holds promise for powering portable wireless smart devices from low-intensity, low-frequency vibration sources.
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
McConnachie, Matthew M; Romero, Claudia; Ferraro, Paul J; van Wilgen, Brian W
2016-04-01
The fundamental challenge of evaluating the impact of conservation interventions is that researchers must estimate the difference between the outcome after an intervention occurred and what the outcome would have been without it (counterfactual). Because the counterfactual is unobservable, researchers must make an untestable assumption that some units (e.g., organisms or sites) that were not exposed to the intervention can be used as a surrogate for the counterfactual (control). The conventional approach is to make a point estimate (i.e., single number along with a confidence interval) of impact, using, for example, regression. Point estimates provide powerful conclusions, but in nonexperimental contexts they depend on strong assumptions about the counterfactual that often lack transparency and credibility. An alternative approach, called partial identification (PI), is to first estimate what the counterfactual bounds would be if the weakest possible assumptions were made. Then, one narrows the bounds by using stronger but credible assumptions based on an understanding of why units were selected for the intervention and how they might respond to it. We applied this approach and compared it with conventional approaches by estimating the impact of a conservation program that removed invasive trees in part of the Cape Floristic Region. Even when we used our largest PI impact estimate, the program's control costs were 1.4 times higher than previously estimated. PI holds promise for applications in conservation science because it encourages researchers to better understand and account for treatment selection biases; can offer insights into the plausibility of conventional point-estimate approaches; could reduce the problem of advocacy in science; might be easier for stakeholders to agree on a bounded estimate than a point estimate where impacts are contentious; and requires only basic arithmetic skills. © 2015 Society for Conservation Biology.
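The starting point of partial identification, before any stronger assumptions are layered on, is the pair of no-assumption bounds obtained by imputing the unobserved counterfactual with the extremes of the outcome's support. A minimal sketch for a bounded outcome, with hypothetical variable names:

    import numpy as np

    def no_assumption_bounds(y, treated, y_min, y_max):
        # Manski-style worst-case bounds on the average treatment effect:
        # the unobserved counterfactual mean of each group is replaced by
        # the extremes of the outcome support. Credible assumptions about
        # treatment selection then narrow these bounds.
        y, t = np.asarray(y, float), np.asarray(treated, bool)
        p = t.mean()                        # share of treated units
        ey1, ey0 = y[t].mean(), y[~t].mean()
        lo = (ey1 * p + y_min * (1 - p)) - (y_max * p + ey0 * (1 - p))
        hi = (ey1 * p + y_max * (1 - p)) - (y_min * p + ey0 * (1 - p))
        return lo, hi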
Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey
2012-01-01
Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted to search for the potential in vitro metabolites. The detected metabolites were confirmed by the product ion scan. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on detected metabolites was obtained using 'metabolite standards' generated from incubation samples that contain a high concentration of metabolite, in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1, while CES2 mediates the conversion of DABE to M2. M1 (or M2) was further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178
NASA Technical Reports Server (NTRS)
Stein, M.; Housner, J. D.
1978-01-01
A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
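To make the core idea concrete, the sketch below contrasts the conventional three-point stencil for y'' with a trigonometric version that is exact for sin(kx); the wavenumber k plays the role of one of the numerical parameters the analysis introduces. This is a generic illustration of trigonometric finite differences, not the paper's full panel-buckling formulation.

    import numpy as np

    def d2_conventional(y, h):
        # Polynomial-based central difference for the second derivative.
        return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2

    def d2_trigonometric(y, h, k):
        # Same stencil, rescaled so y = sin(k x) is differentiated exactly:
        # sin(k(x+h)) - 2 sin(kx) + sin(k(x-h)) = -2 (1 - cos kh) sin(kx),
        # and dividing by 2 (1 - cos kh) / k^2 returns -k^2 sin(kx).
        return (y[2:] - 2.0 * y[1:-1] + y[:-2]) * k**2 / (2.0 * (1.0 - np.cos(k * h)))

    # On a coarse grid the trigonometric stencil reproduces y'' = -k^2 y to
    # machine precision when k matches the mode, which is the source of the
    # faster convergence for near-trigonometric buckle shapes.
    k, h = 3.0, 0.2
    x = np.arange(0.0, 2.0 * np.pi, h)
    y = np.sin(k * x)
    max_err = np.abs(d2_trigonometric(y, h, k) + k**2 * y[1:-1]).max()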
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heyman, Heino M.; Zhang, Xing; Tang, Keqi
2016-02-16
Metabolomics is the quantitative analysis of all metabolites in a given sample. Due to the chemical complexity of the metabolome, optimal separations are required for comprehensive identification and quantification of sample constituents. This chapter provides an overview of both conventional and advanced separations methods in practice for reducing the complexity of metabolite extracts delivered to the mass spectrometer detector, and covers gas chromatography (GC), liquid chromatography (LC), capillary electrophoresis (CE), supercritical fluid chromatography (SFC) and ion mobility spectrometry (IMS) separation techniques coupled with mass spectrometry (MS) as both uni-dimensional and as multi-dimensional approaches.
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges in developing systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleeding, blood vessels, and light reflections from mucosa. Then, the glottal area was detected by a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficients of the hybrid method were 82%, 93%, and 82%, respectively, which are superior to state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for a computer-aided diagnosis system that can be used in clinical routine.
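A minimal sketch of the hybrid idea, using SLIC superpixels from scikit-image followed by superpixel-level region growing; the segment count, compactness, and intensity tolerance are illustrative assumptions, and the seed is assumed to be a pixel inside the glottal area.

    import numpy as np
    from scipy import ndimage
    from skimage.color import rgb2gray
    from skimage.segmentation import slic

    def glottal_mask(frame_rgb, seed_rc, n_segments=400, tol=0.08):
        # Superpixels average away local variance from vessels/reflections.
        labels = slic(frame_rgb, n_segments=n_segments, compactness=10)
        gray = rgb2gray(frame_rgb)
        means = {l: gray[labels == l].mean() for l in np.unique(labels)}

        seed = labels[seed_rc]
        grown, frontier = {seed}, [seed]
        while frontier:
            cur = frontier.pop()
            region = labels == cur
            # Superpixels touching the current region.
            touching = np.unique(labels[ndimage.binary_dilation(region) & ~region])
            for nb in touching:
                if nb not in grown and abs(means[nb] - means[seed]) < tol:
                    grown.add(nb)
                    frontier.append(nb)
        return np.isin(labels, list(grown))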
Bennet, Jaison; Ganaprakasam, Chilambuchelvan Arul; Arputharaj, Kannan
2014-01-01
In earlier days, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which has stimulated progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique combining the discrete wavelet transform (DWT) and a moving window technique (MWT) is used. The performance of the proposed method is compared with the conventional classifiers: support vector machine, nearest neighbor, and naive Bayes. Experiments were conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. This work serves as an automated system for the classification of cancer that can be applied by doctors in real cases, and it further reduces the misclassification of cancers, which is unacceptable in cancer detection.
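A sketch of how such a pipeline fits together with PyWavelets and scikit-learn; the wavelet, decomposition level, window size, and the ANOVA-F window score are illustrative assumptions standing in for the paper's exact DWT/MWT criteria.

    import numpy as np
    import pywt
    from sklearn.ensemble import VotingClassifier
    from sklearn.feature_selection import f_classif
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    def dwt_features(X, wavelet="db4", level=3):
        # Approximation coefficients compress each expression profile.
        return np.array([pywt.wavedec(x, wavelet, level=level)[0] for x in X])

    def moving_window_select(X, y, win=8, n_windows=4):
        # Score each coefficient, then keep the best contiguous windows.
        scores, _ = f_classif(X, y)
        win_scores = [scores[i:i + win].mean() for i in range(X.shape[1] - win + 1)]
        starts = np.argsort(win_scores)[-n_windows:]
        cols = sorted({j for s in starts for j in range(s, s + win)})
        return X[:, cols], cols

    ensemble = VotingClassifier([
        ("knn", KNeighborsClassifier(5)),
        ("nb", GaussianNB()),
        ("svm", SVC(kernel="linear")),
    ], voting="hard")

    # Usage: Xw = dwt_features(X_train); Xs, cols = moving_window_select(Xw, y_train)
    #        ensemble.fit(Xs, y_train)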
Quasiparticle properties of DNA bases from GW calculations in a Wannier basis
NASA Astrophysics Data System (ADS)
Qian, Xiaofeng; Marzari, Nicola; Umari, Paolo
2009-03-01
The quasiparticle GW-Wannier (GWW) approach [1] has recently been developed to overcome the size limitations of conventional planewave GW calculations. By taking advantage of the localization properties of the maximally-localized Wannier functions and choosing a small set of polarization basis functions, we reduce the number of Bloch wavefunction products required for the evaluation of dynamical polarizabilities, and in turn greatly reduce memory requirements and computational cost. We apply GWW to study quasiparticle properties of different DNA bases and base pairs, and solvation effects on the energy gap, demonstrating in the process the key advantages of this approach. [1] P. Umari, G. Stenuit, and S. Baroni, cond-mat/0811.1453
NASA Astrophysics Data System (ADS)
Murtazina, M. Sh; Avdeenko, T. V.
2018-05-01
The state of the art and progress in the application of semantic technologies to scientific and research activity are analyzed. Even an elementary empirical comparison shows that semantic search engines are superior in all respects to conventional search technologies. However, semantic information technologies are still insufficiently used in the field of scientific and research activity in Russia. In the present paper, an approach to the construction of an ontological model of a knowledge base is proposed. The ontological model is based on an upper-level ontology and the RDF mechanism for linking several domain ontologies. The ontological model is implemented in the Protégé environment.
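A minimal sketch of the linking mechanism using rdflib: domain classes are subsumed under upper-level ontology classes, and instance triples connect entities across the domain ontologies. All namespaces and terms below are hypothetical placeholders, not the paper's ontology.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    UPPER = Namespace("http://example.org/upper#")      # upper-level ontology
    RES = Namespace("http://example.org/research#")     # domain ontology

    g = Graph()
    g.bind("upper", UPPER)
    g.bind("res", RES)

    # Domain classes tied to the upper-level ontology.
    g.add((RES.Researcher, RDFS.subClassOf, UPPER.Agent))
    g.add((RES.Publication, RDFS.subClassOf, UPPER.InformationObject))

    # Instance-level links a semantic search engine can traverse.
    g.add((RES.ivanova, RDF.type, RES.Researcher))
    g.add((RES.paper42, RDF.type, RES.Publication))
    g.add((RES.ivanova, RES.authorOf, RES.paper42))
    g.add((RES.paper42, RDFS.label, Literal("Semantic search in research activity")))

    # A structured query instead of keyword matching.
    for row in g.query("SELECT ?p WHERE { res:ivanova res:authorOf ?p }",
                       initNs={"res": RES}):
        print(row.p)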
Han, Sehee; Lee, Jonathan; Park, Kyung-Gook
2017-07-01
The purpose of this study was to examine the association between extracurricular activities (EA) participation and youth delinquency while tackling an endogeneity problem of EA participation. Using survey data of 12th graders in South Korea (n = 1943), this study employed an instrumental variables approach to address the self-selection problem of EA participation as the data for this study was based on an observational study design. We found a positive association between EA participation and youth delinquency based on conventional regression analysis. By contrast, we found a negative association between EA participation and youth delinquency based on an instrumental variables approach. These results indicate that caution should be exercised when we interpret the effect of EA participation on youth delinquency based on observational study designs. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
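For readers unfamiliar with the method, here is a compact sketch of the contrast between the naive regression and a two-stage least squares estimate with a single instrument; names are hypothetical, and the instrument must satisfy the usual relevance and exclusion restrictions. The manual second stage below recovers the IV point estimate, but its standard errors are not valid, so a dedicated IV routine should be used in practice.

    import numpy as np
    import statsmodels.api as sm

    def naive_and_iv(y, ea, z, controls):
        # Conventional (naive) regression of delinquency on participation.
        X = sm.add_constant(np.column_stack([ea, controls]))
        naive = sm.OLS(y, X).fit()

        # Stage 1: participation predicted from the instrument + controls.
        Z = sm.add_constant(np.column_stack([z, controls]))
        ea_hat = sm.OLS(ea, Z).fit().fittedvalues
        # Stage 2: delinquency on predicted (exogenous) participation.
        X2 = sm.add_constant(np.column_stack([ea_hat, controls]))
        iv = sm.OLS(y, X2).fit()

        return naive.params[1], iv.params[1]   # the two EA coefficients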
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Haldankar, Hrishikesh; Hoffmeister, Jeffrey W.; Periaswamy, Senthil
2016-03-01
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Accordingly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm based on computation and classification of hand-engineered features. The detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region of interest (ROI) basis at the rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
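A minimal PyTorch sketch of a patch-level classifier of the kind described: small grayscale patches in, a lesion logit out. The architecture, patch size, and training snippet are illustrative assumptions, not the authors' network.

    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)   # logit of "suspicious ROI"

        def forward(self, x):              # x: (N, 1, 64, 64) patches
            return self.head(self.features(x).flatten(1))

    model = PatchCNN()
    patches = torch.randn(8, 1, 64, 64)    # stand-in for sampled DBT patches
    labels = torch.randint(0, 2, (8,)).float()
    loss = nn.BCEWithLogitsLoss()(model(patches).squeeze(1), labels)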
CNN: a speaker recognition system using a cascaded neural network.
Zaki, M; Ghalwash, A; Elkouny, A A
1996-05-01
The main emphasis of this paper is an approach that combines supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques into one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests that were conducted, and in order to enhance the performance of this model when dealing with noisy patterns, we preceded it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes both the design and implementation of the conventional and neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition. The conclusion indicates that the system performance with the neural network is better than that of the conventional one, achieving a smooth degradation on noisy patterns and higher performance on noise-free patterns.
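The conventional baseline described, a distance that down-weights directions of large intraspeaker variance, can be sketched as a diagonal Mahalanobis-style classifier; the per-speaker diagonal variance model below is an assumption consistent with the description, not the paper's exact metric.

    import numpy as np

    def enroll(frames):
        # frames: (n_frames, n_dims) feature vectors for one speaker.
        mu = frames.mean(axis=0)
        var = frames.var(axis=0) + 1e-8    # directions of large intraspeaker
        return mu, var                     # variance are weighted down below

    def weighted_distance(x, mu, var):
        return np.sqrt((((x - mu) ** 2) / var).sum())

    def recognize(x, models):
        # models: {speaker_name: (mu, var)}; the nearest model wins.
        return min(models, key=lambda s: weighted_distance(x, *models[s]))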
Barros, Raquel R M; Novaes, Arthur B; Grisi, Márcio F M; Souza, Sérgio L S; Taba, Mário; Palioto, Daniela B
2005-01-01
Acellular dermal matrix graft (ADMG) has been used as an advantageous substitute for the autogenous subepithelial connective tissue graft (SCTG). However, the surgical techniques used were primarily developed for the SCTG, and they may not be adequate for ADMG, since it has a different healing process than SCTG owing to its different vascular and cellular structures. This study compared the 1-year clinical outcome of a new surgical approach with the outcome of a conventional procedure for the treatment of localized gingival recessions, both performed using the ADMG. The clinical parameters (probing depth, relative clinical attachment level, gingival recession [GR], and width of keratinized tissue) of 32 bilateral Miller Class I or II gingival recessions were assessed at baseline and 12 months postoperatively. Significant clinical changes were achieved with both surgical techniques after this period, including GR reduction from 3.4 mm presurgery to 1.2 mm at 1 year for the conventional technique and from 3.9 mm presurgery to 0.7 mm at 1 year for the new technique. The percentage of root coverage was 62.3% and 82.5% for the conventional and new techniques, respectively. Comparison between the groups after this period by the Mann-Whitney rank sum test revealed a statistically significantly greater reduction of GR favoring the new procedure (p = .000). Based on the results of this study, it can be concluded that the new surgical technique using an ADMG is more suitable for root coverage than the conventional technique. The results revealed a statistically significant improvement in clinical performance with the ADMG approach.
The dependence of Islamic and conventional stocks: A copula approach
NASA Astrophysics Data System (ADS)
Razak, Ruzanna Ab; Ismail, Noriszura
2015-09-01
Recent studies have found that Islamic stocks are dependent on conventional stocks and appear to be more risky. In Asia, particularly in Islamic countries, research on dependence involving Islamic and non-Islamic stock markets is limited. The objective of this study is to investigate the dependence between the Financial Times Stock Exchange (FTSE) Hijrah Shariah index and conventional stocks (the EMAS and KLCI indices). Using the copula approach with a time series model for each marginal distribution function, the copula parameters were estimated. An elliptical copula was selected to represent the dependence structure of each pairing of the Islamic stock and a conventional stock. Specifically, the Islamic versus conventional pairs (Shariah-EMAS and Shariah-KLCI) had lower dependence than the conventional versus conventional pair (EMAS-KLCI). These findings suggest that the occurrence of shocks in a conventional stock will not have a strong impact on the Islamic stock.
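A compact sketch of the estimation idea for a Gaussian copula (a member of the elliptical family): transform each return series to uniforms via the probability integral transform, map to normal scores, and correlate. In the study the marginals come from fitted time series models; the empirical CDF below is a simplification standing in for those marginals.

    import numpy as np
    from scipy import stats

    def gaussian_copula_rho(x, y):
        # Probability integral transform via empirical CDFs.
        u = stats.rankdata(x) / (len(x) + 1.0)
        v = stats.rankdata(y) / (len(y) + 1.0)
        # Normal scores; their correlation is the Gaussian copula parameter.
        z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
        return np.corrcoef(z1, z2)[0, 1]

    # Usage (hypothetical series): gaussian_copula_rho(shariah_rets, klci_rets)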
Nutraceuticals in hypercholesterolaemia: an overview.
Santini, Antonello; Novellino, Ettore
2017-06-01
Growing attention is now being given to the possible preventive/alternative ways to avoid illness onset. Changes in lifestyle and food habits are taking over from the conventional pharmaceutical-based approach, especially for chronic pathologies. Nutraceuticals have been proposed as key tools for the prevention and cure of some pathological conditions. This is leading research to develop new formulations based on these pharma-foods addressed in a specific way to prevent and cure health issues, which, in turn, will have an effect on therapy-related costs sustained by any National Health Organization. According to existing regulations, nutraceuticals cannot be categorized as either food or drugs but, by definition, often inhabit a grey area in between the two, being assimilated into food supplements, notwithstanding the beneficial properties that they can provide for some pathological conditions. A nutraceuticals-based approach for health management, in particular for some pathological conditions, has resulted in a worldwide growing 'nutraceutical' revolution. An outstanding example is the approach to the 'metabolic syndrome', which includes overweight, obesity and cardiovascular-related diseases, causing a sort of cascade of chronic health conditions, which is becoming a norm in modern life. Hypercholesterolaemia is one of these. It represents an example of a pathology that can be linked to both a poor lifestyle and dietary habits. The nutraceutical approach to hypercholesterolaemia is described in the present review as a possible alternative to the conventional drug-based therapy. This article is part of a themed section on Principles of Pharmacological Research of Nutraceuticals. To view the other articles in this section visit http://onlinelibrary.wiley.com/doi/10.1111/bph.v174.11/issuetoc. © 2016 The Authors. British Journal of Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs, such as the direct dump of turbine exhaust in the nozzle section and at the nozzle lip of the Space Transportation Systems Engine (STME), the potential burning of the turbine exhaust in the base region has caused tremendous concern. Two conventional approaches have been considered for predicting the base environment: (1) the empirical approach, and (2) the experimental approach. The empirical approach uses a combination of data correlations and semi-theoretical calculations. It works best for linear problems with simple physics and geometry. However, it is highly suspect when complex geometry and flow physics are involved, especially when the subject is outside the historical database. The experimental approach is often used to establish a database for engineering analysis. However, it is qualitative at best for base flow problems. Other criticisms include the inability to simulate the forebody boundary layer correctly, the interference effect from tunnel walls, and the inability to scale all pertinent parameters. Furthermore, there is a contention that the information extrapolated from subscale tests with combustion is not conservative. One potential alternative to the conventional methods is computational fluid dynamics (CFD), which has none of the above restrictions and is becoming more feasible due to maturing algorithms and advancing computer technology. It provides more details of the flowfield and is only limited by computer resources. However, it has its share of criticisms as a predictive tool for the base environment. One major concern is that CFD has not been extensively tested for base flow problems. It is therefore imperative that CFD be assessed and benchmarked satisfactorily for base flows. In this study, the turbulent base flowfield of an experimental investigation of a four-engine clustered nozzle is numerically benchmarked using a pressure-based CFD method. Since cold air was the medium, accurate prediction of the base pressure distributions at high altitudes is the primary goal. Other factors which may influence the numerical results, such as the effects of grid density, turbulence model, differencing scheme, and boundary conditions, are also addressed. Preliminary results of the computed base pressure agreed reasonably well with the measurement. Basic base flow features such as the reverse jet, wall jet, recompression shock, and static pressure field in the plane of impingement have been captured.
Facial Emotion Recognition System – A Machine Learning Approach
NASA Astrophysics Data System (ADS)
Ramalingam, V. V.; Pandian, A.; Jayakumar, Lavanya
2018-04-01
Facial expression is a medium of human communication and is exercised in many real-world systems. A crucial stage in facial expression recognition is the accurate selection of emotional features. This paper proposes a facial expression recognition scheme using evolutionary Particle Swarm Optimization (PSO)-based feature selection. The system first employs a modified Local Binary Pattern descriptor, which handles cross-neighborhood pixel contrast, to obtain a discriminative initial facial representation. A PSO variant embedded with the concept of a micro Genetic Algorithm (mGA), called mGA-embedded PSO, is then applied to perform feature selection. The technique incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity update that adapts as the search proceeds, and a sub-dimension-based in-depth local facial feature search. The cooperation of local exploitation and global exploration helps mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used to recognize different facial expressions. Evaluations on within- and cross-domain images from the extended Cohn-Kanade and MMI benchmark databases show that the approach outperforms state-of-the-art PSO variants, conventional PSO, the classical GA, and other related facial expression recognition systems by a significant margin. The approach is also extended to a motion-based facial expression recognition application that combines patch-based Gabor features with temporal information across multiple frames.
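A hedged sketch of the core mechanism, plain binary PSO feature selection without the paper's mGA embedding or modified velocity update: each particle is a boolean feature mask scored by cross-validated classifier accuracy.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def bpso_select(X, y, n_particles=20, iters=30, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        pos = rng.random((n_particles, d)) < 0.5          # feature masks
        vel = rng.normal(0.0, 0.1, (n_particles, d))

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

        pbest = pos.copy()
        pbest_f = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmax()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, d))
            vel = (0.7 * vel
                   + 1.5 * r1 * (pbest.astype(float) - pos.astype(float))
                   + 1.5 * r2 * (gbest.astype(float) - pos.astype(float)))
            # Sigmoid transfer function turns velocities into bit probabilities.
            pos = rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))
            f = np.array([fitness(p) for p in pos])
            better = f > pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmax()].copy()
        return gbest    # boolean mask of selected features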
A Direct Approach to In-Plane Stress Separation using Photoelastic Ptychography
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Kirkwood, Henry; Huwald, Eric; Nugent, Keith; Abbey, Brian
2016-08-01
The elastic properties of materials, either under external load or in a relaxed state, influence their mechanical behaviour. Conventional optical approaches based on techniques such as photoelasticity or thermoelasticity can be used for full-field analysis of the stress distribution within a specimen. The circular polariscope in combination with holographic photoelasticity allows the sum and difference of principal stress components to be determined by exploiting the temporary birefringent properties of materials under load. Phase stepping and interferometric techniques have been proposed as a method for separating the in-plane stress components in two-dimensional photoelasticity experiments. In this paper we describe and demonstrate an alternative approach based on photoelastic ptychography which is able to obtain quantitative stress information from far fewer measurements than is required for interferometric based approaches. The complex light intensity equations based on Jones calculus for this setup are derived. We then apply this approach to the problem of a disc under diametrical compression. The experimental results are validated against the analytical solution derived by Hertz for the theoretical displacement fields for an elastic disc subject to point loading.
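The intensity model behind these setups follows from Jones calculus: in a dark-field circular polariscope the transmitted intensity is proportional to sin²(δ/2), where the retardation δ encodes the principal stress difference through the stress-optic law δ = 2πhC(σ₁ − σ₂)/λ. Below is a numerical sketch of that standard textbook result, not of the ptychographic reconstruction itself.

    import numpy as np

    def rot(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s], [s, c]])

    def quarter_wave(theta):
        # Quarter-wave plate with fast axis at angle theta.
        return rot(theta) @ np.diag([1.0, 1.0j]) @ rot(theta).T

    def sample(delta, phi):
        # Birefringent specimen: retardation delta, principal axis at phi.
        return rot(phi) @ np.diag([np.exp(-1j * delta / 2),
                                   np.exp(1j * delta / 2)]) @ rot(phi).T

    def dark_field_intensity(delta, phi):
        E = np.array([1.0, 0.0])              # polarizer at 0 degrees
        E = quarter_wave(np.pi / 4) @ E       # circularly polarized input
        E = sample(delta, phi) @ E
        E = quarter_wave(-np.pi / 4) @ E
        E = np.array([0.0, 1.0]) * E          # analyzer at 90 degrees
        return float(np.vdot(E, E).real)      # equals sin^2(delta / 2)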
NASA Astrophysics Data System (ADS)
Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt
2018-06-01
The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4%, with a maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementing the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
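For reference, the conventional benchmark the two semi-analytical methods are compared against is the constant-property leg efficiency computed from ZT at the mean temperature; a quick sketch:

    import numpy as np

    def efficiency_from_zt(zt_mean, t_hot, t_cold):
        # Standard constant-property result; it ignores the temperature
        # dependence and Thomson effect the paper's methods include.
        carnot = (t_hot - t_cold) / t_hot
        m = np.sqrt(1.0 + zt_mean)
        return carnot * (m - 1.0) / (m + t_cold / t_hot)

    # Example: ZT = 1 between 800 K and 300 K gives about 14% conversion,
    # typically an overestimate relative to the temperature-dependent models.
    eta = efficiency_from_zt(1.0, 800.0, 300.0)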
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically-based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broad-band strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically-based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all considerable-magnitude earthquakes. To generate the high frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGF). For the frequencies below 0.5 Hz, the simulations are obtained using Synthetic Green's Functions (SGF), synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazard levels for all studied sites in the Marmara region.
Cell force mapping using a double-sided micropillar array based on the moiré fringe method
NASA Astrophysics Data System (ADS)
Zhang, F.; Anderson, S.; Zheng, X.; Roberts, E.; Qiu, Y.; Liao, R.; Zhang, X.
2014-07-01
The mapping of traction forces is crucial to understanding the means by which cells regulate their behavior and physiological function to adapt to and communicate with their local microenvironment. To this end, polymeric micropillar arrays have been used for measuring cell traction force. However, the small scale of the micropillar deflections induced by cell traction forces results in highly inefficient force analyses using conventional optical approaches; in many cases, cell forces may be below the limits of detection achieved using conventional microscopy. To address these limitations, the moiré phenomenon has been leveraged as a visualization tool for cell force mapping due to its inherent magnification effect and capacity for whole-field force measurements. This Letter reports an optomechanical cell force sensor, namely, a double-sided micropillar array (DMPA) made of poly(dimethylsiloxane), on which one side is employed to support cultured living cells while the opposing side serves as a reference pattern for generating moiré patterns. The distance between the two sides, which is a crucial parameter influencing moiré pattern contrast, is predetermined during fabrication using theoretical calculations based on the Talbot effect that aim to optimize contrast. Herein, double-sided micropillar arrays were validated by mapping mouse embryo fibroblast contraction forces and the resulting force maps compared to conventional microscopy image analyses as the reference standard. The DMPA-based approach precludes the requirement for aligning two independent periodic substrates, improves moiré contrast, and enables efficient moiré pattern generation. Furthermore, the double-sided structure readily allows for the integration of moiré-based cell force mapping into microfabricated cell culture environments or lab-on-a-chip devices.
The current evidence for a biomarker-based approach in cancer of unknown primary.
El Rassy, Elie; Pavlidis, Nicholas
2018-05-02
Cancer of unknown primary (CUP) is the seventh to eighth most frequently diagnosed cancer, yet its prognosis remains poor with conventional chemotherapy. The spectrum of therapeutic management includes both locoregional and systemic therapy and should intend to offer optimal benefit to favorable CUP patients and palliative care to unfavorable cases. Recent molecular advances have revolutionized the armamentarium of cancer treatments through a biomarker-based approach. Unfortunately, solid data in CUP are lacking in the absence of a CUP-specific driver molecular signature. This prompted us to screen the medical literature for clinical data that evaluate the efficacy and safety of the biomarker-based approach in CUP patients. In this review, we summarize the available evidence for the applicability of targeted therapies in CUP. Copyright © 2018 Elsevier Ltd. All rights reserved.
Thomas, Duncan C
2017-07-01
Screening behavior depends on previous screening history and family members' behaviors, which can act as both confounders and intermediate variables on a causal pathway from screening to disease risk. Conventional analyses that adjust for these variables can lead to incorrect inferences about the causal effect of screening if high-risk individuals are more likely to be screened. Analyzing the data in a manner that treats screening as randomized conditional on covariates allows causal parameters to be estimated; inverse probability weighting based on propensity of exposure scores is one such method considered here. I simulated family data under plausible models for the underlying disease process and for screening behavior to assess the performance of alternative methods of analysis and whether a targeted screening approach based on individuals' risk factors would lead to a greater reduction in cancer incidence in the population than a uniform screening policy. Simulation results indicate that there can be a substantial underestimation of the effect of screening on subsequent cancer risk when using conventional analysis approaches, which is avoided by using inverse probability weighting. A large case-control study of colonoscopy and colorectal cancer from Germany shows a strong protective effect of screening, but inverse probability weighting makes this effect even stronger. Targeted screening approaches based on either fixed risk factors or family history yield somewhat greater reductions in cancer incidence with fewer screens needed to prevent one cancer than population-wide approaches, but the differences may not be large enough to justify the additional effort required. See video abstract at, http://links.lww.com/EDE/B207.
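A compact sketch of the weighting step: estimate the propensity of screening from the risk covariates, weight each subject by the inverse probability of the exposure actually received, and compare weighted outcome means. Variable names are hypothetical; in the simulated-family setting the covariates would include screening history and familial risk.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_effect(screened, outcome, covariates):
        # Propensity of being screened given covariates.
        model = LogisticRegression(max_iter=1000).fit(covariates, screened)
        p = model.predict_proba(covariates)[:, 1]
        # Inverse probability of the exposure actually received.
        w = np.where(screened == 1, 1.0 / p, 1.0 / (1.0 - p))
        treated = np.average(outcome[screened == 1], weights=w[screened == 1])
        control = np.average(outcome[screened == 0], weights=w[screened == 0])
        return treated - control   # weighted risk difference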
He, M; Wang, H L; Yan, J Y; Xu, S W; Chen, W; Wang, J
2018-05-01
Objective: To compare the efficiency of the transhepatic hilar approach and the conventional approach for the surgical treatment of Bismuth type Ⅲ and Ⅳ hilar cholangiocarcinoma. Methods: There were 42 consecutive patients with hilar cholangiocarcinoma of Bismuth type Ⅲ and Ⅳ who underwent surgical treatment at the Department of Biliary-Pancreatic Surgery, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University from January 2008 to December 2013. The transhepatic hilar approach was used in 19 patients and the conventional approach in 23 patients. There were no differences in clinical parameters between the two groups (all P>0.05). The t-test was used to analyze measurement data, and the χ² test was used to analyze count data. Kaplan-Meier analysis was used to analyze survival, and multivariate Cox regression analysis was used to analyze prognostic factors. Results: Among the 19 patients who underwent the transhepatic hilar approach, 3 patients had their operative plan changed after reevaluation on exposure of the hepatic hilus. Intraoperative blood loss was 300 (250-400) ml in the transhepatic hilar approach group, significantly less than the 800 (450-1,300) ml in the conventional approach group (t=4.276, P=0.001). Meanwhile, the R0 resection rate was significantly higher in the transhepatic hilar approach group than in the conventional approach group (89.4% vs. 52.2%; χ²=6.773, P=0.009), and the 3-year and 5-year cumulative survival rates were better in the transhepatic hilar approach group than in the conventional approach group (63.2% vs. 47.8% and 26.3% vs. 0; χ²=66.363, 127.185, P=0.000). On univariate analysis, the transhepatic hilar approach, intraoperative blood loss, intraoperative blood transfusion, R0 resection and lymph node metastasis were significant risk factors for patient survival (all P<0.05). On multivariate analysis, use of the transhepatic hilar approach, intraoperative blood loss, R0 resection and lymph node metastasis were significant independent risk factors for patient survival (all P<0.05). Conclusion: The transhepatic hilar approach is the preferred technique for the surgical treatment of hilar cholangiocarcinoma because it can improve the accuracy of surgical planning, the safety of the operation, the R0 resection rate and the survival rate compared with the conventional approach.
Growing up and Growing out: Emerging Adults Learn Management through Service-Learning
ERIC Educational Resources Information Center
Fairfield, Kent D.
2010-01-01
This article describes a journey introducing service-learning based on large-scale projects in an undergraduate management curriculum, leading to supplementing this approach with more conventional small-group projects. It outlines some of the foundation for service-learning. Having students undertake a single class-wide project offers distinctive…
ERIC Educational Resources Information Center
Asare, Samuel; Daniel, Ben Kei
2018-01-01
Students' feedback on teaching activities contributes significantly to enhancing the quality of teaching and learning. Conventionally, students evaluate teaching activities through paper-based systems, where they fill out and return paper copies of teaching or course evaluations. In the last decades, institutions are moving student…
MAP-Motivated Carrier Synchronization of GMSK Based on the Laurent AMP Representation
NASA Technical Reports Server (NTRS)
Simon, M. K.
1998-01-01
Using the MAP estimation approach to carrier synchronization of digital modulations containing ISI together with a two pulse stream AMP representation of GMSK, it is possible to obtain an optimum closed loop configuration in the same manner as has been previously proposed for other conventional modulations with ISI.
Leap-frog-based BPM (LF-BPM) method for solving nanophotonic structures
NASA Astrophysics Data System (ADS)
Ayoub, Ahmad B.; Swillam, Mohamed A.
2018-02-01
In this paper, we propose an efficient approach to solving the BPM equation. By splitting the complex field into real and imaginary parts, the method is shown to be at least 30% faster than the conventional BPM. The method was tested on several optical components to verify its accuracy.
ERIC Educational Resources Information Center
Riedler, Martina; Eryaman, Mustafa Yunus
2016-01-01
There is consensus in the literature that teacher education programs exhibit the characteristics of complex systems. These characteristics of teacher education programs as complex systems challenges the conventional, teacher-directed/ textbook-based positivist approaches in teacher education literature which has tried to reduce the complexities…
Automated Management Of Documents
NASA Technical Reports Server (NTRS)
Boy, Guy
1995-01-01
Report presents main technical issues involved in computer-integrated documentation. Problems associated with automation of management and maintenance of documents analyzed from perspectives of artificial intelligence and human factors. Technologies that may prove useful in computer-integrated documentation reviewed: these include conventional approaches to indexing and retrieval of information, use of hypertext, and knowledge-based artificial-intelligence systems.
The "Spread" of Merit-Based College Aid: Politics, Policy Consortia, and Interstate Competition
ERIC Educational Resources Information Center
Cohen-Vogel, Lora; Ingle, William Kyle; Levine, Amy Albee; Spence, Matthew
2008-01-01
Many political scientists maintain that public policies diffuse across states and that proximate states, in particular, influence one another's policy activities. Using state-funded merit aid for college as its case, this article takes a new approach to the study of the diffusion phenomenon, leaving behind conventional techniques used by…
Creating Democratic Citizenship through Drama Education: The Writings of Jonothan Neelands
ERIC Educational Resources Information Center
O'Connor, Peter, Ed.
2010-01-01
This selection of the seminal texts of Jonothan Neelands is essential reading for everyone involved in drama education. It showcases the classroom participatory democracy through ensemble based theatre education which Neelands developed over 25 years. Readers will find: (1) Neelands' development in the 1980s of the conventions approach which made…
A Protein in the Palm of Your Hand through Augmented Reality
ERIC Educational Resources Information Center
Berry, Colin; Board, Jason
2014-01-01
Understanding of proteins and other biological macromolecules must be based on an appreciation of their 3-dimensional shape and the fine details of their structure. Conveying these details in a clear and stimulating fashion can present challenges using conventional approaches and 2-dimensional monitors and projectors. Here we describe a method for…
A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.
1980-01-01
[Figure 7-1: MUFFIN logical architecture (A = A Cell, D = D Cell); remainder of this extracted page is garbled.]
Hearing Silence: Toward a Mixed-Method Approach for Studying Genres' Exclusionary Potential
ERIC Educational Resources Information Center
Randazzo, Chalice
2015-01-01
Traditional Rhetorical Genre Study (RGS) methods are not well adapted to study exclusion because excluded information and people are typically absent from the genre, and some excluded information is simply unrelated to the genre because of genre conventions or social context. Within genre-based silences, how can scholars differentiate between an…
ERIC Educational Resources Information Center
Wolfgang, Jeff; Frazier, Kimberly; West-Olatunji, Cirecie; Barrett, Joe
2011-01-01
As counselors turn their attention to child-based counseling, there is a need to apply the core tenets of the discipline of counseling to young children and incorporate cross-cultural issues into clinical competence. Using Multicultural Counseling Theory (MCT), the authors discuss conventional approaches to providing clinical interventions for…
Heritability in Cognitive Performance: Evidence Using Computer-Based Testing
ERIC Educational Resources Information Center
Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas
2012-01-01
There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…
Effects of two citrus-based commercial herbicides on giant reed, Arundo donax L. (Poaceae)
USDA-ARS?s Scientific Manuscript database
The giant reed, Arundo donax L. (Poaceae), is an invasive weed pest in the United States and other parts of the world, particularly in riparian habitats where it can hinder the flow of water and choke out indigenous vegetation. Conventional approaches to controlling A. donax have not been particular...
Tapia-Orozco, Natalia; Santiago-Toledo, Gerardo; Barrón, Valeria; Espinosa-García, Ana María; García-García, José Antonio; García-Arrazola, Roeb
2017-04-01
Environmental epigenomics is a developing field that studies the epigenetic effects on human health of exposure to environmental factors. Endocrine disrupting chemicals have been detected primarily in pharmaceutical drugs, personal care products, food additives, and food containers. Exposure to endocrine-disrupting chemicals (EDCs) has been associated with a high incidence and prevalence of many endocrine-related disorders in humans. Nevertheless, further evidence is needed to establish a correlation between exposure to EDCs and human disorders. Conventional detection of EDCs is based on analysis of the chemical structure and concentration in samples. However, substantial evidence has emerged suggesting that cell exposure to EDCs leads to epigenetic changes independently of chemical structure, with non-monotonic low-dose responses. Consequently, a paradigm shift in the toxicology assessment of EDCs is proposed, based on a comprehensive review of the analytical techniques used to evaluate the epigenetic effects. Fundamental insights reported elsewhere are compared in order to establish DNA methylation analysis as a viable method for assessing endocrine disruptors beyond the conventional study approach of chemical structure and concentration analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Fundamentals and techniques of nonimaging optics research
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1987-07-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so-called Flow-Line or trumpet concentrator derived from the geometric vector flux formalism developed under this program. Application of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. Present efforts can be classed into two main areas: (1) classical geometrical nonimaging optics, and (2) logical extensions of nonimaging concepts to the physical optics domain.
Fundamentals and techniques of nonimaging optics research at the University of Chicago
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1986-11-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPC's) and their variants and the so-called Flow-Line concentrator derived from the geometric vector flux formalism developed under this program. Application of these and other such ideal or near-ideal devices permits increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. In the most recent phase, our efforts can be classed into two main areas: (a) 'classical' geometrical nonimaging optics; and (b) logical extensions of nonimaging concepts to the physical optics domain.
NASA Technical Reports Server (NTRS)
Manson, S. S.; Halford, G. R.
1981-01-01
Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.
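A small sketch contrasting the two bookkeeping schemes for a block-loading history given as (applied cycles, life) pairs; for the double linear damage rule, the Phase I and Phase II lives N1 and N2 are assumed to come from the paper's analytical expressions, and the carry-over of a block that straddles the phase boundary is a pro-rating assumption.

    def linear_damage_failed(blocks):
        # Conventional linear (Miner) rule: blocks is [(n, N), ...];
        # failure is predicted once the cycle-ratio sum reaches 1.
        return sum(n / N for n, N in blocks) >= 1.0

    def double_linear_damage_failed(blocks):
        # blocks is [(n, N1, N2), ...] with Phase I and Phase II lives.
        d, in_phase2 = 0.0, False
        for n, N1, N2 in blocks:
            if not in_phase2:
                d += n / N1
                if d >= 1.0:
                    # Phase I complete; convert this block's leftover
                    # cycles into Phase II cycle ratios (assumption).
                    d, in_phase2 = (d - 1.0) * N1 / N2, True
            else:
                d += n / N2
        return in_phase2 and d >= 1.0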
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Hassib, Lamyaa
2005-06-01
Multicomponent polymer-based formulations of optical sensor materials are difficult and time consuming to optimize using conventional approaches. To address these challenges, our long-term goal is to determine relationships between sensor formulation and sensor response parameters using new scientific methodologies. As a first step, we have designed and implemented an automated analytical instrumentation infrastructure for combinatorial and high-throughput development of polymeric sensor materials for optical sensors. Our approach is based on the fabrication and performance screening of discrete and gradient sensor arrays. Simultaneous formation of multiple sensor coatings into discrete 4×6, 6×8, and 8×12 element arrays (3-15 μL volume per element) and their screening not only provides a well-recognized acceleration in the screening rate, but also considerably reduces or even eliminates sources of variability that randomly affect sensor response during conventional one-at-a-time sensor coating evaluation. The application of gradient sensor arrays provides additional capabilities for rapidly finding the optimal formulation parameters.
Finite element analysis of container ship's cargo hold using ANSYS and POSEIDON software
NASA Astrophysics Data System (ADS)
Tanny, Tania Tamiz; Akter, Naznin; Amin, Osman Md.
2017-12-01
Ship structural analysis has become an integral part of preliminary ship design, providing further support for the development and detail design of ship structures. Structural analyses of container ship cargo holds are carried out to balance safety and capacity, as these ships are exposed to a high risk of structural damage during voyages. Two different design methodologies have been considered for the structural analysis of a container ship's cargo hold: a rule-based methodology and a more conventional software-based analysis. The rule-based analysis is done with DNV GL's software POSEIDON, and the conventional package-based analysis with the ANSYS structural module. Both methods have been applied to analyze mechanical properties of the model such as total deformation, stress-strain distribution, von Mises stress, and fatigue, following different design bases and approaches, to provide guidance for further improvements in ship structural design.
The effect of a novel minimally invasive strategy for infected necrotizing pancreatitis.
Tong, Zhihui; Shen, Xiao; Ke, Lu; Li, Gang; Zhou, Jing; Pan, Yiyuan; Li, Baiqiang; Yang, Dongliang; Li, Weiqin; Li, Jieshou
2017-11-01
Step-up approaches consisting of multiple minimally invasive techniques have gradually become the mainstream for managing infected pancreatic necrosis (IPN). In the present study, we aimed to compare the safety and efficacy of a novel four-step approach and the conventional approach in managing IPN. According to the treatment strategy, consecutive patients fulfilling the inclusion criteria were assigned to two time intervals to conduct a before-and-after comparison: the conventional group (2010-2011) and the novel four-step group (2012-2013). The conventional approach was essentially open necrosectomy for any patient who failed percutaneous drainage of infected necrosis, whereas the novel approach consisted of four steps applied in sequence: percutaneous drainage, negative-pressure irrigation, endoscopic necrosectomy and open necrosectomy. The primary endpoint was major complications (new-onset organ failure, sepsis or local complications, etc.). Secondary endpoints included mortality during hospitalization, need for emergency surgery, duration of organ failure and sepsis, etc. Of the 229 recruited patients, 92 were treated with the conventional approach and the remaining 137 were managed with the novel four-step approach. New-onset major complications occurred in 72 patients (78.3%) in the conventional group and 75 patients (54.7%) in the four-step group (p < 0.001). For other important endpoints, although there was no statistical difference in mortality between the two groups (p = 0.403), significantly fewer patients in the four-step group required emergency surgery compared with the conventional group [14.6% (20/137) vs. 45.6% (42/92), p < 0.001]. In addition, stratified analysis revealed that the four-step group presented a significantly lower incidence of new-onset organ failure and other major complications in patients with the most severe type of acute pancreatitis (AP). Compared with the conventional approach, the novel four-step approach significantly reduced the rate of new-onset major complications and the requirement for emergency operations in treating IPN, especially in patients with the most severe type of AP.
A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques
NASA Astrophysics Data System (ADS)
Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk
While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, in which the number of positive unknown word candidates is dominantly smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to rank and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate the performance of the proposed group-based ranking evaluation approach, namely V-GRE, against the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top-ten candidates are considered; these are 8.45% and 6.79% improvements over the conventional record-based naïve Bayes classifier and the vanilla version, respectively. Further results using only the best features show 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, that is, 3.97% and 9.78% improvements over naïve Bayes and the vanilla version. Finally, an error analysis is given.
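As a rough illustration of the group-based re-weighting idea (not the paper's exact GRE formula), the toy below ranks one unknown word's candidates by the current model's score and weighs them by rank and correctness before the next model is trained; the weighting rule here is a placeholder assumption.

```python
# Toy group-based re-weighting: candidates of one unknown word form a group,
# are ranked by the current model's score, and are re-weighted by rank and
# correctness to build the training set for the next ensemble member.
def gre_weights(group):
    """group: list of (score, is_correct) pairs for one unknown word's candidates."""
    ranked = sorted(group, key=lambda c: -c[0])
    weights = []
    for rank, (_, correct) in enumerate(ranked, start=1):
        # Up-weight a correct candidate the further it falls from rank 1,
        # and up-weight high-ranked incorrect candidates (hard negatives).
        w = rank if correct else 1.0 / rank
        weights.append(w)
    return ranked, weights

# The correct candidate sits at rank 2, so it gets extra weight next round.
print(gre_weights([(0.9, False), (0.7, True), (0.2, False)]))
```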
Next-generation all-silica coatings for UV applications
NASA Astrophysics Data System (ADS)
Melninkaitis, A.; Grinevičiūtė, L.; Abromavičius, G.; Mažulė, L.; Smalakys, L.; Pupka, E.; Ščiuka, M.; Buzelis, R.; Kičas, S.
2017-11-01
Band-gap and refractive index are known as fundamental properties determining the intrinsic optical resistance of multilayer dielectric coatings. With this in mind, we propose a novel approach to the manufacture of interference thin films, based on artificial nanostructures of modulated porosity embedded in a high band-gap matrix. Next-generation all-silica mirrors were prepared by GLancing Angle Deposition (GLAD) using electron beam evaporation. High reflectivity (HR) was achieved by tailoring the porosity of the highly resistant silica material during the thin-film deposition process. The proposed approach was also demonstrated to work well for anti-reflection (AR) coatings. Conventional HR HfO2/SiO2 as well as AR Al2O3/SiO2 multilayers produced by Ion Beam Sputtering (IBS) were used as reference coatings. The damage performance of the experimental coatings was also analyzed: the all-silica GLAD approach resulted in a significant improvement of intrinsic laser damage resistance compared to the conventional coatings. Besides laser damage testing, other characteristics of the experimental coatings are analyzed and discussed, including reflectance, surface roughness and optical scattering. We believe the reported concept can be extended to virtually any thin-film coating design, opening a route to next-generation highly resistant thin films well suited for high-power and UV laser applications.
Large dynamic range terahertz spectrometers based on plasmonic photomixers (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wang, Ning; Javadi, Hamid; Jarrahi, Mona
2017-02-01
Heterodyne terahertz spectrometers are in high demand for space exploration and astrophysics studies. A conventional heterodyne terahertz spectrometer consists of a terahertz mixer that mixes a received terahertz signal with a local oscillator signal to generate an intermediate-frequency signal in the radio frequency (RF) range, where it can be easily processed and detected by RF electronics. Schottky diode mixers, superconductor-insulator-superconductor (SIS) mixers and hot electron bolometer (HEB) mixers are the most commonly used mixers in conventional heterodyne terahertz spectrometers. While conventional heterodyne terahertz spectrometers offer high spectral resolution and high detection sensitivity at cryogenic temperatures, their dynamic range and bandwidth are limited by the low radiation power of existing terahertz local oscillators and the narrow bandwidth of existing terahertz mixers. To address these limitations, we present a novel approach to heterodyne terahertz spectrometry based on plasmonic photomixing. The presented design replaces the terahertz mixer and local oscillator of conventional heterodyne terahertz spectrometers with a plasmonic photomixer pumped by an optical local oscillator. The optical local oscillator consists of two wavelength-tunable continuous-wave optical sources with a terahertz frequency difference. As a result, the spectrometry bandwidth and dynamic range of the presented heterodyne spectrometer are not limited by the radiation frequency and power restrictions of conventional terahertz sources. We demonstrate a proof-of-concept terahertz spectrometer with more than 90 dB dynamic range and 1 THz spectrometry bandwidth.
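A quick back-of-envelope check of the optical local oscillator described above: two continuous-wave lasers whose frequency difference falls in the terahertz range. The wavelengths below are illustrative assumptions.

```python
# Beat frequency of two CW lasers: the terahertz "local oscillator" in a
# photomixing spectrometer is simply their optical frequency difference.
c = 299_792_458.0                 # speed of light, m/s

lambda1 = 1550.0e-9               # first CW laser wavelength, m (illustrative)
lambda2 = 1558.1e-9               # second CW laser wavelength, m (illustrative)
f_beat = abs(c / lambda1 - c / lambda2)
print(f"beat (terahertz LO) frequency: {f_beat / 1e12:.2f} THz")  # ~1 THz
```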
Automated quantitative cytological analysis using portable microfluidic microscopy.
Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva
2016-06-01
In this article, a portable microfluidic microscopy-based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yeo, Junyeob; Hong, Sukjoon; Lee, Daehoo; Hotz, Nico; Lee, Ming-Tsang; Grigoropoulos, Costas P.; Ko, Seung Hwan
2012-01-01
Flexible electronics has opened a new class of future electronics. The foldable, light and durable nature of flexible electronics allows vast flexibility in applications such as displays, energy devices and mobile electronics. Even though conventional electronics fabrication methods are well developed for rigid substrates, neither direct application nor slight modification of these processes works for flexible electronics fabrication. Future flexible electronics fabrication requires totally new low-temperature processes optimized for flexible substrates, based on new materials as well. Here we present a simple approach to flexible electronics fabrication without conventional vacuum deposition and photolithography. We found that direct metal patterning based on laser-induced local melting of metal nanoparticle ink is a promising low-temperature alternative to vacuum deposition- and photolithography-based conventional metal patterning processes. The "digital" nature of the proposed direct metal patterning process removes the need for expensive photomasks and allows easy design modification and short turnaround times. This new process can be extremely useful for current small-volume, large-variety manufacturing paradigms. Moreover, its simple, scalable, fast and low-temperature character can lead to cost-effective fabrication on large-area polymer substrates. The developed process was successfully applied to demonstrate high-quality Ag patterning (2.1 µΩ·cm) and high-performance flexible organic field-effect transistor arrays. PMID:22900011
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Qiancheng; Li, Hongyuan; Zhang, Yi
2018-04-01
In marine seismic exploration, ghost energies (down-going waves), which arise from reflection at the sea surface, are treated as unwanted signals in data processing. The ghost wave fields interfere with the desired primary signals, leading to frequency notches and attenuation of low frequencies, which in turn degrade the resolution of the recorded seismic data. There are two main categories of methods to solve the ghost, or so-called notch, problem: non-conventional acquisition-configuration-based techniques and deghosting-algorithm-based solutions. The variable-depth streamer (VDS) acquisition solution is one of the most representative methods in the first category and has become a popular marine seismic acquisition solution for obtaining broad data bandwidth. However, this approach is not as economical as conventional constant-depth streamer (CDS) acquisition, owing to the precise control required of the towed streamer. In addition, there are large quantities of conventionally towed legacy data stored in data libraries. Applying receiver deghosting to CDS data is thus a more economical method. In theory, both types of data after deghosting should have the same bandwidth and S/N ratio, but in reality they differ. In this paper, we conduct a comparative study and evaluation applying receiver deghosting to a set of real 2D marine data including both types of acquisition (CDS and VDS) over the same geology. The deghosting algorithm we employed is a self-sustained, inversion-based approach operating in the τ-p domain. This evaluation helps answer two questions: whether VDS acquisition has more broadband characteristics than conventional CDS acquisition after deghosting, and whether we can achieve identical or similar data quality (e.g., S/N ratio) through a proper deghosting algorithm for both types of data. The comparative results are illustrated and discussed.
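The notch problem itself is easy to quantify: for a receiver towed at depth d, the surface ghost arrives with delay 2d/c at vertical incidence, putting spectral notches at multiples of c/(2d). The sketch below compares a constant-depth streamer with a deeper section of a variable-depth streamer; the depths are illustrative.

```python
# Ghost notch frequencies for a towed receiver (vertical incidence assumed):
# the ghost delay is 2d/c, so notches fall at f_n = n * c / (2 d).
c_water = 1500.0                       # acoustic speed in water, m/s

def notch_frequencies(depth_m, n_max=3):
    """First n_max ghost notches (Hz) for a receiver towed at depth_m."""
    return [n * c_water / (2.0 * depth_m) for n in range(1, n_max + 1)]

print(notch_frequencies(10.0))   # CDS at 10 m: notches at 75, 150, 225 Hz
print(notch_frequencies(25.0))   # deeper VDS section: notches at 30, 60, 90 Hz
```

Varying the depth along the streamer moves the notches across channels, which is why VDS acquisition preserves more usable bandwidth before any deghosting is applied.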
Microbial Burden Approach: New Monitoring Approach for Measuring Microbial Burden
NASA Technical Reports Server (NTRS)
Venkateswaran, Kasthuri; Vaishampayan, Parag; Barmatz, Martin
2013-01-01
Advantages of the new approach for differentiating live cells/spores from dead cells/spores. Four examples of Salmonella outbreaks leading to costly destruction of dairy products. List of possible collaboration activities between JPL and other industries (for future discussion). Limitations of traditional microbial monitoring approaches. Introduction to the new approach for rapid measurement of viable (live) bacterial cells/spores and its areas of application. Detailed example of determining live spores using the new approach (a similar procedure applies for determining live cells). JPL has developed a patented approach for measuring the amount of live and dead cells/spores. This novel "molecular" method takes 5 to 7 hours, compared to the seven days required using conventional techniques. Conventional "molecular" techniques cannot discriminate live cells/spores among dead cells/spores. The JPL-developed method eliminates the false positive results obtained from conventional "molecular" techniques, which lead to unnecessary processing delays and unnecessary destruction of food products.
NASA Astrophysics Data System (ADS)
Darma, I. K.
2018-01-01
This research aimed at determining: 1) the difference in mathematical problem-solving ability between students facilitated with the problem-based learning model and the conventional learning model; 2) the difference in mathematical problem-solving ability between students facilitated with the authentic and the conventional assessment model; and 3) the interaction effect between learning model and assessment model on mathematical problem solving. The research was conducted at Bali State Polytechnic using a 2×2 factorial experimental design. The sample comprised 110 students. The data were collected using a theoretically and empirically validated test; instruments were validated using Aiken's content-validity technique and item analysis, and the data were then analyzed using analysis of variance (ANOVA). The analysis shows that students facilitated with the problem-based learning and authentic assessment models obtained the highest average scores, both in concept understanding and in mathematical problem solving. The hypothesis tests show that, significantly: 1) there is a difference in mathematical problem-solving ability between students facilitated with the problem-based learning model and the conventional learning model; 2) there is a difference in mathematical problem-solving ability between students facilitated with the authentic assessment model and the conventional assessment model; and 3) there is an interaction effect between learning model and assessment model on mathematical problem solving. In order to improve the effectiveness of mathematics learning, the combination of the problem-based learning model and the authentic assessment model can be considered as one of the learning models in class.
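The 2×2 factorial analysis described above can be sketched with a standard two-way ANOVA. The sketch below uses statsmodels with an illustrative stand-in dataframe (not the study's data), testing the two main effects and their interaction.

```python
# Two-way ANOVA for a 2x2 factorial design: learning model x assessment model.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative scores; two observations per cell so the interaction is estimable.
df = pd.DataFrame({
    "score":      [72, 80, 65, 70, 85, 90, 68, 74],
    "learning":   ["PBL", "PBL", "conv", "conv"] * 2,
    "assessment": ["authentic"] * 4 + ["conventional"] * 4,
})

model = smf.ols("score ~ C(learning) * C(assessment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term
```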
Liquid-glass transition in equilibrium
NASA Astrophysics Data System (ADS)
Parisi, G.; Seoane, B.
2014-02-01
We show in numerical simulations that a system of two coupled replicas of a binary mixture of hard spheres undergoes a phase transition in equilibrium at a density slightly smaller than the glass transition density for an unreplicated system. This result is in agreement with the theories that predict that such a transition is a precursor of the standard ideal glass transition. The critical properties are compatible with those of an Ising system. The relations of this approach to the conventional approach based on configurational entropy are briefly discussed.
Development of immune-diagnostic reagents to diagnose bovine tuberculosis in cattle.
Vordermeier, H Martin; Jones, Gareth J; Buddle, Bryce M; Hewinson, R Glyn
2016-11-15
Bovine tuberculosis remains a major economic and animal welfare concern worldwide. As part of control strategies, cattle vaccination is being considered. This approach, used alongside conventional control policies, also requires the development of vaccine compatible diagnostic assays to distinguish infected from vaccinated animals (DIVA). In this review we discuss recent advances in DIVA development based on the detection of host cellular immune responses by blood testing or skin testing approaches. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Masevicius, Fabio D; Dubin, Arnaldo
2015-02-04
The Stewart approach, the application of basic physical-chemical principles of aqueous solutions to blood, is an appealing method for analyzing acid-base disorders. These principles mainly dictate that pH is determined by three independent variables, which change primarily and independently of one another. In blood plasma in vivo these variables are: (1) the PCO2; (2) the strong ion difference (SID), the difference between the sums of all the strong (i.e., fully dissociated, chemically nonreacting) cations and all the strong anions; and (3) the nonvolatile weak acids (Atot). Accordingly, the pH and bicarbonate levels (dependent variables) are altered only when one or more of the independent variables change. Moreover, the source of H(+) is the dissociation of water to maintain electroneutrality when the independent variables are modified. The basic principles of the Stewart approach in blood, however, have been challenged in different ways. First, the presumed independent variables are actually interdependent, as occurs in situations such as: (1) the Hamburger effect (a chloride shift when CO2 is added to venous blood from the tissues); (2) the loss of Donnan equilibrium (a chloride shift from the interstitium to the intravascular compartment to balance the decrease of Atot secondary to capillary leak); and (3) the compensatory response to a primary disturbance in either independent variable. Second, the concept of water dissociation in response to changes in SID is controversial and lacks experimental evidence. In addition, the Stewart approach is no better than the conventional method for understanding acid-base disorders such as hyperchloremic metabolic acidosis secondary to a chloride-rich fluid load. Finally, several attempts have been made to demonstrate the clinical superiority of the Stewart approach; these studies, however, have severe methodological drawbacks. In contrast, the largest study on this issue indicated the interchangeability of the Stewart and conventional methods. Although the introduction of the Stewart approach offered a new insight into acid-base physiology, the method has not significantly improved our ability to understand, diagnose, and treat acid-base alterations in critically ill patients.
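As a concrete illustration of the strong ion difference defined above, the sketch below computes the apparent SID from routine plasma electrolytes; the electrolyte values (in mEq/L) are illustrative assumptions, and lactate is treated as a strong anion.

```python
# Apparent strong ion difference (SIDa): strong cations minus strong anions.
# Values are illustrative plasma concentrations in mEq/L.
def sid_apparent(na, k, ca, mg, cl, lactate):
    return (na + k + ca + mg) - (cl + lactate)

# A roughly normal sample gives an SIDa near 40 mEq/L; a chloride-rich fluid
# load narrows the difference, which the Stewart approach reads as acidosis.
print(sid_apparent(na=140, k=4.0, ca=2.5, mg=1.0, cl=105, lactate=1.0))  # 41.5
print(sid_apparent(na=140, k=4.0, ca=2.5, mg=1.0, cl=115, lactate=1.0))  # 31.5
```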
Multi-mode sliding mode control for precision linear stage based on fixed or floating stator.
Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei
2016-02-01
This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable: it can be configured as a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC) scheme, comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth function is adopted instead of the signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
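A minimal sketch of the two ideas in this abstract, assuming a simple second-order stage model: the controller switches from a conventional sliding surface to an integral one when the error falls below a threshold, and a smooth tanh reaching law replaces the signum function to suppress chattering. All gains, the boundary-layer width, and the threshold are illustrative assumptions, not the paper's tuned values.

```python
# Multi-mode sliding mode control sketch: SMC far from the target, ISMC near it,
# with a tanh boundary layer instead of the chattering-prone signum function.
import math

K, LAM, KI = 5.0, 20.0, 50.0      # switching gain, surface slope, integral gain
PHI, E_TH = 0.01, 1e-4            # boundary-layer width, mode-switch threshold

def mmsmc_control(e, e_dot, e_int):
    """e: position error, e_dot: its derivative, e_int: its integral."""
    if abs(e) > E_TH:
        s = e_dot + LAM * e                 # conventional sliding surface
    else:
        s = e_dot + LAM * e + KI * e_int    # integral surface near the target
    return -K * math.tanh(s / PHI)          # smooth reaching law (no signum)

print(mmsmc_control(e=0.002, e_dot=-0.01, e_int=0.0))
```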
New Technologies for Rapid Bacterial Identification and Antibiotic Resistance Profiling.
Kelley, Shana O
2017-04-01
Conventional approaches to bacterial identification and drug susceptibility testing typically rely on culture-based approaches that take 2 to 7 days to return results. The long turnaround times contribute to the spread of infectious disease, negative patient outcomes, and the misuse of antibiotics that can contribute to antibiotic resistance. To provide new solutions enabling faster bacterial analysis, a variety of approaches are under development that leverage single-cell analysis, microfluidic concentration and detection strategies, and ultrasensitive readout mechanisms. This review discusses recent advances in this area and the potential of new technologies to enable more effective management of infectious disease.
Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi
2017-02-01
Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10 s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
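The discriminant step described above is a plain logistic regression over the remotely sensed vital signs. The sketch below shows the form of such a predictor; the coefficients are illustrative placeholders, not the values fitted in the clinical study.

```python
# Logistic discriminant over three remotely sensed vital signs.
# Coefficients b0..bh are illustrative, not the study's fitted parameters.
import math

def infection_probability(temp_c, resp_rate, heart_rate,
                          b0=-85.0, bt=1.9, br=0.35, bh=0.12):
    z = b0 + bt * temp_c + br * resp_rate + bh * heart_rate
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid maps score to probability

# A febrile, tachypneic, tachycardic subject vs. an afebrile control.
print(infection_probability(temp_c=37.8, resp_rate=24, heart_rate=102))  # ~0.999
print(infection_probability(temp_c=35.9, resp_rate=14, heart_rate=68))   # ~0.02
```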
USGS AK Gas Hydrate Assessment Team: Collett, Timothy S.; Agena, Warren F.; Lee, Myung Woong; Lewis, Kristen A.; Zyrianova, Margarita V.; Bird, Kenneth J.; Charpentier, Ronald R.; Cook, Troy A.; Houseknecht, David W.; Klett, Timothy R.; Pollastro, Richard M.
2014-01-01
Scientists with the U.S. Geological Survey have completed the first assessment of the undiscovered, technically recoverable gas hydrate resources beneath the North Slope of Alaska. This assessment indicates the existence of technically recoverable gas hydrate resources—that is, resources that can be discovered, developed, and produced using current technology. The approach used in this assessment followed standard geology-based USGS methodologies developed to assess conventional oil and gas resources. In order to use the USGS conventional assessment approach on gas hydrate resources, three-dimensional industry-acquired seismic data were analyzed. The analyses indicated that the gas hydrates on the North Slope occupy limited, discrete volumes of rock bounded by faults and downdip water contacts. This assessment approach also assumes that the resource can be produced by existing conventional technology, on the basis of limited field testing and numerical production models of gas hydrate-bearing reservoirs. The area assessed in northern Alaska extends from the National Petroleum Reserve in Alaska on the west through the Arctic National Wildlife Refuge on the east and from the Brooks Range northward to the State-Federal offshore boundary (located 3 miles north of the coastline). This area consists mostly of Federal, State, and Native lands covering 55,894 square miles. Using the standard geology-based assessment methodology, the USGS estimated that the total undiscovered technically recoverable natural-gas resources in gas hydrates in northern Alaska range between 25.2 and 157.8 trillion cubic feet, representing 95 percent and 5 percent probabilities of greater than these amounts, respectively, with a mean estimate of 85.4 trillion cubic feet.
Lab Plays Central Role in Groundbreaking National Clinical Trial in Precision Medicine | Poster
The Molecular Characterization Laboratory lies at the heart of an ambitious new approach for testing cancer drugs that will use the newest tools of precision medicine to select the best treatment for individual patients based on the genetic makeup of their tumors. The protocol, called NCI-Molecular Analysis for Therapy Choice (NCI-MATCH), will start with tumor biopsies from as many as 3,000 patients to see if they have genetic defects for which a targeted cancer drug is available. Cancers will be treated based on their genetic profiles rather than by their location in the body, which is the conventional approach.
Branes in Extended Spacetime: Brane Worldvolume Theory Based on Duality Symmetry.
Sakatani, Yuho; Uehara, Shozo
2016-11-04
We propose a novel approach to the brane worldvolume theory based on the geometry of extended field theories: double field theory and exceptional field theory. We demonstrate the effectiveness of this approach by showing that one can reproduce the conventional bosonic string and membrane actions, and the M5-brane action in the weak-field approximation. At a glance, the proposed 5-brane action without approximation looks different from the known M5-brane actions, but it is consistent with the known nonlinear self-duality relation, and it may provide a new formulation of a single M5-brane action. Actions for exotic branes are also discussed.
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel-based iso-tumor control probability (TCP) maps and iso-complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an equivalent uniform dose (EUD) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over isodose maps. Conventional uniform IMRT yielded a more uniform isodose map across the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved with conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-complication maps will become an important tool to assess the integrity of such treatment plans.
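A voxel-wise TCP map can be sketched with any phenomenological dose-response model. The example below uses a logistic model parameterized by D50 and gamma50 (both illustrative, not the paper's fitted values) and shows how a uniform isodose map can still hide a TCP cold spot when a subvolume is radioresistant.

```python
# Voxel-wise TCP from a phenomenological logistic dose-response model.
import numpy as np

def voxel_tcp(dose, d50=70.0, gamma50=2.0):
    """Logistic TCP per voxel: TCP = 1 / (1 + (D50/D)^(4*gamma50))."""
    return 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-9)) ** (4.0 * gamma50))

dose_map = np.full((2, 2), 82.0)                   # uniform 82 Gy "isodose" map
d50_map = np.array([[70.0, 70.0], [70.0, 80.0]])   # one radioresistant subvoxel

print(voxel_tcp(dose_map).round(3))                # uniform TCP everywhere
print(voxel_tcp(dose_map, d50=d50_map).round(3))   # TCP cold spot appears
```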
A Novel Range-Extended Strategy for Fuel Cell/Battery Electric Vehicles.
Hwang, Jenn-Jiang; Hu, Jia-Sheng; Lin, Chih-Hong
2015-01-01
The range-extended electric vehicle is proposed to alleviate the range anxiety drivers have about electric vehicles. Conventionally, a gasoline/diesel generator extends the range of an electric vehicle; in light of zero-CO2 emission stipulations, utilizing fuel cells as generators has gained attention in society. This paper presents a novel charging strategy for fuel cell/battery electric vehicles. In comparison with conventional switch control, a fuzzy control approach is employed to enhance the battery's state of charge (SOC). This approach mitigates the rapid depletion of the system's SOC and can thus achieve an extended driving range. Smooth steering experience and range extension are the main indexes for the development of the fuzzy rules, which are based chiefly on energy management in the urban driving model. The entire control system is evaluated by simulation, which demonstrates its effectiveness and feasibility.
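To make the fuzzy charging idea concrete, the toy below maps battery SOC and power demand to a fuel-cell output command through two hand-written rules with triangular memberships and weighted-average defuzzification. The membership shapes, rule table, and output levels are assumptions for illustration, not the paper's rule base.

```python
# Toy fuzzy energy-management rule base for a fuel cell/battery vehicle.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fc_power_command(soc, demand_kw):
    low_soc = tri(soc, 0.0, 0.3, 0.6)
    high_soc = tri(soc, 0.4, 0.8, 1.2)
    high_dem = tri(demand_kw, 10.0, 30.0, 50.0)
    # Rule 1: low SOC -> high FC output. Rule 2: high SOC and high demand -> medium.
    w1, w2 = low_soc, min(high_soc, high_dem)
    outputs = {40.0: w1, 20.0: w2}                    # kW output singletons
    total = sum(outputs.values()) or 1.0
    return sum(p * w for p, w in outputs.items()) / total  # weighted average

print(fc_power_command(soc=0.25, demand_kw=25.0))     # low SOC: FC ramps up
```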
Gunčar, Gregor; Wang, Ching-I A.; Forwood, Jade K.; Teh, Trazel; Catanzariti, Ann-Maree; Ellis, Jeffrey G.; Dodds, Peter N.; Kobe, Boštjan
2007-01-01
Metal-binding sites are ubiquitous in proteins and can be readily utilized for phasing. It is shown that a protein crystal structure can be solved using single-wavelength anomalous diffraction based on the anomalous signal of a cobalt ion measured on a conventional monochromatic X-ray source. The unique absorption edge of cobalt (1.61 Å) is compatible with the Cu Kα wavelength (1.54 Å) commonly available in macromolecular crystallography laboratories. This approach was applied to the determination of the structure of Melampsora lini avirulence protein AvrL567-A, a protein with a novel fold from the fungal pathogen flax rust that induces plant disease resistance in flax plants. This approach using cobalt ions may be applicable to all cobalt-binding proteins and may be advantageous when synchrotron radiation is not readily available. PMID:17329816
Unambiguous formalism for higher order Lagrangian field theories
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; de León, Manuel; Martín de Diego, David; Vankerschaver, Joris
2009-11-01
The aim of this paper is to propose an unambiguous intrinsic formalism for higher-order field theories which avoids the arbitrariness that arises in generalizing the conventional description of field theories and that implies the existence of different Cartan forms and Legendre transformations. We propose a differential-geometric setting for the dynamics of a higher-order field theory, based on the Skinner and Rusk formalism for mechanics. This approach incorporates aspects of both the Lagrangian and the Hamiltonian description, since the field equations are formulated using the Lagrangian on a higher-order jet bundle and the canonical multisymplectic form on its affine dual. As both of these objects are uniquely defined, the Skinner-Rusk approach has the advantage that it does not suffer from the arbitrariness of conventional descriptions. The result is a unique and global intrinsic version of the Euler-Lagrange equations for higher-order field theories. Several examples illustrate our construction.
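For orientation, the equations this intrinsic construction recovers have, in local coordinates, the standard higher-order Euler-Lagrange form (this display is a reminder in multi-index notation, not reproduced from the paper):

```latex
% Higher-order Euler-Lagrange equations in local coordinates: J is a
% multi-index over derivatives of the fields u^alpha up to order k.
\[
  \sum_{|J| \le k} (-1)^{|J|} \, \frac{d^{|J|}}{dx^{J}}
  \left( \frac{\partial L}{\partial u^{\alpha}_{J}} \right) = 0 ,
  \qquad \alpha = 1, \dots, m .
\]
```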
Boonsiriseth, K; Sirintawat, N; Arunakul, K; Wongsirichat, N
2013-07-01
This study aimed to evaluate the efficacy of anesthesia obtained with a novel injection approach for inferior alveolar nerve block compared with the conventional injection approach. 40 patients in good health randomly received each of the two injection approaches of local anesthetic, one on each side of the mandible at two separate appointments. A sharp probe and an electric pulp tester were used to test anesthesia before injection, after injection when the patient's sensation changed, and 5 min after injection. Positive aspiration with intravascular injection occurred in 5% and neurovascular bundle injection in 7.5% of the conventional inferior alveolar nerve blocks, with no occurrences in the novel injection approach. A visual analog scale (VAS) pain assessment was used during injection and surgery. The significance level used in the statistical analysis was p<0.05. Comparing the novel injection approach with the conventional one, no significant difference was found in subjective onset, objective onset, operation time, duration of anesthesia, or VAS pain score during the operation, but the VAS pain scores during injection differed significantly. The inferior alveolar nerve block by the novel injection approach provided adequate anesthesia and caused less pain and greater safety during injection. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
[New anterolateral approach of distal femur for treatment of distal femoral fractures].
Zhang, Bin; Dai, Min; Zou, Fan; Luo, Song; Li, Binhua; Qiu, Ping; Nie, Tao
2013-11-01
To assess the effectiveness of the new anterolateral approach to the distal femur for the treatment of distal femoral fractures. Between July 2007 and December 2009, 58 patients with distal femoral fractures were treated: 28 by the new anterolateral approach to the distal femur (new approach group) and 30 by the conventional approach (conventional approach group). There was no significant difference in gender, age, cause of injury, affected side, type of fracture, disease duration, complications, or preoperative intervention (P > 0.05). The operation time, intraoperative blood loss, intraoperative fluoroscopy frequency, hospitalization days, and Hospital for Special Surgery (HSS) knee score were recorded. Operations were successfully completed in all patients of both groups, with healing of the incision by first intention; no vascular or nerve injuries occurred. The operation time and intraoperative fluoroscopy frequency of the new approach group were significantly less than those of the conventional approach group (P < 0.05), but intraoperative blood loss and hospitalization days showed no significant difference between the two groups (P > 0.05). All patients were followed up for 12-36 months (mean, 19.8 months). Bone union was shown on X-ray films; the fracture healing time was (12.62 +/- 2.34) weeks in the new approach group and (13.78 +/- 1.94) weeks in the conventional approach group, showing no significant difference (t=2.78, P=0.10). The knee HSS score at last follow-up was 94.4 +/- 4.2 in the new approach group and 89.2 +/- 6.0 in the conventional approach group, a significant difference between the two groups (t=3.85, P=0.00). The new anterolateral approach to the distal femur for distal femoral fractures has the advantages of full exposure, minimal tissue trauma, and early functional rehabilitation training, enhancing the functional recovery of the knee joint.
Performance analysis of a finite radon transform in OFDM system under different channel models
NASA Astrophysics Data System (ADS)
Dawood, Sameer A.; Malek, F.; Anuar, M. S.; Fayadh, Rashid A.; Abdullah, Farrah Salwani
2015-05-01
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), is proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase-shift mapping and quadrature amplitude mapping usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that increases the orthogonality of the system. The Fourier-domain approach was found to be the more suitable way of obtaining the forward and inverse FRAT, and this structure results in a more suitable realization of conventional FFT-OFDM. It is shown that this application increases the orthogonality significantly, due to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and also due to the use of an efficient algorithm for determining the FRAT coefficients, called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM for an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency-selective fading channel. The results show that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), compared with the conventional OFDM system.
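For reference, the conventional FFT-OFDM baseline that the proposed FRAT mapper is compared against can be sketched in a few lines: QPSK mapping, IFFT sub-carrier modulation, a cyclic prefix, and FFT demodulation over an AWGN channel. The FRAT mapper itself is not reproduced here, and the sizes below are illustrative.

```python
# Conventional FFT-OFDM over an AWGN channel: QPSK -> IFFT -> CP -> FFT.
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                                   # subcarriers, cyclic prefix

bits = rng.integers(0, 2, size=2 * N)
sym = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)   # QPSK

tx = np.fft.ifft(sym) * np.sqrt(N)               # sub-carrier modulation
tx_cp = np.concatenate([tx[-CP:], tx])           # prepend cyclic prefix

noise = 0.05 * (rng.normal(size=tx_cp.shape) + 1j * rng.normal(size=tx_cp.shape))
rx = tx_cp + noise                               # AWGN channel

est = np.fft.fft(rx[CP:]) / np.sqrt(N)           # strip CP, demodulate
bit_est = np.empty(2 * N, dtype=int)
bit_est[0::2] = (est.real < 0).astype(int)       # hard-decision QPSK demapping
bit_est[1::2] = (est.imag < 0).astype(int)
print("bit errors:", int(np.sum(bit_est != bits)))
```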
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...
2017-05-03
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
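A minimal sketch of the kind of CNN text classifier described above (token embedding, 1-D convolution, max pooling, softmax over topography codes), written with Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the paper's architecture.

```python
# CNN text classifier sketch for report-level code assignment.
from tensorflow.keras import layers, models

VOCAB, MAXLEN, N_CODES = 20000, 1500, 12      # 12 ICD-O-3 topography codes

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),            # token-id sequence per report
    layers.Embedding(VOCAB, 128),             # learned word embeddings
    layers.Conv1D(128, kernel_size=5, activation="relu"),  # n-gram features
    layers.GlobalMaxPooling1D(),              # strongest feature per filter
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(token_ids, code_labels, ...)  # given tokenized reports and labels
```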
Heuts, Samuel; Maessen, Jos G; Sardari Nia, Peyman
2016-05-01
With the emergence of a new concept aimed at individualization of patient care, the focus will shift from whether a minimally invasive procedure is better than conventional treatment to the question of which patients will benefit most from which technique. The superiority of minimally invasive valve surgery (MIVS) has not yet been proved; we believe that through better patient selection the advantages of this technique can become more pronounced. In the current study, we evaluate the feasibility of 3D computed tomography (CT) imaging reconstruction in the preoperative planning of patients referred for MIVS. We retrospectively analysed all consecutive patients who were referred for minimally invasive mitral valve surgery (MIMVS) and minimally invasive aortic valve replacement (MIAVR) to a single surgeon in a tertiary referral centre for MIVS between March 2014 and March 2015. Prospective preoperative planning was done for all patients and was based on evaluations by a multidisciplinary heart team, echocardiography, conventional CT images and 3D CT reconstruction models. A total of 39 patients were included in our study: 16 for mitral valve surgery (MVS) and 23 for aortic valve replacement (AVR). Eleven patients (69%) within the MVS group underwent MIMVS. Five patients (31%) underwent conventional MVS. Findings leading to exclusion from MIMVS were a tortuous or slender femoro-iliac tract, calcification of the aortic bifurcation, aortic elongation and pericardial calcifications. Furthermore, 2 patients had a change of operative strategy based on preoperative planning. Seventeen patients (74%) in the AVR group underwent MIAVR. Six patients (26%) underwent conventional AVR. Indications for conventional AVR instead of MIAVR were an elongated ascending aorta, ascending aortic calcification and ascending aortic dilatation. One patient (6%) in the MIAVR group was converted to a sternotomy due to excessive intraoperative bleeding. Two mortalities were reported during conventional MVS. There were no mortalities reported in the MIMVS, MIAVR or conventional AVR groups. Preoperative planning of minimally invasive left-sided valve surgery with 3D CT reconstruction models is a useful and feasible method to determine operative strategy and exclude patients ineligible for a minimally invasive approach, thus potentially preventing complications. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping
2018-01-01
Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to its hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium- and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery and reproducibility, and there was a significant correlation in milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can substitute for the conventional method.
Denoising embolic Doppler ultrasound signals using Dual Tree Complex Discrete Wavelet Transform.
Serbes, Gorkem; Aydin, Nizamettin
2010-01-01
Early and accurate detection of asymptomatic emboli is important for monitoring preventive therapy in stroke-prone patients. One of the problems in embolus detection is the identification of embolic signals caused by very small emboli. The amplitude of such an embolic signal may be so small that advanced processing methods are required to distinguish it from the Doppler signals arising from red blood cells. In this study, the Dual Tree Complex Discrete Wavelet Transform was used instead of the conventional discrete wavelet transform for denoising embolic signals, and the performances of both approaches were compared. Unlike the conventional discrete wavelet transform, the dual-tree complex discrete wavelet transform is a shift-invariant transform with limited redundancy. Results demonstrate that denoising based on the Dual Tree Complex Discrete Wavelet Transform outperforms conventional discrete wavelet denoising: approximately 8 dB improvement is obtained with the Dual Tree Complex Discrete Wavelet Transform, compared with less than 5 dB for the conventional Discrete Wavelet Transform.
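For context, the conventional DWT denoising baseline that the dual-tree complex transform is compared against can be sketched with PyWavelets and soft universal thresholding; the synthetic test signal, wavelet choice, and threshold rule below are illustrative assumptions, not the paper's settings.

```python
# Conventional wavelet denoising baseline: decompose, soft-threshold detail
# coefficients with the universal threshold, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)  # burst-like signal
noisy = clean + 0.2 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den, "db8")[: noisy.size]

def snr_db(ref, sig):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig) ** 2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, denoised):.1f} dB")
```

The dual-tree variant replaces the single decimated filter bank with two parallel trees whose outputs form approximately analytic complex coefficients, removing the shift sensitivity that degrades this baseline.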
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
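A toy illustration of the GLA idea, assuming scalar crude and refined models of a single design variable: the scaling factor between the two models is linearized at x0 from their values and derivatives, and that linear factor then multiplies the crude model. The functions are illustrative stand-ins, not from the report.

```python
# Global-local approximation: correct a crude model with a linearly varying
# scaling factor built from both models' values and derivatives at x0.
def gla_correction(f_crude, f_fine, df_crude, df_fine, x0):
    """Return x -> (beta0 + beta1*(x - x0)) * f_crude(x)."""
    beta0 = f_fine(x0) / f_crude(x0)
    # Derivative of the ratio f_fine/f_crude at x0 (quotient rule).
    beta1 = (df_fine(x0) * f_crude(x0) - f_fine(x0) * df_crude(x0)) / f_crude(x0) ** 2
    return lambda x: (beta0 + beta1 * (x - x0)) * f_crude(x)

crude = lambda x: 1.0 + x                        # cheap model
fine = lambda x: (1.0 + x) * (1.1 + 0.2 * x)     # stand-in "refined" model
approx = gla_correction(crude, fine, lambda x: 1.0,
                        lambda x: 1.3 + 0.4 * x, x0=0.0)
print(approx(0.5), fine(0.5))   # exact here, because the ratio happens to be linear
```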
Radiation shielding design of a new tomotherapy facility.
Zacarias, Albert; Balog, John; Mills, Michael
2006-10-01
It is expected that intensity modulated radiation therapy (IMRT) and image guided radiation therapy (IGRT) will replace a large portion of radiation therapy treatments currently performed with conventional MLC-based 3D conformal techniques. IGRT may become the standard of treatment in the future for prostate and head and neck cancer. Many established facilities may convert existing vaults to perform this treatment method using new or upgraded equipment. In the future, more facilities undoubtedly will be considering de novo designs for their treatment vaults. A reevaluation of the design principles used in conventional vault design is of benefit to those considering this approach with a new tomotherapy facility. This is made more imperative as the design of the TomoTherapy system is unique in several aspects and does not fit well into the formalism of NCRP 49 for a conventional linear accelerator.
Modified Involute Helical Gears: Computerized Design, Simulation of Meshing, and Stress Analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert (Technical Monitor); Litvin, Faydor L.; Gonzalez-Perez, Ignacio; Carnevali, Luca; Kawasaki, Kazumasa; Fuentes-Aznar, Alfonso
2003-01-01
The computerized design, methods for generation, simulation of meshing, and enhanced stress analysis of modified involute helical gears is presented. The approaches proposed for modification of conventional involute helical gears are based on conjugation of double-crowned pinion with a conventional helical involute gear. Double-crowning of the pinion means deviation of cross-profile from an involute one and deviation in longitudinal direction from a helicoid surface. Using the method developed, the pinion-gear tooth surfaces are in point-contact, the bearing contact is localized and oriented longitudinally, and edge contact is avoided. Also, the influence of errors of alignment on the shift of bearing contact, vibration, and noise are reduced substantially. The theory developed is illustrated with numerical examples that confirm the advantages of the gear drives of the modified geometry in comparison with conventional helical involute gears.
Direct cost of monitoring conventional hemodialysis conducted by nursing professionals.
Lima, Antônio Fernandes Costa
2017-04-01
To analyze the mean direct cost of conventional hemodialysis monitored by nursing professionals in three public teaching and research hospitals in the state of São Paulo, Brazil. This was a quantitative, exploratory and descriptive investigation, based on a multiple case study approach. The mean direct cost was calculated by multiplying (clocked) time spent per procedure by the unit cost of direct labor. Values were calculated in Brazilian real (BRL). Hospital C presented the highest mean direct cost (BRL 184.52), 5.23 times greater than the value for Hospital A (BRL 35.29) and 3.91 times greater than Hospital B (BRL 47.22). The costing method used in this study can be reproduced at other dialysis centers to inform strategies aimed at efficient allocation of the human resources needed to successfully monitor conventional hemodialysis.
Subhedar, Preeti B; Ray, Pearl; Gogate, Parag R
2018-01-01
The present work deals with intensification of delignification and subsequent enzymatic hydrolysis of sustainable biomass such as groundnut shells, coconut coir and pistachio shells using ultrasound assisted approach so as to develop an economical approach for obtaining bioethanol. Process intensification, in the current context, is referred to as any improvements giving enhanced rates possibly with lower energy and chemical as well as enzyme requirement for delignification and hydrolysis respectively. Conventional processing for both delignification and enzymatic hydrolysis has also been investigated for establishing the degree of intensification. The obtained results for delignification of biomass established that for conventional alkaline treatment, the extent of delignification for the case of groundnut shells, coconut coir and pistachio shells were 41.8, 45.9 and 38% which increased to 71.1, 89.5 and 78.9% respectively giving almost 80-100% increase for the ultrasound assisted approach. Under optimized conditions, the conventional approach resulted in reducing sugar yields as 10.2, 12.1 and 8.1g/L for groundnut shells, coconut coir and pistachio shells respectively whereas for the case of ultrasound-assisted enzymatic hydrolysis, the obtained yields were 21.3, 23.9 and 18.4g/L in same order of biomass. The material samples were characterized by several characterization techniques for establishing the morphological changes obtained due to the use of ultrasound which were found to be favorable for enhanced delignification and hydrolysis for the ultrasound assisted approach. Overall, the results of this work establish the process intensification benefits due to the application of ultrasound for different sustainable biomass with mechanistic understanding based on the morphological analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
An Efficient Buyer-Seller Watermarking Protocol Based on Chameleon Encryption
NASA Astrophysics Data System (ADS)
Poh, Geong Sen; Martin, Keith M.
Buyer-seller watermarking protocols are designed to deter clients from illegally distributing copies of digital content. This is achieved by allowing a distributor to insert a unique watermark into content in such a way that the distributor does not know the final watermarked copy that is given to the client. This protects both the client and distributor from attempts by one to falsely accuse the other of misuse. Buyer-seller watermarking protocols are normally based on asymmetric cryptographic primitives known as homomorphic encryption schemes. However, the computational and communication overhead of this conventional approach is high. In this paper we propose a different approach, based on the symmetric Chameleon encryption scheme. We show that this leads to significant gains in computational and operational efficiency.
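For context, the additive homomorphism that the conventional protocols rely on can be stated as follows (a generic property of additively homomorphic schemes such as Paillier, not the authors' chameleon construction). It is what lets the seller embed the buyer's encrypted watermark w_i into encrypted content x_i without ever learning w_i, at the price of an expensive public-key operation per embedded sample:

```latex
E_{pk}(m_1)\cdot E_{pk}(m_2) = E_{pk}(m_1 + m_2)
\quad\Longrightarrow\quad
E_{pk}(x_i)\cdot E_{pk}(w_i) = E_{pk}(x_i + w_i).
```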
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) is commonly used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint is derived for the conventional KF used in the fine north-finding filtering phase. A constrained KF using this state equality constraint is then proposed and studied in depth. The estimation behaviors of the relevant navigation errors under the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which provides explicit formulations of the navigation errors in terms of their influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study extends the use of the stochastic observability approach to analytic descriptions of the estimation behaviors of the relevant navigation errors, and the constrained KF scheme demonstrates its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically.
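A linear state equality constraint of the kind derived in the paper can be folded into a conventional KF update by the standard estimate-projection step sketched below (a generic NumPy illustration under the usual linear-Gaussian assumptions; the ARIMU-specific constraint matrix and filter model are not reproduced):

```python
import numpy as np

def apply_state_equality_constraint(x_hat, P, D, d):
    """Project an unconstrained Kalman estimate onto the constraint D x = d.

    Standard estimate-projection step for equality-constrained Kalman
    filtering; D, d, x_hat and P below are illustrative placeholders.
    """
    S = D @ P @ D.T                      # constraint-space covariance
    K = P @ D.T @ np.linalg.inv(S)       # projection gain
    return x_hat - K @ (D @ x_hat - d)   # constrained estimate

# Illustrative 3-state example with one linear constraint: x0 + x2 = 0.
x_hat = np.array([0.12, -0.05, 0.03])
P = np.diag([0.04, 0.01, 0.09])
D = np.array([[1.0, 0.0, 1.0]])
d = np.array([0.0])
print(apply_state_equality_constraint(x_hat, P, D, d))
```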
Becirovic, Vedada; Doonan, Steven R; Martin, R Scott
2013-08-21
In this paper, an approach to fabricate epoxy or polystyrene microdevices with encapsulated tubing and electrodes is described. Key features of this approach include a fixed alignment between the fluidic tubing and electrodes, the ability to polish the device when desired, and the low dead volume nature of the fluidic interconnects. It is shown that a variety of tubing can be encapsulated with this approach, including fused silica capillary, polyetheretherketone (PEEK), and perfluoroalkoxy (PFA), with the resulting tubing/microchip interface not leading to significant band broadening or plug dilution. The applicability of the devices with embedded tubing is demonstrated by integrating several off-chip analytical methods to the microchip. This includes droplet transfer, droplet desegmentation, and microchip-based flow injection analysis. Off-chip generated droplets can be transferred to the microchip with minimal coalescence, while flow injection studies showed improved peak shape and sensitivity when compared to the use of fluidic interconnects with an appreciable dead volume. Importantly, it is shown that this low dead volume approach can be extended to also enable the integration of conventional capillary electrophoresis (CE) with electrochemical detection. This is accomplished by embedding fused silica capillary along with palladium (for grounding the electrophoresis voltage) and platinum (for detection) electrodes. With this approach, up to 128,000 theoretical plates for dopamine was possible. In all cases, the tubing and electrodes are housed in a rigid base; this results in extremely robust devices that will be of interest to researchers wanting to develop microchips for use by non-experts.
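For reference, the efficiency figure quoted above (theoretical plates, N) is conventionally computed from an electropherogram peak with the standard half-height formula, where t_R is the migration time and w_{1/2} the peak width at half height (a generic CE formula, not a detail taken from the paper):

```latex
N = 5.54\left(\frac{t_R}{w_{1/2}}\right)^{2}
```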
Characterization of the bout durations of sleep and wakefulness.
McShane, Blakeley B; Galante, Raymond J; Jensen, Shane T; Naidoo, Nirinjini; Pack, Allan I; Wyner, Abraham
2010-11-30
(a) To develop a new statistical approach to describe the microarchitecture of wakefulness and sleep in mice; (b) to evaluate differences among inbred strains in this microarchitecture; (c) to compare results when data are scored in 4-s versus 10-s epochs. Studies were conducted in male mice of four inbred strains: AJ, C57BL/6, DBA and PWD. EEG/EMG were recorded for 24 h and scored independently in 4-s and 10-s epochs. The distribution of bout durations of wakefulness, NREM and REM sleep in mice has two distinct components, i.e., short and longer bouts. This is described by a spike (short bouts) and slab (longer bouts) distribution, a particular type of mixture model. The distribution in any state depends on the state the mouse is transitioning from and can be characterized by three parameters: the number of such bouts conditional on the previous state, the size of the spike, and the average length of the slab. While conventional statistics such as time spent in state, average bout duration, and number of bouts show some differences between inbred strains, this new statistical approach reveals more major differences. The major difference between strains is their ability to sustain long bouts of NREM sleep or wakefulness. Scoring mouse sleep/wake in 4-s epochs offered little new information when using conventional metrics, but did when evaluating the microarchitecture with this new approach. Standard statistical approaches do not adequately characterize the microarchitecture of mouse behavioral state; approaches based on a spike-and-slab distribution provide a quantitative description.
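The flavor of the mixture-model idea can be illustrated with a simple two-component EM fit that separates short from long bouts (a stand-in for, not a reproduction of, the authors' spike-and-slab formulation; the data below are simulated):

```python
import numpy as np

def fit_two_exponential_mixture(durations, n_iter=200):
    """EM for a two-component exponential mixture over bout durations:
    one component captures short bouts, the other long bouts."""
    x = np.asarray(durations, dtype=float)
    pi = 0.5
    mu_short, mu_long = np.percentile(x, 25), np.percentile(x, 75)
    for _ in range(n_iter):
        # E-step: responsibility of the short-bout component.
        p_short = pi * np.exp(-x / mu_short) / mu_short
        p_long = (1 - pi) * np.exp(-x / mu_long) / mu_long
        r = p_short / (p_short + p_long)
        # M-step: update the mixing weight and the component means.
        pi = r.mean()
        mu_short = (r * x).sum() / r.sum()
        mu_long = ((1 - r) * x).sum() / (1 - r).sum()
    return pi, mu_short, mu_long

rng = np.random.default_rng(0)
bouts = np.concatenate([rng.exponential(8.0, 300),     # short bouts (s)
                        rng.exponential(180.0, 120)])  # long bouts (s)
print(fit_two_exponential_mixture(bouts))
```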
Virtual rehabilitation--benefits and challenges.
Burdea, G C
2003-01-01
To discuss the advantages and disadvantages of rehabilitation applications of virtual reality. VR can be used as an enhancement to conventional therapy for patients with conditions ranging from musculoskeletal problems, to stroke-induced paralysis, to cognitive deficits. This approach is called "VR-augmented rehabilitation." Alternately, VR can replace conventional interventions altogether, in which case the rehabilitation is "VR-based." If the intervention is done at a distance, then it is called "telerehabilitation." Simulation exercises for post-stroke patients have been developed using a "teacher object" approach or a video game approach. Simulations for musculoskeletal patients use virtual replicas of rehabilitation devices (such as a rubber ball, power putty, or peg board). Phobia-inducing virtual environments are prescribed for patients with cognitive deficits. VR-augmented rehabilitation has been shown to be effective for stroke patients in the chronic phase of the disease. VR-based rehabilitation has improved outcomes for patients with fear of flying, Vietnam syndrome, fear of heights, and chronic stroke. Telerehabilitation interventions using VR have benefited musculoskeletal and post-stroke patients; however, fewer data are available at this time. Virtual reality presents significant advantages when applied to the rehabilitation of patients with varied conditions. These advantages include patient motivation, adaptability and variability based on patient baseline, transparent data storage, online remote data access, economy of scale, and reduced medical costs. Challenges in VR use for rehabilitation relate to a lack of computer skills on the part of therapists, lack of support infrastructure, (initially) expensive equipment, inadequate communication infrastructure (for telerehabilitation in rural areas), and patient safety concerns.
Sound Power Estimation for Beam and Plate Structures Using Polyvinylidene Fluoride Films as Sensors
Mao, Qibo; Zhong, Haibing
2017-01-01
The theory for the calculation and/or measurement of sound power based on the classical velocity-based radiation mode (V-mode) approach is well established for planar structures. However, current V-mode theory is limited in scope in that it can only be applied with conventional motion sensors (i.e., accelerometers). In this study, in order to estimate the sound power of vibrating beam and plate structures using polyvinylidene fluoride (PVDF) films as sensors, a PVDF-based radiation mode (C-mode) concept is introduced to determine the sound power radiated from the output signals of PVDF films on the vibrating structure. The proposed method is a hybrid of vibration measurement and numerical calculation of C-modes. The proposed C-mode approach has the following advantages: (1) compared to conventional motion sensors, PVDF films are lightweight, flexible, and low-cost; (2) there is no need for special measuring environments, since the proposed method does not require the measurement of sound fields; (3) in the low-frequency range (typically with dimensionless frequency kl < 4), the radiation efficiencies of the C-modes fall off very rapidly with increasing mode order, and furthermore, the shapes of the C-modes remain almost unchanged, which means the computational load can be reduced significantly because only the first few dominant C-modes are involved in the low-frequency range. Numerical simulations and experimental investigations were carried out to verify the accuracy and efficiency of the proposed method.
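For orientation, the radiation-mode machinery that both the V-mode and the proposed C-mode approaches build on can be summarized as follows (a standard formulation; the PVDF-specific C-mode operator is not reproduced here). The sound power W follows from the surface velocity vector v and the radiation resistance matrix R, whose eigenvectors are the radiation modes and whose eigenvalues are their radiation efficiencies; the rapid fall-off of the eigenvalues with mode order is what allows only the first few modes to be retained at low frequencies:

```latex
W = \mathbf{v}^{\mathsf{H}}\mathbf{R}\,\mathbf{v}
  = \sum_{i}\lambda_i\,\lvert y_i\rvert^{2},
\qquad
\mathbf{R} = \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^{\mathsf{T}},\;
\mathbf{y} = \mathbf{Q}^{\mathsf{T}}\mathbf{v}.
```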
Phase space quantum mechanics - Direct
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasiri, S.; Sobouti, Y.; Taati, F.
2006-09-15
The conventional approach to quantum mechanics in phase space (q,p) is to take the operator-based quantum mechanics of Schroedinger, or an equivalent, and assign a c-number function in phase space to it. We propose to begin with a higher level of abstraction, in which the independence and the symmetric roles of q and p are maintained throughout, and at once arrive at phase-space state functions. Upon reduction to the q- or p-space, the proposed formalism gives conventional quantum mechanics, however, with a definite rule for the ordering of factors of noncommuting observables. Further conceptual and practical merits of the formalism are demonstrated throughout the text.
Liu, S X; Zou, M S
2018-03-01
The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through the direct numerical integration (DNI) method. An alternative strategy based on the fast Fourier transform algorithm is put forward in this work, starting from the general expression for the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed strategy significantly reduces the computation time compared with the conventional approach of straightforward numerical integration.
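The generic speed argument behind such a strategy, evaluating a convolution-type sum with the FFT rather than by direct term-by-term integration, can be illustrated as follows (arbitrary decaying kernel; the paper's actual radiation-impedance expression is not shown):

```python
import numpy as np

n = 4096
rng = np.random.default_rng(1)
f = rng.standard_normal(n)
g = np.exp(-np.arange(n) / 50.0)            # arbitrary decaying kernel

# Direct evaluation, O(n^2), analogous in spirit to DNI.
direct = np.array([(f[:k + 1] * g[k::-1]).sum() for k in range(n)])

# The same linear convolution via zero-padded FFTs, O(n log n).
via_fft = np.fft.irfft(np.fft.rfft(f, 2 * n) * np.fft.rfft(g, 2 * n))[:n]

print(np.max(np.abs(direct - via_fft)))     # agreement to round-off
```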
NASA Technical Reports Server (NTRS)
Whittenberger, J. Daniel
2001-01-01
Present structural concepts for hot static structures are conventional "sheet & stringer" or truss core construction. More weight-efficient concepts such as honeycomb and lattice block are being investigated, in combination with both conventional superalloys and TiAl. Development efforts for components made from TiAl sheet are centered on lower-cost methods for sheet and foil production, plus alloy development for higher temperature capability. A low-cost casting technology recently developed for aluminum and steel lattice blocks has demonstrated the required higher strength and stiffness, with weight efficiency approaching honeycombs. The current effort is based on extending the temperature capability by developing lattice block materials made from IN-718 and Mar-M247.
NASA Astrophysics Data System (ADS)
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for the structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically, considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and the local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, of 26.5%, compared with three additional optimizations based on individual design approaches.
Festa, Stefano; Zerboni, Giulia; Aratari, Annalisa; Ballanti, Riccardo; Papi, Claudio
2018-01-01
Inflammatory bowel diseases, Crohn's disease and ulcerative colitis, are chronic relapsing conditions that may result in progressive bowel damage, a high risk of complications, surgery and permanent disability. The conventional therapeutic approach for inflammatory bowel diseases is based mainly on symptom control. Unfortunately, a symptom-based therapeutic approach has little impact on major long-term disease outcomes. In other chronic disabling conditions such as diabetes, hypertension and rheumatoid arthritis, the development of new therapeutic approaches has led to better outcomes. In this context a "treat to target" strategy has been developed. This strategy is based on the identification of high-risk patients, regular assessment of disease activity by means of objective measures, and adjustment of treatment to reach a pre-defined target. A treat to target approach has recently been proposed for inflammatory bowel disease with the aim of modifying the natural history of the disease. In this review, the evidence for and the limitations of the treat to target paradigm in inflammatory bowel disease are analyzed and discussed.
Coupled thermomechanical behavior of graphene using the spring-based finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgantzinos, S. K., E-mail: sgeor@mech.upatras.gr; Anifantis, N. K., E-mail: nanif@mech.upatras.gr; Giannopoulos, G. I., E-mail: ggiannopoulos@teiwest.gr
The prediction of the thermomechanical behavior of graphene using a new coupled thermomechanical spring-based finite element approach is the aim of this work. Graphene sheets are modeled at the nanoscale according to their atomistic structure. Based on molecular theory, the potential energy is defined as a function of temperature, describing the interatomic interactions in different temperature environments. The force field is approximated by suitable straight spring finite elements. The springs simulate the interatomic interactions and interconnect nodes located at the atomic positions. Their stiffness matrix is expressed as a function of temperature. Using appropriate boundary conditions, various graphene configurations are analyzed and their thermomechanical response is obtained using conventional finite element procedures. A complete parametric study with respect to the geometric characteristics of graphene is performed, and the temperature dependency of the elastic material properties is finally predicted. Comparisons with published works in the literature demonstrate the accuracy of the proposed method.
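The assembly idea, spring elements whose stiffness is a function of temperature feeding a conventional finite element solve, can be sketched in one dimension (illustrative stiffness law and geometry; the paper's springs act on 3D atomistic positions with bond-specific force fields):

```python
import numpy as np

def assemble_spring_stiffness(n_nodes, connectivity, k_of_T, T):
    """Assemble a global stiffness matrix from 1D spring elements whose
    stiffness depends on temperature (a toy sketch of the idea above)."""
    K = np.zeros((n_nodes, n_nodes))
    for i, j in connectivity:
        k = k_of_T(T)                                    # k(T) for this bond
        K[np.ix_([i, j], [i, j])] += k * np.array([[1.0, -1.0],
                                                   [-1.0, 1.0]])
    return K

# Toy 4-node chain with an invented, linearly softening stiffness law.
k_of_T = lambda T: 100.0 * (1.0 - 1e-4 * (T - 300.0))
print(assemble_spring_stiffness(4, [(0, 1), (1, 2), (2, 3)], k_of_T, T=500.0))
```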
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct the electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of only an incomplete set of measurements, due to experimental limitations, means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
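The MAP formulation referred to above has the generic form below; the reconstruction is the minimizer of the negative log-posterior, combining a data (forward-model) term with a prior term (the paper's specific LTEM forward model and prior are not reproduced here):

```latex
\hat{\mathbf{A}} = \arg\min_{\mathbf{A}}
\big\{-\log p(\mathbf{y}\mid\mathbf{A}) - \log p(\mathbf{A})\big\},
```

where y denotes the tilt-series LTEM measurements and A the magnetic vector potential.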
Measuring Disability: Comparing the Impact of Two Data Collection Approaches on Disability Rates
Sabariego, Carla; Oberhauser, Cornelia; Posarac, Aleksandra; Bickenbach, Jerome; Kostanjsek, Nenad; Chatterji, Somnath; Officer, Alana; Coenen, Michaela; Chhan, Lay; Cieza, Alarcos
2015-01-01
The usual approach in disability surveys is to screen persons with disability upfront and then ask questions about everyday problems. The objectives of this paper are to demonstrate the impact of screeners on disability rates, to challenge the usual exclusion of persons with mild and moderate disability from disability surveys, and to demonstrate the advantage of using an a posteriori cut-off. Using data from a pilot study of the WHO Model Disability Survey (MDS) in Cambodia and the polytomous Rasch model, metric scales of disability were built. The conventional screener approach, based on the short disability module of the Washington City Group, and the a posteriori cut-off method described in the World Disability Report were compared with regard to disability rates. The screener led to imprecise rates and classified persons with mild to moderate disability as non-disabled, although these respondents already experienced important problems in daily life. The a posteriori cut-off applied to the general population sample led to a more precise disability rate and allowed for a differentiation of the performance and needs of persons with mild, moderate and severe disability. This approach can therefore be considered an inclusive approach suitable for monitoring the Convention on the Rights of Persons with Disabilities.
Satellite-based PM concentrations and their application to COPD in Cleveland, OH
Kumar, Naresh; Liang, Dong; Comellas, Alejandro; Chu, Allen D.; Abrams, Thad
2014-01-01
A hybrid approach is proposed to estimate exposure to fine particulate matter (PM2.5) at a given location and time. This approach builds on satellite-based aerosol optical depth (AOD), air pollution data from sparsely distributed Environmental Protection Agency (EPA) sites, and local time–space Kriging, an optimal interpolation technique. Given the daily global coverage of AOD data, daily estimates of air quality can be developed at any given location and time. This assures unprecedented spatial coverage, needed for air quality surveillance and management and for epidemiological studies. In this paper, we developed an empirical relationship between the 2 km AOD and PM2.5 data from EPA sites. Extrapolating this relationship to the study domain resulted in 2.3 million predictions of PM2.5 between 2000 and 2009 in the Cleveland Metropolitan Statistical Area (MSA). We developed local time–space Kriging to compute exposure at a given location and time using the predicted PM2.5. Daily estimates of PM2.5 were developed for the Cleveland MSA between 2000 and 2009 at 2.5 km spatial resolution; 1.7 million (~79.8%) of the 2.13 million predictions required for the multiyear geographic domain were robust. In the epidemiological application of the hybrid approach, admissions for an acute exacerbation of chronic obstructive pulmonary disease (AECOPD) were examined with respect to time–space lagged PM2.5 exposure. Our analysis suggests that the risk of AECOPD increases 2.3% with a unit increase in PM2.5 exposure within 9-day time and 0.05° (~5 km) distance lags. In the aggregated analysis, the exposed group (those exposed to PM2.5 > 15.4 μg/m³) was 54% more likely to be admitted for AECOPD than the reference group. The hybrid approach offers greater spatiotemporal coverage and more reliable characterization of ambient concentrations than conventional in situ monitoring-based approaches. Thus, this approach can potentially reduce exposure misclassification errors in conventional air pollution epidemiology studies.
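Under strong simplifications, the two stages of the hybrid idea can be sketched as follows: (1) calibrate PM2.5 against satellite AOD at monitor sites by linear regression, and (2) interpolate the resulting predictions to an arbitrary space-time point; inverse-distance weighting is used below as a crude stand-in for the local time–space Kriging actually employed (all names and data are illustrative):

```python
import numpy as np

def calibrate(aod_at_monitors, pm25_at_monitors):
    """Empirical AOD -> PM2.5 relationship via simple linear regression."""
    slope, intercept = np.polyfit(aod_at_monitors, pm25_at_monitors, 1)
    return lambda aod: slope * aod + intercept

def interpolate(xyt_known, pm_known, xyt_query, power=2.0):
    """Inverse-distance weighting in (x, y, t); a crude Kriging stand-in."""
    d = np.linalg.norm(xyt_known - xyt_query, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * pm_known).sum() / w.sum()

rng = np.random.default_rng(0)
aod = rng.uniform(0.1, 0.9, 50)
pm25 = 5.0 + 25.0 * aod + rng.normal(0, 2, 50)     # synthetic monitor data
predict = calibrate(aod, pm25)
xyt = rng.uniform(0, 1, (200, 3))                  # prediction locations
pm_pred = predict(rng.uniform(0.1, 0.9, 200))
print(interpolate(xyt, pm_pred, np.array([0.5, 0.5, 0.5])))
```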
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.
1993-01-01
An approach for an optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed in the form of simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and to a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs that have substantially superior performance over a conventional design wherein the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is indicated with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to modeling the plating process in order to decrease the unevenness of the coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings, which uses a rule-based knowledge model and allows one to reduce the unevenness of the product thickness distribution, is proposed. Block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production knowledge model, are considered. It is shown that a fuzzy production knowledge model in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
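A toy illustration of a fuzzy production rule of the kind such a knowledge model contains (the memberships, rules and output values are invented for illustration; the paper's actual rule base for plating conditions is not reproduced):

```python
def mu_low(x):
    """Membership of 'thickness unevenness is LOW' (illustrative shape)."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def mu_high(x):
    """Membership of 'thickness unevenness is HIGH' (illustrative shape)."""
    return max(0.0, min(1.0, (x - 0.2) / 0.5))

def recommend_current_density(unevenness):
    # Rule 1: IF unevenness is LOW  THEN current density is HIGH (2.0 A/dm^2)
    # Rule 2: IF unevenness is HIGH THEN current density is LOW  (0.8 A/dm^2)
    w1, w2 = mu_low(unevenness), mu_high(unevenness)
    # Weighted-average (Sugeno-style) defuzzification of the two rules.
    return (w1 * 2.0 + w2 * 0.8) / (w1 + w2)

print(recommend_current_density(0.35))
```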
A learning framework for age rank estimation based on face images with scattering transform.
Chang, Kuang-Yu; Chen, Chu-Song
2015-03-01
This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregation performance. In addition, we give a theoretical analysis of the design of the cost of each individual binary classifier, so that the misranking cost is bounded by the total misclassification cost. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms state-of-the-art age estimation approaches.
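The rank-by-aggregation idea can be sketched as follows (a minimal illustration with plain logistic classifiers; the paper's cost-sensitive hyperplanes and scattering-transform features are omitted, and all data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal_binary_classifiers(X, ages, thresholds):
    """One binary classifier per threshold k, answering 'is age > k?'."""
    return [LogisticRegression(max_iter=1000).fit(X, (ages > k).astype(int))
            for k in thresholds]

def predict_age_rank(classifiers, X, thresholds):
    # Aggregate the K binary decisions into a single rank estimate.
    votes = np.column_stack([c.predict(X) for c in classifiers])
    return thresholds[0] + votes.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 16))
ages = np.clip((30 + 10 * X[:, 0] + rng.normal(0, 3, 400)).astype(int), 1, 60)
thresholds = np.arange(ages.min(), ages.max())   # k = min .. max-1
clfs = fit_ordinal_binary_classifiers(X, ages, thresholds)
print(predict_age_rank(clfs, X[:5], thresholds), ages[:5])
```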
Student Orientations to Independent Learning.
ERIC Educational Resources Information Center
Jones, Alice; Jones, Douglas
1996-01-01
A study investigated the relationship between 46 college students' preferred teaching method (conventional lecture versus independent study package) and their own approaches to study (surface, deep, achieving). Results indicated that while students preferred the conventional lecture method, preference did not correlate with their study approach and…
Ding, Yongxia; Zhang, Peili
2018-06-12
Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We subsequently assessed the satisfaction of the experimental group with the web-based PBL teaching mode. This assessment was performed after comparing teaching activity outcomes pertaining to exams and self-learning capacity between the two groups. Examination scores and self-learning capabilities were significantly higher in the experimental group than in the control group (P < 0.01). In addition, 92.6% of students in the experimental group expressed satisfaction with the new web-based PBL teaching approach. In a large-class teaching environment, the web-based PBL approach appears to be preferable to traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning.
A New SEYHAN's Approach in Case of Heterogeneity of Regression Slopes in ANCOVA.
Ankarali, Handan; Cangur, Sengul; Ankarali, Seyit
2018-06-01
In this study, a new approach named SEYHAN is suggested for cases in which the assumptions of linearity and homogeneity of regression slopes of conventional ANCOVA are not met, so that conventional ANCOVA can still be used instead of robust or nonlinear ANCOVA. The proposed SEYHAN's approach involves transforming the continuous covariate into a categorical structure when the relationship between the covariate and the dependent variable is nonlinear and the regression slopes are not homogeneous. A simulated data set was used to illustrate SEYHAN's approach. In this approach, after the MARS method was used to categorize the covariate, we performed conventional ANCOVA in each subgroup constituted according to the knot values, as well as analysis of variance with a two-factor model. The first model is simpler than the second, which includes an interaction term. Since the model with the interaction effect has more subjects, the power of the test also increases and the existing significant difference is revealed better. With the help of this approach, linearity and homogeneity of regression slopes cease to be a problem for data analysis with the conventional linear ANCOVA model. The approach can be used quickly and efficiently in the presence of one or more covariates.
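A minimal sketch of this workflow on simulated data (knot values are fixed by hand here, whereas the paper obtains them with MARS; all names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({"group": rng.choice(["treat", "control"], n),
                   "cov": rng.uniform(0, 10, n)})
# Nonlinear covariate effect, so a linear ANCOVA would be misspecified.
df["y"] = (df["group"] == "treat") * 2 + 3 * np.sin(df["cov"]) \
          + rng.normal(0, 1, n)

# Categorize the covariate at knot values, then fit a two-factor model
# with interaction in place of the (invalid) linear ANCOVA.
knots = [0.0, 3.3, 6.6, 10.0]                    # illustrative knots
df["cov_cat"] = pd.cut(df["cov"], bins=knots, include_lowest=True)

model = smf.ols("y ~ C(group) * C(cov_cat)", data=df).fit()
print(anova_lm(model, typ=2))
```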
Zhang, Wei Yun; Zhang, Wenhua; Liu, Zhiyuan; Li, Cong; Zhu, Zhi; Yang, Chaoyong James
2012-01-03
We have developed a novel method for efficiently screening affinity ligands (aptamers) from a complex single-stranded DNA (ssDNA) library by employing single-molecule emulsion polymerase chain reaction (PCR) based on agarose droplet microfluidic technology. In a typical systematic evolution of ligands by exponential enrichment (SELEX) process, the enriched library is sequenced first, and tens to hundreds of aptamer candidates are analyzed via a bioinformatic approach. Possible candidates are then chemically synthesized, and their binding affinities are measured individually. Such a process is time-consuming, labor-intensive, inefficient, and expensive. To address these problems, we have developed a highly efficient single-molecule approach for aptamer screening using our agarose droplet microfluidic technology. Statistically diluted ssDNA from a library pre-enriched through conventional SELEX against the cancer biomarker Shp2 protein was encapsulated into individual uniform agarose droplets for droplet PCR to generate clonal agarose beads. The binding capacity of amplified ssDNA from each clonal bead was then screened via high-throughput fluorescence cytometry. DNA clones with high binding capacity and low Kd were chosen as aptamers and can be directly used for downstream biomedical applications. We have identified an ssDNA aptamer that selectively recognizes Shp2 with a Kd of 24.9 nM. Compared to a conventional sequencing-chemical synthesis-screening workflow, our approach avoids large-scale DNA sequencing and the expensive, time-consuming DNA synthesis of large populations of candidates. The agarose droplet microfluidic approach is thus highly efficient and cost-effective for molecular evolution approaches and will find wide application in molecular evolution technologies, including mRNA display, phage display, and so on.
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin
2018-03-01
Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study investigates the feasibility of applying a locality preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal-variance approach. Last, a k-nearest neighborhood (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05), and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produces an optimal feature vector and yields improved performance in assisting the prediction of the risk of women having breast cancer detected in the next subsequent mammography screening.
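A generic sketch of the projection step (standard locality preserving projection in NumPy/SciPy; the study's maximal-variance regeneration variant and the KNN risk classifier built on top of it are not reproduced):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=4, n_neighbors=10):
    """Locality preserving projection: returns a d x n_components matrix."""
    d2 = cdist(X, X, "sqeuclidean")
    t = np.median(d2)                              # heat-kernel bandwidth
    # kNN adjacency with heat-kernel weights, symmetrized.
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    rows = np.repeat(np.arange(len(X)), n_neighbors)
    W = np.zeros_like(d2)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                                     # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T Dg X a; keep the
    # eigenvectors with the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-9 * np.eye(X.shape[1])   # regularized for stability
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 44))     # e.g. 44 mammographic features
X_new = X @ lpp(X)                     # regenerated 4-D feature vectors
print(X_new.shape)
```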
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Wurmb, T E; Quaisser, C; Balling, H; Kredel, M; Muellenbach, R; Kenn, W; Roewer, N; Brederlau, J
2011-04-01
Whole-body multislice helical CT is becoming increasingly important as a diagnostic tool in patients with multiple injuries. Time gained in multiple-trauma patients who require emergency surgery might improve outcome. The authors hypothesised that whole-body multislice computed tomography (MSCT) as the initial diagnostic tool (MSCT trauma protocol) reduces the interval to the start of emergency surgery (tOR) compared to conventional radiography combined with abdominal ultrasound and organ-focused CT (conventional trauma protocol). The second goal of the study was to investigate whether the diagnostic approach chosen has an impact on outcome. The authors' level 1 trauma centre uses whole-body MSCT for the initial radiological diagnostic work-up of patients with suspected multiple trauma. Before the introduction of MSCT in 2004, a conventional approach was used. Group I: data from trauma patients treated with the conventional trauma protocol from 2001 to 2003. Group II: data from trauma patients treated with the whole-body MSCT trauma protocol from 2004 to 2006. tOR was 120 (90-150) min (median and IQR) in group I (n=155) and 105 (85-133) min (median and IQR) in group II (n=163) (p<0.05). Patients in group II had significantly more serious injuries. No difference in outcome data was found. Fourteen patients died in each group within the first 30 days; five of these died within the first 24 h. A whole-body MSCT-based diagnostic approach to multiple trauma shortens the time interval to the start of emergency surgery in patients with multiple injuries. Mortality remained unchanged in both groups. Since patients in group II were more seriously injured, an improvement in outcome might be assumed.
USDA-ARS?s Scientific Manuscript database
Soil erosion is a serious problem in the Ethiopian highlands. Conventional erosion control approaches have generally been ineffective in halting this problem. The study presented here measured precipitation, sediment yield and stream flow in 2013 and 2014 in the Ene-Chilala subwatershed of the Birr River...
The Task Is Not Enough: Processing Approaches to Task-Based Performance
ERIC Educational Resources Information Center
Skehan, Peter; Xiaoyue, Bei; Qian, Li; Wang, Zhan
2012-01-01
This article reports on three research studies, all of which concern second language task performance. The first focuses on planning, and compares on-line and strategic planning as well as task repetition. The second study examines the role of familiarity on task performance, and compares this with conventional strategic planning. The third study…
Post-targeting strategy for ready-to-use targeted nanodelivery post cargo loading.
Zhu, J Y; Hu, J J; Zhang, M K; Yu, W Y; Zheng, D W; Wang, X Q; Feng, J; Zhang, X Z
2017-12-14
Based on boronate formation, this study reports a post-targeting methodology capable of readily installing versatile targeting modules onto a cargo-loaded nanoplatform in aqueous media. This permits the targeted nanodelivery of broad-spectrum therapeutics (drug/gene) in a ready-to-use manner while overcoming the PEGylation dilemma that frequently occurs in conventional targeting approaches.
Knowledge-Base Semantic Gap Analysis for the Vulnerability Detection
NASA Astrophysics Data System (ADS)
Wu, Raymond; Seki, Keisuke; Sakamoto, Ryusuke; Hisada, Masayuki
Web security has become a pressing concern in Internet computing. To cope with ever-rising security complexity, semantic analysis is proposed to fill the gap that current approaches fail to address. Conventional methods limit their focus to the physical source code instead of the abstraction of its semantics; this overlooks new types of vulnerability and causes tremendous business loss.
An Inquiry-Based Chemistry Laboratory Promoting Student Discovery of Gas Laws
ERIC Educational Resources Information Center
Bopegedera, A. M. R. P.
2007-01-01
Gas laws are taught in most undergraduate general chemistry courses and even in some high school chemistry courses. This article describes the author's experience of using the laboratory to allow students to "discover" gas laws instead of the conventional approach of using the lecture to teach this concept. Students collected data using Vernier…
Child Rights and Quality Education: Child-Friendly Schools in Central and Eastern Europe (CEE)
ERIC Educational Resources Information Center
Clair, Nancy; Miske, Shirley; Patel, Deepa
2012-01-01
Since the breakup of the Soviet Union and former Yugoslavia, Central and Eastern European (CEE) countries have engaged in education reforms based on international frameworks. One of these, the Child-Friendly Schools (CFS) approach, is distinctively grounded in the Convention on the Rights of the Child (CRC). CFS standards are comprehensive,…
ERIC Educational Resources Information Center
DeStefano, Joseph; Moore, Audrey-Marie Schuh; Balwanz, David; Hartwell, Ash
2006-01-01
This issues brief describes how complementary education approaches that rely on community, nongovernmental, and ministry collaboration present a promising response to the challenge to the limitations of conventional primary schooling. The brief is based on nine case studies of successful complementary education programs in Afghanistan, Bangladesh,…
Adaptive Statistical Language Modeling: A Maximum Entropy Approach
1994-04-19
models exploit the immediate past only. To extract information from further back in the document's history, I use trigger pairs as the basic information source. [The remainder of this record is table-of-contents residue; the recoverable section titles are: Context-Free Estimation (Unigram); Short-Term History (Conventional N-gram); Short-Term Class History (Class-Based N-gram); Intermediate Distance.]
Promoting Early Career Teacher Resilience: A Framework for Understanding and Acting
ERIC Educational Resources Information Center
Johnson, Bruce; Down, Barry; Le Cornu, Rosie; Peters, Judy; Sullivan, Anna; Pearce, Jane; Hunter, Janet
2014-01-01
In this paper, we undertake a brief review of the "conventional" research into the problems of early career teachers to create a juxtaposed position from which to launch an alternative approach based on resilience theory. We outline four reasons why a new contextualised, social theory of resilience has the potential to open up the field…
Flipping the Academy: Is Learning from outside the Classroom Turning the University Inside Out?
ERIC Educational Resources Information Center
Helyer, Ruth; Corkill, Helen
2015-01-01
This paper explores the idea that the variety of approaches to experiential learning, and the diversity of ways in which learning is accessed and facilitated, is contributing to the conventional world of the university being turned upside-down. Work-based and experiential learning acknowledge learning derived from outside the classroom; similarly,…
A Fuzzy-Based Prior Knowledge Diagnostic Model with Multiple Attribute Evaluation
ERIC Educational Resources Information Center
Lin, Yi-Chun; Huang, Yueh-Min
2013-01-01
Prior knowledge is a very important part of teaching and learning, as it affects how instructors and students interact with the learning materials. In general, tests are used to assess students' prior knowledge. Nevertheless, conventional testing approaches usually assign only an overall score to each student, and this may mean that students are…
Matters of Success: A Deliberative Polling Approach to the Study of Student Retention
ERIC Educational Resources Information Center
Brown, Tucker; Kenney, Matthew T.
2014-01-01
In this article, the authors discuss a recent study they carried out at a mid-sized state university that used a polling method called deliberative polling. This type of polling differs from conventional polling in that respondents are polled before and after a deliberative session in which they discuss issues based on pertinent and…
NASA Astrophysics Data System (ADS)
Rizvi, Imran; Bulin, Anne-Laure; Anbil, Sriram R.; Briars, Emma A.; Vecchio, Daniela; Celli, Jonathan P.; Broekgaarden, Mans; Hasan, Tayyaba
2017-02-01
Targeting the molecular and cellular cues that influence treatment resistance in tumors is critical to effectively treating unresponsive populations of stubborn disease. The informed design of mechanism-based combinations is emerging as increasingly important to targeting resistance and improving the efficacy of conventional treatments, while minimizing toxicity. Photodynamic therapy (PDT) has been shown to synergize with conventional agents and to overcome the evasion pathways that cause resistance. Increasing evidence shows that PDT-based combinations cooperate mechanistically with, and improve the therapeutic index of, traditional chemotherapies. These and other findings emphasize the importance of including PDT as part of comprehensive treatment plans for cancer, particularly in complex disease sites. Identifying effective combinations requires a multi-faceted approach that includes the development of bioengineered cancer models and corresponding image analysis tools. The molecular and phenotypic basis of verteporfin-mediated PDT-based enhancement of chemotherapeutic efficacy and predictability in complex 3D models for ovarian cancer will be presented.
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance; however, the number of scans required is still large even with channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. Here, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by the shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
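For orientation, a textbook CHO computation from channelized data looks like the sketch below (the LOOL covariance estimator studied in the paper and the shuffle-based uncertainty estimate are not shown; data are synthetic):

```python
import numpy as np
from scipy.stats import norm

def cho_auc(g_absent, g_present):
    """Channelized Hotelling observer AUC from (n_samples, n_channels)
    channel outputs for signal-absent / signal-present scans."""
    dg = g_present.mean(axis=0) - g_absent.mean(axis=0)
    S = 0.5 * (np.cov(g_absent, rowvar=False) + np.cov(g_present, rowvar=False))
    w = np.linalg.solve(S, dg)                    # Hotelling template
    t0, t1 = g_absent @ w, g_present @ w          # decision variables
    snr = (t1.mean() - t0.mean()) / np.sqrt(
        0.5 * (t0.var(ddof=1) + t1.var(ddof=1)))
    return norm.cdf(snr / np.sqrt(2.0))           # AUC under normality

rng = np.random.default_rng(0)
g0 = rng.standard_normal((200, 10))               # signal-absent channels
g1 = rng.standard_normal((200, 10)) + 0.3         # weak signal present
print(cho_auc(g0, g1))
```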
NASA Astrophysics Data System (ADS)
Simpson, Mike; Ives, Matthew; Hall, Jim
2016-04-01
There is an increasing body of evidence in support of the use of nature-based solutions as a strategy to mitigate drought. Restored or constructed wetlands, grasslands and, in some cases, forests have been used with success in numerous case studies. Such solutions remain underused in the UK, where they are not considered as part of long-term supply plans by water companies. An important step is the translation of knowledge about the benefits of nature-based solutions at the upland/catchment scale into a model of the impact of these solutions on national water resource planning in terms of financial costs, carbon benefits and robustness to drought. Our project, 'A National Scale Model of Green Infrastructure for Water Resources', addresses this issue through the development of a model that can show the costs and benefits associated with a broad roll-out of nature-based solutions for water supply. We have developed generalised models of both the hydrological effects of various classes and implementations of nature-based approaches and their economic impacts in terms of construction costs, running costs, time to maturity, land use and carbon benefits. Our next step will be to compare this work with our recent evaluation of conventional water infrastructure, allowing a case to be made in financial terms and in terms of security of water supply. By demonstrating the benefits of nature-based solutions under multiple possible climate and population scenarios, we aim to demonstrate their potential value as a component of future long-term water resource plans. Strategies for decision making regarding the selection of nature-based and conventional approaches, developed through discussion with government and industry, will be applied to the final model. Our focus is on keeping the work relevant to the requirements of decision-makers involved in conventional water planning. We propose to present the outcomes of our model for the evaluation of nature-based solutions at the catchment scale and the ongoing results of our national-scale model.
Integrated Controls-Structures Design Methodology for Flexible Spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Price, D. B.
1995-01-01
This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform and of a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to those produced by the conventional approach, wherein the structural design and control design are performed sequentially.
Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong
2011-01-01
Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem in which target objects of arbitrary shape mutually interact with terrain-like surfaces, a configuration that occurs widely in the medical imaging field. The approach incorporates context information during the simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in low-order polynomial time. The performance of the method was evaluated through robust delineation of lung tumors in megavoltage cone-beam CT images, in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 ± 0.10) was improved to 0.84 ± 0.05 when employing our new method for pulmonary tumor segmentation.
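The evaluation metric quoted above is the Dice coefficient, 2|A∩B| / (|A| + |B|); a minimal reference implementation with synthetic masks:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentations."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))   # 2*9 / (16+16) = 0.5625
```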
A fully implicit finite element method for bidomain models of cardiac electromechanics
Dal, Hüsnü; Göktepe, Serdar; Kaliske, Michael; Kuhl, Ellen
2012-01-01
We propose a novel, monolithic, and unconditionally stable finite element algorithm for the bidomain-based approach to cardiac electromechanics. We introduce the transmembrane potential, the extracellular potential, and the displacement field as independent variables, and extend the common two-field bidomain formulation of electrophysiology to a three-field formulation of electromechanics. The intrinsic coupling arises from both the excitation-induced contraction of cardiac cells and the deformation-induced generation of intracellular currents. The coupled reaction-diffusion equations of the electrical problem and the momentum balance of the mechanical problem are recast into their weak forms through a conventional isoparametric Galerkin approach. As a novel aspect, we propose a monolithic approach to solve the governing equations of excitation-contraction coupling in a fully coupled, implicit sense. We demonstrate the consistent linearization of the resulting set of non-linear residual equations. To assess the algorithmic performance, we illustrate characteristic features by means of representative three-dimensional initial-boundary value problems. The proposed algorithm may open new avenues to patient-specific therapy design by circumventing the stability and convergence issues inherent to conventional staggered solution schemes.
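For reference, the two-field bidomain equations that the paper extends to a three-field electromechanical formulation are commonly written as follows (generic textbook form; the notation may differ from the paper's), with transmembrane potential φ, extracellular potential φ_e, intra- and extracellular conductivity tensors σ_i and σ_e, membrane capacitance C_m, surface-to-volume ratio χ, and ionic current I_ion; the mechanical field adds the momentum balance for the displacement:

```latex
\chi\big(C_{\mathrm m}\,\partial_t\phi + I_{\mathrm{ion}}\big)
  = \nabla\cdot\big(\boldsymbol{\sigma}_{\mathrm i}\,\nabla(\phi + \phi_{\mathrm e})\big),
\qquad
0 = \nabla\cdot\big(\boldsymbol{\sigma}_{\mathrm i}\,\nabla\phi
  + (\boldsymbol{\sigma}_{\mathrm i} + \boldsymbol{\sigma}_{\mathrm e})\,\nabla\phi_{\mathrm e}\big).
```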
Alternative approaches to conventional antiepileptic drugs in the management of paediatric epilepsy
Kneen, R; Appleton, R E
2006-01-01
Over the last two decades, there has been a rapid expansion in the number and types of available antiepileptic drugs (AEDs), but there is increasing concern amongst parents and carers about their unwanted side effects. Seizure control is achieved in approximately 75% of children treated with conventional AEDs, but non-conventional (or non-standard) medical treatments, surgical procedures, dietary approaches, and other non-pharmacological treatment approaches may have a role to play in those with intractable seizures or AED toxicity. Many of the approaches are largely common sense and are already incorporated into our current practice, including, for example, avoidance techniques and lifestyle advice, while others require further investigation or appear to be impractical in children.
A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm
NASA Astrophysics Data System (ADS)
Yin, Erwei; Zhou, Zongtan; Jiang, Jun; Chen, Fanglin; Liu, Yadong; Hu, Dewen
2013-04-01
Objective. Although extensive studies have shown improvements in spelling accuracy, the conventional P300 speller often exhibits errors that occur in almost the same row or column as the target. To address this issue, we propose a novel hybrid brain-computer interface (BCI) approach that incorporates the steady-state visual evoked potential (SSVEP) into the conventional P300 paradigm. Approach. We designed a periodic stimulus mechanism and superimposed it onto the P300 stimuli to increase the difference between symbols in the same row or column. Furthermore, we integrated random flashes and periodic flickers to simultaneously evoke the P300 and SSVEP, respectively. Finally, we developed a hybrid detection mechanism based on the P300 and SSVEP in which the target symbols are detected by the fusion of three-dimensional time-frequency features. Main results. The results obtained from 12 healthy subjects show that an online classification accuracy of 93.85% and an information transfer rate of 56.44 bit/min were achieved using the proposed BCI speller in only a single trial. Specifically, 5 of the 12 subjects achieved an information transfer rate of 63.56 bit/min with an accuracy of 100%. Significance. The pilot studies suggest that the proposed BCI speller achieves better and more stable system performance than the conventional P300 speller, and it is promising for quick spelling in stimulus-driven BCI applications.
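Figures such as the 56.44 bit/min quoted above are conventionally computed with the Wolpaw information-transfer-rate formula; the sketch below reproduces a value of that magnitude under assumed settings (the 36-target matrix and the ~4.8 s per selection are assumptions for illustration, not values taken from the paper):

```python
import numpy as np

def wolpaw_itr(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits/min."""
    p, n = accuracy, n_targets
    bits = np.log2(n)
    if 0.0 < p < 1.0:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# e.g. a 36-symbol speller at 93.85% accuracy, ~4.8 s per selection:
print(wolpaw_itr(36, 0.9385, 4.8))   # about 56.5 bit/min
```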
Spin-controlled ultrafast vertical-cavity surface-emitting lasers
NASA Astrophysics Data System (ADS)
Höpfner, Henning; Lindemann, Markus; Gerhardt, Nils C.; Hofmann, Martin R.
2014-05-01
Spin-controlled semiconductor lasers are highly attractive spintronic devices providing characteristics superior to their conventional, purely charge-based counterparts. In particular, spin-controlled vertical-cavity surface-emitting lasers (spin-VCSELs) promise to offer lower thresholds, enhanced emission intensity, spin amplification, full polarization control, chirp control and ultrafast dynamics. Most importantly, the ability to control and modulate the polarization state of the laser emission at extraordinarily high frequencies is very attractive for many applications such as broadband optical communication and ultrafast optical switches. We present a novel concept for ultrafast spin-VCSELs which has the potential to overcome the conventional speed limitation for directly modulated lasers, set by the relaxation oscillation frequency, and to reach modulation frequencies significantly above 100 GHz. The concept is based on the coupled spin-photon dynamics in birefringent micro-cavity lasers. By injecting spin-polarized carriers into the VCSEL, oscillations of the coupled spin-photon system can be induced which lead to oscillations of the polarization state of the laser emission. These oscillations are decoupled from the conventional relaxation oscillations of the carrier-photon system and can be much faster. Utilizing these polarization oscillations is thus a very promising approach to developing ultrafast spin-VCSELs for high-speed optical data communication in the near future. Different aspects of the spin and polarization dynamics, their connection to birefringence and bistability in the cavity, controlled switching of the oscillations, and the limitations of this novel approach are analysed theoretically and experimentally for spin-polarized VCSELs at room temperature.
Winning, T; Townsend, G
2007-03-01
All Australian dental schools have introduced problem-based learning (PBL) approaches to their programmes over the past decade, although the nature of the innovations has varied from school to school. Before one can ask whether PBL is better than the conventional style of education, one needs to consider three key issues. Firstly, we need to agree on what is meant by the term PBL; secondly, we need to decide what "better" means when comparing educational approaches; and thirdly, we must look carefully at how PBL is implemented in given situations. It is argued that PBL fulfils, at least in theory, some important principles relating to the development of new knowledge. It also represents a change in focus from teachers and teaching in conventional programmes to learners and learning. Generally, students enjoy PBL programmes more than conventional programmes and feel they are more nurturing. There is also some evidence of an improvement in clinical and diagnostic reasoning ability associated with PBL curricula. The main negative points raised about PBL are the costs involved and mixed reports of insufficient grounding of students in the basic sciences. Financial restraints will probably preclude the introduction of pure or fully integrated PBL programmes in Australian dental schools. However, our research and experience, as well as other published literature, indicate that well-planned hybrid PBL programmes, with matching methods of assessment, can foster development of the types of knowledge, skills and attributes that oral health professionals will need in the future.
Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus
2013-10-01
Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of data from the South Esk hydrology model of Tasmania, developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
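The reduction approach itself is simple to state in code. A minimal sketch under an assumed data layout: IDW is applied spatially at each observation time, and the query time is then handled by an independent one-dimensional interpolation.

```python
import numpy as np

def idw(xy, values, q, power=2.0, eps=1e-12):
    # Inverse distance weighting at query point q from station values.
    d = np.linalg.norm(xy - q, axis=1)
    w = 1.0 / (d + eps) ** power
    return np.sum(w * values) / np.sum(w)

def sti_reduction(xy, series, times, q_xy, q_t):
    # series: (n_times, n_stations) observations at fixed stations.
    # Space and time are treated independently: IDW per time step,
    # then linear interpolation across time (the "reduction" approach).
    spatial = np.array([idw(xy, series[k], q_xy) for k in range(len(times))])
    return np.interp(q_t, times, spatial)

xy = np.array([[0., 0.], [1., 0.], [0., 1.]])
series = np.array([[1., 2., 3.], [2., 3., 4.]])   # two time steps
print(sti_reduction(xy, series, np.array([0., 1.]), np.array([.4, .4]), 0.5))
```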
NASA Technical Reports Server (NTRS)
Ustinov, Eugene A.; Sunseri, Richard F.
2005-01-01
An approach is presented to the inversion of gravity fields based on evaluating the partials of observables with respect to gravity harmonics using the solution of the adjoint problem of the orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of desired gravity harmonics, this method involves integration of N adjoint solutions as compared to integration of N² partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher-resolution gravity models, this approach becomes increasingly more effective in terms of computer resources than the approach based on the solution of the forward problem of orbital dynamics.
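The computational saving can be illustrated on a generic linear system standing in for the orbital dynamics; the matrices, parameters and observable below are hypothetical. One backward adjoint integration yields the partials with respect to all parameters at once:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in: x' = A x + sum_j p_j b_j, observable g = c . x(T).
# The adjoint lambda' = -A^T lambda with lambda(T) = c gives
# dg/dp_j = integral_0^T lambda(t)^T b_j dt, so ONE backward run
# replaces one forward sensitivity run per parameter.
A = np.array([[0., 1.], [-1., -0.1]])
B = np.array([[0., 0.], [1., 0.5]])      # columns b_j, one per parameter
c = np.array([1., 0.])
T = 5.0

sol = solve_ivp(lambda t, lam: -A.T @ lam, [T, 0.0], c, dense_output=True)

ts = np.linspace(0.0, T, 2001)
lam = sol.sol(ts).T                      # adjoint trajectory, shape (n_t, 2)
partials = np.trapz(lam @ B, ts, axis=0) # one partial per "harmonic"
print(partials)
```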
Oberbaum, Menachem; Gropp, Cornelius
2015-03-01
Beneficence is considered a core principle of medical ethics. Evidence Based Medicine (EBM) is used almost synonymously with beneficence and has become the gold standard of efficiency in conventional medicine. Conventional modern medicine, and EBM in particular, is based on what Heidegger called calculative thinking, whereas complementary medicine (CM) is often based on contemplative thinking, according to Heidegger's distinction between different thinking processes. A central issue of beneficence is the striving for health and wellbeing. EBM is little concerned directly with wellbeing, though it does claim to aim at improving quality of life by correcting pathological processes and conditions such as infectious diseases and ischemic heart disease, but also hypertension and hyperlipidemia. On the other hand, wellbeing is central to the therapeutic efforts of CM. Scientific methods to gauge the results of EBM are quantitative and based on calculative thinking, while the results of treatments with CM are expressed in a qualitative way and based on contemplative thinking. In order to maximize beneficence, it seems important and feasible to use both approaches by combining EBM and CM in the best interest of the individual patient.
Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik
2016-09-01
In biomedical applications, data security is the most expensive resource for wireless body area networks. Cryptographic algorithms are used in order to protect the information against unauthorised access. The Advanced Encryption Standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of the substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is found that the RCA2-based S-Box has comparatively better performance than the conventional LUT-based S-Box.
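The second-order reversible construction can be sketched as follows (the rule number and lattice size are illustrative; the actual S-Box derivation in the paper involves additional structure):

```python
import numpy as np

def ca_step(row, rule=90):
    # One step of an ordinary 1D cellular automaton via Wolfram rule
    # lookup on the (left, self, right) neighborhood, periodic boundary.
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = (left << 2) | (row << 1) | right
    return (rule >> idx) & 1

def rca2_step(prev, curr, rule=90):
    # Second-order update: next = f(curr) XOR prev. Because XOR is its
    # own inverse, prev = f(curr) XOR next, so the evolution is
    # reversible -- the property that makes an RCA2 S-Box invertible.
    nxt = ca_step(curr, rule) ^ prev
    return curr, nxt

state = (np.random.randint(0, 2, 8), np.random.randint(0, 2, 8))
for _ in range(4):
    state = rca2_step(*state)
print(state[1])   # one 8-bit row; repeated evolution yields S-Box entries
```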
Recent developments of nano-structured materials as the catalysts for oxygen reduction reaction
NASA Astrophysics Data System (ADS)
Kang, SungYeon; Kim, HuiJung; Chung, Yong-Ho
2018-04-01
Development of highly efficient electrocatalyst materials has been a significant research topic for several decades. Recent global interest in energy conversion and storage has expanded efforts to find cost-effective catalysts that can substitute for conventional catalytic materials. In the field of fuel cells in particular, novel materials for the oxygen reduction reaction (ORR) have attracted attention as a way to overcome the disadvantages of conventional platinum-based catalysts. Various approaches have been attempted to achieve low cost and electrochemical activity comparable with Pt-based catalysts, including reducing Pt consumption through the formation of hybrid materials and Pt-based alloys, and developing non-Pt metal or carbon-based materials. To enhance catalytic performance and stability, numerous methods such as structural modification and complex formation with other functional materials have been proposed; these are fundamentally based on well-defined and well-ordered catalytic active sites achieved by exquisite control at the nanoscale. In this review, we highlight the development of nano-structured catalytic materials for ORR based on recent findings, and discuss an outlook for future research directions.
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
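The core of the conjoint rule is a simple Boolean conjunction; a minimal sketch with illustrative constant thresholds (in the paper the thresholds are induced by PRIM under hold-out cross-validation):

```python
import numpy as np

def conjoint_call(raw_before, raw_after, fold_min=1.8, intensity_min=200.0):
    # A gene is called up-regulated only if BOTH its fold-induction and
    # its raw fluorescence exceed their thresholds (Boolean conjunction).
    fold = raw_after / np.maximum(raw_before, 1e-9)
    return (fold >= fold_min) & (raw_after >= intensity_min)

before = np.array([150., 500., 40., 900.])
after  = np.array([400., 600., 350., 2000.])
print(conjoint_call(before, after))   # one Boolean call per gene
```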
Haj-Ali, Reem; Al Quran, Firas
2013-03-01
The purpose of this article is to describe the implementation of a team-based learning (TBL) approach in a removable denture prosthesis (RDP) module and to present the results of students' performance in individual and group TBL activities and exam scores, students' experience with TBL and end-of-course evaluations, and faculty feedback. Course material at the College of Dentistry, University of Sharjah, United Arab Emirates, was transformed into seven conventional lectures and seven TBL sessions. Each TBL session consisted of pre-assigned reading (self-directed learning), in-class individual and group readiness tests (accountability), team problem-solving of patient RDP cases, and faculty-led class discussion (knowledge application). The course was assessed through scores from TBL session activities and course examinations, a student satisfaction survey, and faculty feedback. Course grades were found to be higher using the TBL method than the traditional lecture-based method. Student evaluation data and faculty response indicated strong support for TBL as it was implemented in the course. The faculty noted a higher level of student engagement with team learning than in conventional class lecturing. TBL is an active-learning instructional strategy for courses with high student-to-faculty ratios. This approach provides regular feedback and the opportunity for students to develop higher reasoning skills.
Safety factor profiles from spectral motional Stark effect for ITER applications
NASA Astrophysics Data System (ADS)
Ko, Jinseok; Chung, Jinil; Wi, Han Min
2017-10-01
Depositions on the first mirror and multiple reflections on the other mirrors in the labyrinth of the optical system of the motional Stark effect (MSE) diagnostic for ITER are regarded as one of the main obstacles to overcome. An alternative to the present-day conventional photoelastic-modulator (PEM) based MSE principle is spectroscopic analysis of the motional Stark emission, in which either the ratios among individual Stark multiplets or the magnitude of the Stark splitting is measured; combined with precise and accurate atomic data and models, these measurements ultimately provide the critical internal constraints in the magnetic equilibrium reconstruction. Equipped with PEM-based conventional MSE hardware since 2015, the KSTAR MSE diagnostic system is capable of investigating the feasibility of the spectroscopic MSE approach, particularly via comparative studies with the PEM approach. Available atomic data and models are used to analyze the beam emission spectra with a high-spectral-resolution spectrometer with a patent-pending dispersion calibration technology. Experimental validation of the atomic data and models is discussed in association with the effects of the existence of mirrors, the Faraday rotation in the relay optics media, and the background polarized light on the measured spectra. Work supported by the Ministry of Science, ICT and Future Planning, Korea.
Vibration Noise Modeling for Measurement While Drilling System Based on FOGs
Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu
2017-01-01
To improve the long-term survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs a model of the inertial sensors' noise as the system noise model. Conventionally, the system noise is modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments were performed. Experimental results show that the proposed vibration noise modeling approach significantly improves the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%. PMID:29039815
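The DAVAR idea, an Allan variance evaluated on a sliding window so that non-stationary vibration noise appears as a time-varying profile, can be sketched as follows (window, step and cluster sizes are illustrative):

```python
import numpy as np

def allan_var(y, m):
    # Overlapping Allan variance of rate samples y for cluster size m.
    c = np.concatenate(([0.0], np.cumsum(y)))
    avg = (c[m:] - c[:-m]) / m          # cluster averages
    d = avg[m:] - avg[:-m]
    return 0.5 * np.mean(d * d)

def davar(y, window=1000, step=200, m=10):
    # Dynamic Allan variance: the same statistic on a sliding window.
    starts = range(0, len(y) - window + 1, step)
    return np.array([allan_var(y[s:s + window], m) for s in starts])

gyro = np.random.randn(10000) * 0.01    # stand-in for FOG rate output
gyro[4000:6000] += 0.05 * np.sin(np.arange(2000) * 0.3)  # "vibration"
print(davar(gyro))                       # variance rises mid-record
```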
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods to incorporate Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds the promise of substantial computational speed-up.
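A minimal sketch of the unconstrained moment-based penalty idea, with a random placeholder dose-influence matrix and made-up target moments rather than clinical data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(500, 40))   # voxel x beamlet dose-influence
target_moments = np.array([60.0, 3700.0])   # moments derived from a DVH

def penalty(w):
    # Match the first two moments of the structure's dose distribution
    # to the DVH-derived targets; penalize relative squared deviations.
    d = A @ w
    moments = np.array([d.mean(), (d ** 2).mean()])
    return np.sum(((moments - target_moments) / target_moments) ** 2)

res = minimize(penalty, x0=np.ones(40), bounds=[(0.0, None)] * 40)
print(res.fun)   # small when the prescribed moments are replicated
```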
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
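The CONV-versus-SENSOR comparison reduces to fitting and cross-validating two logistic regression models on different feature sets; a sketch with random placeholder features standing in for the clinical and motion variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 119                                    # cohort size from the study
y = rng.integers(0, 2, n)                  # fall within follow-up (0/1)
X_conv = rng.normal(size=(n, 4))           # placeholder clinical features
X_sensor = rng.normal(size=(n, 6))         # placeholder motion features

for name, X in [("CONV", X_conv), ("SENSOR", X_sensor)]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    # Random placeholders give chance-level AUC; with the real features
    # the study reported AUC 0.74 (CONV) vs 0.72 (SENSOR).
    print(name, round(auc, 2))
```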
Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng
2014-01-01
Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach can dramatically improve the spatial resolution, revealing finer details within a region of interest of a sample larger than the field of view than conventional techniques allow. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters were pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of minerals during coal processing. The method is generic and can be applied to three-dimensional compositional characterization of other materials. PMID:24763649
Power conversion distribution system using a resonant high-frequency AC link
NASA Technical Reports Server (NTRS)
Sood, P. K.; Lipo, T. A.
1986-01-01
Static power conversion systems based on a resonant high frequency (HF) link offer a significant reduction in the size and weight of the equipment over that achieved with conventional approaches, especially when multiple sources and loads are to be integrated. Faster system response and the absence of audible noise are the other principal characteristics of such systems. A conversion configuration based on an HF link, suitable for applications requiring distributed power, is proposed.
Midfield wireless powering of subwavelength autonomous devices.
Kim, Sanghoek; Ho, John S; Poon, Ada S Y
2013-05-17
We obtain an analytical bound on the efficiency of wireless power transfer to a weakly coupled device. The optimal source is solved for a multilayer geometry in terms of a representation based on the field equivalence principle. The theory reveals that optimal power transfer exploits the properties of the midfield to achieve efficiencies far greater than conventional coil-based designs. As a physical realization of the source, we present a slot array structure whose performance closely approaches the theoretical bound.
Narcolepsy: current treatment options and future approaches
Billiard, Michel
2008-01-01
The management of narcolepsy is presently at a turning point. Three main avenues are considered in this review: 1) Two tendencies characterize the conventional treatment of narcolepsy. Modafinil has replaced methylphenidate and amphetamine as the first-line treatment of excessive daytime sleepiness (EDS) and sleep attacks, based on randomized, double-blind, placebo-controlled clinical trials of modafinil, but on no direct comparison of modafinil versus traditional stimulants. For cataplexy, sleep paralysis, and hypnagogic hallucinations, new antidepressants tend to replace tricyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs) in spite of a lack of randomized, double-blind, placebo-controlled clinical trials of these compounds; 2) The conventional treatment of narcolepsy is now challenged by sodium oxybate, the sodium salt of gammahydroxybutyrate, based on a series of randomized, double-blind, placebo-controlled clinical trials and a long-term open-label study. This treatment has fairly good efficacy and is active on all symptoms of narcolepsy. Careful titration up to an adequate level is essential both to obtain positive results and to avoid adverse effects; 3) A series of new treatments are currently being tested, either in animal models or in humans. They include novel stimulant and anticataplectic drugs, endocrine therapy, and, more attractively, totally new approaches based on the present state of knowledge of the pathophysiology of narcolepsy with cataplexy, such as hypocretin-based therapies and immunotherapy. PMID:18830438
Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo
2012-01-01
Ionic liquid was used as the extraction solvent in the simultaneous ultrasonic- and microwave-assisted extraction (UMAE) of tannins from Galla chinensis. Several parameters of UMAE were optimised, and the results were compared with those of conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with conventional heat-reflux extraction, maceration extraction, and regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (enhanced by 11.7-22.0%) and a much shorter extraction time (from 6 h down to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.
Interpreting the International Right to Health in a Human Rights-Based Approach to Health.
Hunt, Paul
2016-12-01
This article tracks the shifting place of the international right to health, and human rights-based approaches to health, in the scholarly literature and United Nations (UN). From 1993 to 1994, the focus began to move from the right to health toward human rights-based approaches to health, including human rights guidance adopted by UN agencies in relation to specific health issues. There is a compelling case for a human rights-based approach to health, but it runs the risk of playing down the right to health, as evidenced by an examination of some UN human rights guidance. The right to health has important and distinctive qualities that are not provided by other rights-consequently, playing down the right to health can diminish rights-based approaches to health, as well as the right to health itself. Because general comments, the reports of UN Special Rapporteurs, and UN agencies' guidance are exercises in interpretation, I discuss methods of legal interpretation. I suggest that the International Covenant on Economic, Social and Cultural Rights permits distinctive interpretative methods within the boundaries established by the Vienna Convention on the Law of Treaties. I call for the right to health to be placed explicitly at the center of a rights-based approach and interpreted in accordance with public international law and international human rights law.
Lai, Yu-Ying; Shih, Ping-I; Li, Yi-Peng; Tsai, Che-En; Wu, Jhong-Sian; Cheng, Yen-Ju; Hsu, Chain-Shu
2013-06-12
Two new C60-based n-type materials, EGMC-OH and EGMC-COOH, functionalized with hydrophilic triethylene glycol groups (TEGs), have been synthesized and employed in conventional polymer solar cells. With the assistance of the TEG-based surfactant, EGMC-OH and EGMC-COOH can be dissolved in highly polar solvents to implement the polar/nonpolar orthogonal solvent strategy, forming an electron modification layer (EML) without eroding the underlying active layer. Multilayer conventional solar cells on the basis of ITO/PEDOT:PSS/P3HT:PC61BM/EML/Ca/Al configuration with the insertion of the EGMC-OH and EGMC-COOH EML between the active layer and the electrode have thus been successfully realized by cost-effective solution processing techniques. Moreover, the electron conductivity of the EML can be improved by incorporating alkali carbonates into the EGMC-COOH EML. Compared to the pristine device with a PCE of 3.61%, the devices modified by the Li2CO3-doped EGMC-COOH EML achieved a highest PCE of 4.29%. Furthermore, we demonstrated that the formation of the EGMC-COOH EML can be utilized as a general approach in the fabrication of highly efficient multilayer conventional devices. With the incorporation of the EGMC-COOH doped with 40 wt % Li2CO3, the PCDCTBT-C8:PC71BM-based device exhibited a superior PCE of 4.51%, which outperformed the corresponding nonmodified device with a PCE of 3.63%.
Effective dynamical coupling of hydrodynamics and transport for heavy-ion collisions
NASA Astrophysics Data System (ADS)
Oliinychenko, Dmytro; Petersen, Hannah
2017-04-01
Present hydrodynamics-based simulations of heavy-ion collisions neglect the feedback from the frozen-out particles flying back into the hydrodynamical region. This causes an artefact called “negative Cooper-Frye contributions”, which is negligible for high collision energies, but becomes significant for lower RHIC BES energies and for event-by-event simulations. To avoid negative Cooper-Frye contributions, while still preserving hydrodynamical behavior, we propose a pure hadronic transport approach with forced thermalization in the regions of high energy density. It is demonstrated that this approach exhibits enhancement of strangeness and mean transverse momenta compared to conventional transport - an effect typical for hydrodynamical approaches.
NASA Astrophysics Data System (ADS)
Sharan, A. M.; Sankar, S.; Sankar, T. S.
1982-08-01
A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
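For reference, the conventional frequency-domain computation that such methods are benchmarked against forms the response spectral density from the frequency response matrix; a minimal sketch with illustrative system matrices:

```python
import numpy as np

# For a linear MDOF system M x'' + C x' + K x = f(t) under stationary
# random excitation with input spectral density S_ff, the response
# spectral density is S_xx(w) = H(w) S_ff H(w)^H, where
# H(w) = (K - w^2 M + i w C)^(-1). Matrices below are illustrative.
M = np.diag([1.0, 2.0])
K = np.array([[30.0, -10.0], [-10.0, 10.0]])
C = 0.05 * M + 0.002 * K                 # Rayleigh damping (assumed)
S_ff = np.eye(2)                          # white-noise excitation

def response_psd(w):
    H = np.linalg.inv(K - w**2 * M + 1j * w * C)
    return H @ S_ff @ H.conj().T

for w in (1.0, 3.0, 5.0):
    print(w, np.real(np.diag(response_psd(w))))  # displacement PSDs
```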
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest path and fuzzy decision making problems.
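The recursion the network is built around is Bellman's optimality principle; for reference, here it is written out conventionally for a small stage graph (the graph and costs are invented):

```python
import math

graph = {                      # node -> {successor: edge cost}
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}

def bellman_shortest(graph, target):
    # Repeated relaxation of cost(n) = min over successors m of
    # edge(n, m) + cost(m): Bellman's optimality principle.
    cost = {n: (0.0 if n == target else math.inf) for n in graph}
    for _ in range(len(graph) - 1):
        for n, succ in graph.items():
            for m, c in succ.items():
                cost[n] = min(cost[n], c + cost[m])
    return cost

print(bellman_shortest(graph, "D"))   # A: 4.0, B: 2.0, C: 1.0, D: 0.0
```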
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. The Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. The algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using the simulation algorithm that is also described in this paper. PMID:19796403
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on the resultant definitive casts were performed using a laser scanner and quality-control software. The inter-implant distances and inter-implant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both the digital and conventional approaches. Statistical analysis was performed using the t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using the chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) errors of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
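The 3D error metrics themselves are straightforward; a sketch with invented coordinates showing how an inter-implant distance error and an inter-implant angulation error would be computed from scanned implant positions and axes:

```python
import numpy as np

def implant_metrics(p1, p2, a1, a2):
    # Inter-implant distance between positions p1, p2 and angle between
    # implant axes a1, a2 (all in the scanned cast's coordinate frame).
    dist = np.linalg.norm(p2 - p1)
    cosang = np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return dist, ang

ref  = implant_metrics(np.array([0., 0., 0.]), np.array([22., 0., 0.]),
                       np.array([0., 0., 1.]), np.array([0., 0., 1.]))
cast = implant_metrics(np.array([0., 0., 0.]), np.array([22.09, 0.02, 0.]),
                       np.array([0., 0.02, 1.]), np.array([0., 0., 1.]))
print(abs(cast[0] - ref[0]) * 1000, "um;", abs(cast[1] - ref[1]), "deg")
```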
DG Planning with Amalgamation of Operational and Reliability Considerations
NASA Astrophysics Data System (ADS)
Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan
2016-04-01
Distributed Generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach which provides the policy maker with a set of solutions for DG placement to optimize the reliability and real power loss of the system. The optimal location of a Distributed Generator is evaluated based on performance indices derived for the reliability index and real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system, with conventional and wind distributed generators considered individually.
Fuzzy logic modeling of high performance rechargeable batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, P.; Fennie, C. Jr.; Reisner, D.E.
1998-07-01
Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
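A minimal sketch of a fuzzy-logic SOC estimator in the spirit described (the impedance feature, membership breakpoints and rule consequents are purely illustrative):

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def soc_estimate(z):
    # z: a measured impedance feature (mOhm). Fuzzify, apply a tiny
    # rule base (low impedance -> high SOC), defuzzify by centroid.
    mu = np.array([tri(z, 10, 15, 25),    # "low" impedance
                   tri(z, 15, 25, 40),    # "medium"
                   tri(z, 25, 40, 60)])   # "high"
    soc_levels = np.array([90.0, 50.0, 15.0])   # rule consequents (% SOC)
    return float(np.sum(mu * soc_levels) / np.sum(mu))

for z in (14.0, 26.0, 45.0):
    print(z, round(soc_estimate(z), 1))
```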
Understanding electron magnetic circular dichroism in a transition potential approach
NASA Astrophysics Data System (ADS)
Barthel, J.; Mayer, J.; Rusz, J.; Ho, P.-L.; Zhong, X. Y.; Lentzen, M.; Dunin-Borkowski, R. E.; Urban, K. W.; Brown, H. G.; Findlay, S. D.; Allen, L. J.
2018-04-01
This paper introduces an approach based on transition potentials for inelastic scattering to understand the underlying physics of electron magnetic circular dichroism (EMCD). The transition potentials are sufficiently localized to permit atomic-scale EMCD. Two-beam and three-beam systematic row cases are discussed in detail in terms of transition potentials for conventional transmission electron microscopy, and the basic symmetries which arise in the three-beam case are confirmed experimentally. Atomic-scale EMCD in scanning transmission electron microscopy (STEM), using both a standard STEM probe and vortex beams, is discussed.
A new modelling approach for zooplankton behaviour
NASA Astrophysics Data System (ADS)
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of Beer [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models, namely random walk and correlated walk models, as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
Majdani, Omid; Bartling, Soenke H; Leinung, Martin; Stöver, Timo; Lenarz, Minoo; Dullin, Christian; Lenarz, Thomas
2008-02-01
High-precision intraoperative navigation using high-resolution flat-panel volume computed tomography makes feasible the possibility of minimally invasive cochlear implant surgery, including cochleostomy. Conventional cochlear implant surgery is typically performed via mastoidectomy with facial recess to identify and avoid damage to vital anatomic landmarks. To accomplish this procedure via a minimally invasive approach--without performing mastoidectomy--in a precise fashion, image-guided technology is necessary. With such an approach, surgical time and expertise may be reduced, and hearing preservation may be improved. Flat-panel volume computed tomography was used to scan 4 human temporal bones. A drilling channel was planned preoperatively from the mastoid surface to the round window niche, providing a margin of safety to all functional important structures (e.g., facial nerve, chorda tympani, incus). Postoperatively, computed tomographic imaging and conventional surgical exploration of the drilled route to the cochlea were performed. All 4 specimens showed a cochleostomy located at the scala tympani anterior inferior to the round window. The chorda tympani was damaged in 1 specimen--this was preoperatively planned as a narrow facial recess was encountered. Using flat-panel volume computed tomography for image-guided surgical navigation, we were able to perform minimally invasive cochlear implant surgery defined as a narrow, single-channel mastoidotomy with cochleostomy. Although this finding is preliminary, it is technologically achievable.
Analysis and Synthesis of Memory-Based Fuzzy Sliding Mode Controllers.
Zhang, Jinhui; Lin, Yujuan; Feng, Gang
2015-12-01
This paper addresses the sliding mode control problem for a class of Takagi-Sugeno fuzzy systems with matched uncertainties. Different from the conventional memoryless sliding surface, a memory-based sliding surface is proposed which consists of not only the current state but also the delayed state. Both robust and adaptive fuzzy sliding mode controllers are designed based on the proposed memory-based sliding surface. It is shown that the sliding surface can be reached and the closed-loop control system is asymptotically stable. Furthermore, to reduce the chattering, some continuous sliding mode controllers are also presented. Finally, the ball and beam system is used to illustrate the advantages and effectiveness of the proposed approaches. It can be seen that, with the proposed control approaches, not only can the stability be guaranteed, but also its transient performance can be improved significantly.
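The memory-based surface is easy to illustrate on a toy double integrator; the gains, delay and disturbance below are illustrative, and this is not the paper's Takagi-Sugeno fuzzy design:

```python
import numpy as np

# Memory-based sliding surface s = v + c1*x(t) + c2*x(t - tau): the
# surface uses the current state AND a delayed state, and a smoothed
# switching law (tanh instead of sign) reduces chattering.
dt, tau = 1e-3, 0.05
c1, c2, k, eps = 4.0, 1.0, 6.0, 0.05
nd = int(tau / dt)

x, v = 1.0, 0.0
hist = [x] * (nd + 1)                     # delay buffer for x(t - tau)

for step in range(8000):
    x_del = hist[0]
    s = v + c1 * x + c2 * x_del           # memory-based sliding surface
    u = -c1 * v - k * np.tanh(s / eps)    # continuous switching law
    d = 0.3 * np.sin(0.01 * step)         # matched disturbance
    v += (u + d) * dt
    x += v * dt
    hist.pop(0)
    hist.append(x)

print(round(x, 4), round(v, 4))           # near the origin after 8 s
```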
Code of Federal Regulations, 2013 CFR
2013-01-01
..., set the clock time to 3:23 and use the average power approach described in Section 5, Paragraph 5.3.2... conventional ranges, conventional cooking tops, conventional ovens, and microwave ovens at this time. However... finite period of time after the end of the heating function, where the end of the heating function is...
NASA Astrophysics Data System (ADS)
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to optimal with a much shorter solving time than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also makes it possible to model and solve more realistic problems that incorporate dynamism and uncertainty.
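A minimal sketch of the iterative hybrid loop on a made-up single-echelon instance: an analytical formula proposes a stock level, a Monte Carlo simulation (standing in for the discrete-event model) evaluates the realized service level under dynamics the formula ignores, and the analytical model's inputs are corrected until the termination criterion is met.

```python
import numpy as np

rng = np.random.default_rng(1)

def analytical_stock(mu, sigma, z=1.64):
    # Newsvendor-style analytical proposal for ~95% service.
    return mu + z * sigma

def simulate_service(stock, n=20000):
    # Monte Carlo stand-in for the discrete-event simulation; the real
    # demand is non-normal, which the analytical formula does not know.
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=n)
    return np.mean(demand <= stock)

sigma_eff, target = 10.0, 0.95
for it in range(30):
    stock = analytical_stock(mu=20.0, sigma=sigma_eff)
    service = simulate_service(stock)
    if abs(service - target) < 0.002:     # termination criterion
        break
    sigma_eff *= 1.0 + (target - service) # correct the analytical input
print(it, round(stock, 2), round(service, 3))
```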
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel-based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for the rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
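A sketch of how a voxel-based iso-TCP map exposes what an isodose map hides, using a common phenomenological logit dose-response with illustrative parameters (not the paper's model):

```python
import numpy as np

def voxel_tcp(dose, d50=70.0, gamma50=2.0):
    # Phenomenological logit model: TCP(D) = 1 / (1 + (D50/D)^(4*gamma50)).
    return 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-6)) ** (4.0 * gamma50))

dose = np.full((4, 4), 84.0)        # a uniform "EUD = 84 Gy" style plan
dose[1, 1] = 66.0                   # a subtle cold spot in a boost region
tcp_map = voxel_tcp(dose)
print(np.round(tcp_map, 2))          # the cold spot stands out as low TCP
```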
Refugees in Conflict: Creating a Bridge Between Traditional and Conventional Health Belief Models.
Ben-Arye, Eran; Bonucci, Massimo; Daher, Michel; Kebudi, Rejin; Saad, Bashar; Breitkreuz, Thomas; Rassouli, Maryam; Rossi, Elio; Gafer, Nahla; Nimri, Omar; Hablas, Mohamed; Kienle, Gunver Sophia; Samuels, Noah; Silbermann, Michael
2018-06-01
The recent wave of migration from Middle Eastern countries to Europe presents significant challenges to the European health profession. These include the inevitable communication gap created by differences in health care beliefs between European oncologists, health care practitioners, and refugee patients. This article presents the conclusions of a workshop attended by a group of clinicians and researchers affiliated with the Middle East Cancer Consortium, as well as four European-based health-related organizations. Workshop participants included leading clinicians and medical educators from the field of integrative medicine and supportive cancer care from Italy, Germany, Turkey, Israel, Palestine, Iran, Lebanon, Jordan, Egypt, and Sudan. The workshop illustrated the need for creating a dialogue between European health care professionals and the refugee population in order to overcome the communication barriers and create a healing process. The affinity for complementary and traditional medicine (CTM) among many refugee populations was also addressed, directing participants to the mediating role that integrative medicine serves between CTM and conventional medicine health belief models. This is especially relevant to the use of herbal medicine among oncology patients, for whom an open and nonjudgmental (yet evidence-based) dialogue is of utmost importance. The workshop concluded with a recommendation for the creation of a comprehensive health care model, to include bio-psycho-social and cultural-spiritual elements, addressing both acute and chronic medical conditions. These models need to be codesigned by European and Middle Eastern clinicians and researchers, internalizing a culturally sensitive approach and an ethical commitment to the refugee population, as well as to indigenous groups originating from Middle Eastern and North African countries. European oncologists face a communication gap with refugee patients who have recently immigrated from Middle Eastern and North African countries, with their different health belief models and affinity for traditional and herbal medicine. A culturally sensitive approach to care will foster doctor-refugee communication, through the integration of evidence-based medicine within a nonjudgmental, bio-psycho-social-cultural-spiritual agenda, addressing patients' expectations within a supportive and palliative care context. Integrative physicians, who are conventional doctors trained in traditional/complementary medicine, can mediate between conventional and traditional/herbal paradigms of care, facilitating doctor-patient communication through education and by providing clinical consultations within conventional oncology centers. © AlphaMed Press 2017.
NASA Technical Reports Server (NTRS)
Grantham, W. D.; Nguyen, L. T.; Deal, P. L.; Neubauer, M. J.; Smith, P. M.; Gregory, F. D.
1978-01-01
Conventional and powered lift concepts for supersonic approach and landing tasks are considered. Results indicated that the transport concepts had unacceptable low-speed handling qualities with no augmentation, and that in order to achieve satisfactory handling qualities, considerable augmentation was required. The available roll-control power was acceptable for the powered-lift concept.
Strange particles from NEXUS 3
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-01-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for strange particle production in proton-proton scattering at 158 GeV and 200 GeV centre-of-mass (cms) are discussed. This paper was presented at Strange Quark Matter Conference, Atlantic Beach, North Carolina, 12-17 March 2003.
Geometric correction of satellite data using curvilinear features and virtual control points
NASA Technical Reports Server (NTRS)
Algazi, V. R.; Ford, G. E.; Meyer, D. I.
1979-01-01
A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interactive image display system is employed.
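Once control points are available, the correction itself is a least-squares fit of a mapping from image to map coordinates; a sketch with invented control points (the paper's contribution is how the points are obtained from enhanced curvilinear features, not this fitting step):

```python
import numpy as np

# Control points: image (line, sample) vs map coordinates (invented).
img = np.array([[10., 12.], [200., 30.], [50., 180.], [220., 210.]])
mapc = np.array([[1002., 998.], [1190., 1015.], [1040., 1170.], [1205., 1195.]])

# Fit an affine mapping [x, y, 1] -> map coordinates by least squares.
G = np.hstack([img, np.ones((len(img), 1))])
coef, *_ = np.linalg.lstsq(G, mapc, rcond=None)   # 3x2 affine coefficients

def to_map(xy):
    return np.append(xy, 1.0) @ coef

print(to_map(np.array([100., 100.])))             # corrected coordinate
```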
Order from Chaos: An Arts-Based Approach to Counteract Trauma and Violence
ERIC Educational Resources Information Center
Kay, Lisa; Arnold, Alice
2014-01-01
In this visual essay, NAEA Women's Caucus members gather at the National Convention to create collages in an effort to brainstorm ways to end violence. The mission of the Women's Caucus of the National Art Education Association is a clear call for equity between men and women and a call for social justice in general. Discriminatory practices and…
On the Importance of a Genre-Based Approach in the Teaching of English for Medical Purposes
ERIC Educational Resources Information Center
León Pérez, Isabel K.; Martín-Martín, Pedro
2016-01-01
In experimental disciplinary fields such as medicine, the writing up of a research paper in English may represent a major hurdle, especially for inexperienced writers and users of EAL (English as an Additional Language), mainly due to a lack of familiarity with international discourse conventions. Despite the efforts of many EAP (English for…
Radiotherapy using a laser proton accelerator
NASA Astrophysics Data System (ADS)
Murakami, Masao; Hishikawa, Yoshio; Miyajima, Satoshi; Okazaki, Yoshiko; Sutherland, Kenneth L.; Abe, Mitsuyuki; Bulanov, Sergei V.; Daido, Hiroyuki; Esirkepov, Timur Zh.; Koga, James; Yamagiwa, Mitsuru; Tajima, Toshiki
2008-06-01
Laser acceleration promises innovation in particle beam therapy of cancer where an ultra-compact accelerator system for cancer beam therapy can become affordable to a broad range of patients. This is not feasible without the introduction of a technology that is radically different from the conventional accelerator-based approach. Because of its compactness and other novel characteristics, the laser acceleration method provides many enhanced capabilities
ERIC Educational Resources Information Center
Sisco, Melissa M.; Figueredo, Aurelio Jose
2008-01-01
Surveys and focus groups were administered to two samples of US university undergraduates to compare sexual aggression prevalence as assessed based on the Power-Assertion model (n = 139) versus the Confluence model (n = 318). Men were more likely to commit all illegal acts, especially conventional rape. Women also committed illegal acts,…
NASA Astrophysics Data System (ADS)
Vasco, D. W.
2018-04-01
Following an approach used in quantum dynamics, an exponential representation of the hydraulic head transforms the diffusion equation governing pressure propagation into an equivalent set of ordinary differential equations. Using a reservoir simulator to determine one set of dependent variables leaves a reduced set of equations for the path of a pressure transient. Unlike the current approach for computing the path of a transient, based on a high-frequency asymptotic solution, the trajectories resulting from this new formulation are valid for arbitrary spatial variations in aquifer properties. For a medium containing interfaces and layers with sharp boundaries, the trajectory mechanics approach produces paths that are compatible with travel time fields produced by a numerical simulator, while the asymptotic solution produces paths that bend too strongly into high permeability regions. The breakdown of the conventional asymptotic solution, due to the presence of sharp boundaries, has implications for model parameter sensitivity calculations and the solution of the inverse problem. For example, near an abrupt boundary, trajectories based on the asymptotic approach deviate significantly from regions of high sensitivity observed in numerical computations. In contrast, paths based on the new trajectory mechanics approach coincide with regions of maximum sensitivity to permeability changes.
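As a rough illustration of the kind of exponential substitution described (the notation S, K, and sigma below is ours, not the paper's): writing the head as an exponential converts the linear diffusion equation into a nonlinear equation for the exponent, whose gradient can then be propagated along characteristic paths.

```latex
% Exponential (Cole--Hopf-like) substitution into the diffusion equation.
% S(x): storage coefficient, K(x): hydraulic conductivity (assumed notation).
\begin{align}
  S(\mathbf{x})\,\partial_t h &= \nabla\!\cdot\!\bigl(K(\mathbf{x})\nabla h\bigr),
  \qquad h(\mathbf{x},t) = e^{-\sigma(\mathbf{x},t)} \\
  \Rightarrow\quad
  S\,\partial_t \sigma &= \nabla\!\cdot\!\bigl(K\nabla\sigma\bigr)
                          - K\,\lvert\nabla\sigma\rvert^{2}
\end{align}
```

Treating the gradient of sigma as a momentum-like variable then yields Hamilton-type ordinary differential equations for the transient's path, which is presumably what makes a trajectory-mechanics formulation possible without the high-frequency assumption.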
NASA Astrophysics Data System (ADS)
Wang, Jingcheng; Luo, Jingrun
2018-04-01
Due to the extremely high particle volume fraction (greater than 85%) and the damage features of polymer bonded explosives (PBXs), conventional micromechanical methods lead to inaccurate estimates of their effective elastic properties. According to their manufacturing characteristics, a multistep approach based on micromechanical methods is proposed. PBXs are treated as pseudo-polycrystalline materials consisting of equivalent composite particles (explosive crystals with binder coating), rather than two-phase composites composed of explosive particles and binder matrix. Moduli of the composite spheres are first obtained by the generalized self-consistent method, and the self-consistent method is modified to calculate the effective moduli of the PBX. Defects and the particle size distribution are considered via the Mori-Tanaka method. Results show that when the multistep approach is applied to PBX 9501, the estimates are far more accurate than the conventional micromechanical results: the bulk modulus is 5.75% higher, and the shear modulus 5.78% lower, than the experimental values. Further analyses reveal that while the particle volume fraction and the binder's properties have a significant influence on the effective moduli of the PBX, the moduli of the particles have only a minor influence. Investigation of another particle size distribution indicates that using finer particles will enhance the effective moduli of the PBX.
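For orientation, here is a sketch of the Mori-Tanaka estimate that such a multistep scheme composes, used here only for the first step (coating an explosive crystal with binder). All moduli are hypothetical, and the paper's generalized and modified self-consistent steps are not reproduced.

```python
def mori_tanaka(Km, Gm, Kp, Gp, c):
    """Mori-Tanaka estimate for spherical inclusions (volume fraction c)
    in a matrix; for the bulk modulus this coincides with the
    Hashin-Shtrikman lower bound."""
    fK = Km + 4.0 / 3.0 * Gm
    K = Km + c * (Kp - Km) / (1 + (1 - c) * (Kp - Km) / fK)
    fG = Gm * (9 * Km + 8 * Gm) / (6 * (Km + 2 * Gm))
    G = Gm + c * (Gp - Gm) / (1 + (1 - c) * (Gp - Gm) / fG)
    return K, G

# Illustrative (not the paper's) moduli in GPa for an HMX-like crystal
# and a soft polymer binder.
K_x, G_x = 12.5, 4.8   # crystal (hypothetical values)
K_b, G_b = 1.0, 0.35   # binder  (hypothetical values)

# Step 1: crystal core plus binder coating -> equivalent composite particle.
c_local = 0.92         # crystal fraction inside a coated particle
K_cp, G_cp = mori_tanaka(K_b, G_b, K_x, G_x, c_local)
print(f"composite particle: K = {K_cp:.2f} GPa, G = {G_cp:.2f} GPa")
```

In the paper's second step the aggregate of such composite particles is homogenized self-consistently as a pseudo-polycrystal; that iteration is omitted here for brevity.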
Hard-X-ray dark-field imaging using a grating interferometer.
Pfeiffer, F; Bech, M; Bunk, O; Kraft, P; Eikenberry, E F; Brönnimann, Ch; Grünzweig, C; David, C
2008-02-01
Imaging with visible light today uses numerous contrast mechanisms, including bright- and dark-field contrast, phase-contrast schemes and confocal and fluorescence-based methods. X-ray imaging, on the other hand, has only recently seen the development of an analogous variety of contrast modalities. Although X-ray phase-contrast imaging could successfully be implemented at a relatively early stage with several techniques, dark-field imaging, or more generally scattering-based imaging, with hard X-rays and good signal-to-noise ratio, in practice still remains a challenging task even at highly brilliant synchrotron sources. In this letter, we report a new approach on the basis of a grating interferometer that can efficiently yield dark-field scatter images of high quality, even with conventional X-ray tube sources. Because the image contrast is formed through the mechanism of small-angle scattering, it provides complementary and otherwise inaccessible structural information about the specimen at the micrometre and submicrometre length scale. Our approach is fully compatible with conventional transmission radiography and a recently developed hard-X-ray phase-contrast imaging scheme. Applications to X-ray medical imaging, industrial non-destructive testing and security screening are discussed.
Tracing the conformational changes in BSA using FRET with environmentally-sensitive squaraine probes
NASA Astrophysics Data System (ADS)
Govor, Iryna V.; Tatarets, Anatoliy L.; Obukhova, Olena M.; Terpetschnig, Ewald A.; Gellerman, Gary; Patsenker, Leonid D.
2016-06-01
A new potential method for detecting conformational changes in hydrophobic proteins such as bovine serum albumin (BSA) is introduced. The method is based on the change in the Förster resonance energy transfer (FRET) efficiency between protein-sensitive fluorescent probes. In contrast to conventional FRET-based methods, in this new approach the donor and acceptor dyes are not covalently linked to the protein molecules. Performance of the new method is demonstrated using the protein-sensitive squaraine probes Square-634 (donor) and Square-685 (acceptor) to detect urea-induced conformational changes of BSA. The FRET efficiency between these probes is a more sensitive parameter for tracing protein unfolding than the changes in fluorescence intensity of either probe alone. Addition of urea, followed by BSA unfolding, causes a noticeable decrease in the emission intensities of these probes (a factor of 5.6 for Square-634 and 3.0 for Square-685), whereas the FRET efficiency changes by a factor of up to 17. Compared to the conventional method, the new approach is therefore demonstrably more sensitive in detecting conformational changes in BSA.
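The sensitivity gain follows from the standard FRET relations (textbook forms, not taken from the paper), where r is the donor-acceptor distance, R_0 the Förster radius, and F_DA, F_D the donor emission with and without the acceptor:

```latex
\begin{equation}
  E \;=\; \frac{R_0^{\,6}}{R_0^{\,6} + r^{6}}
    \;=\; 1 - \frac{F_{DA}}{F_{D}}
\end{equation}
```

Because E varies as the sixth power of the donor-acceptor distance, a modest redistribution of the probes upon unfolding can change E far more than it changes either emission intensity alone, consistent with the factor of 17 versus the factors of 5.6 and 3.0 reported above.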
High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations
NASA Technical Reports Server (NTRS)
Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.
2017-01-01
To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We develop a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.
Schemes of detecting nuclear spin correlations by dynamical decoupling based quantum sensing
NASA Astrophysics Data System (ADS)
Ma, Wen-Long; Liu, Ren-Bao
Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are among the greatest challenges in magnetic microscopy. Recent development in dynamical decoupling (DD) enhanced diamond quantum sensing has enabled NMR of single nuclear spins and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the frequency fingerprints of target nuclear spins. Such schemes, however, cannot resolve different nuclear spins that have the same noise frequency, or differentiate different types of correlations in nuclear spin clusters. Here we show that the first limitation can be overcome by using wavefunction fingerprints of target nuclear spins, which are much more sensitive than the frequency fingerprints to the weak hyperfine interaction between the targets and a sensor, while the second can be overcome by a new design of two-dimensional DD sequences composed of two sets of periodic DD sequences with different periods, which can be independently set to match two different transition frequencies. Our schemes not only offer an approach to breaking the resolution limit set by "frequency gradients" in conventional MRI, but also provide a standard approach to correlation spectroscopy for single-molecule NMR.
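As a toy illustration of period-matched DD timing (our own construction, far simpler than the paper's two-dimensional design): a CPMG train with interpulse spacing tau filters noise most strongly near f = 1/(2*tau), so interleaving blocks with two different spacings can address two transition frequencies.

```python
import numpy as np

def cpmg_times(tau, n_pulses, t0=0.0):
    """Pi-pulse instants of a CPMG train with interpulse spacing tau:
    pulses at t0 + tau/2, t0 + 3*tau/2, ...  The resulting filter is
    most sensitive near odd harmonics of f = 1/(2*tau)."""
    return t0 + tau * (np.arange(n_pulses) + 0.5)

# Hypothetical targets: two nuclear-spin transition frequencies (Hz).
f1, f2 = 500e3, 320e3
tau1, tau2 = 1 / (2 * f1), 1 / (2 * f2)

# Alternate blocks of each period so each block addresses one transition
# (illustration only; the paper's sequence design is more subtle).
block1 = cpmg_times(tau1, 8)
block2 = cpmg_times(tau2, 8, t0=block1[-1] + tau1 / 2 + tau2 / 2)
sequence = np.concatenate([block1, block2])
print(np.round(sequence * 1e6, 3), "microseconds")
```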
Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue
NASA Technical Reports Server (NTRS)
Ayache, S.; Haziza, M.; Cayrac, D.
1994-01-01
Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of knowledge-based system (KBS) technology, and the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBSs in space applications. In this perspective, MMS has to prepare for the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
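A minimal sketch of the flavor of such a probabilistic, matrix-based evaluation, assuming a hypothetical coverage matrix and independently failing checks (the paper's model is richer than this):

```python
import numpy as np

# Hypothetical coverage matrix C: C[i, j] is the probability that
# on-line check j detects fault i (values illustrative only).
C = np.array([
    [0.95, 0.40, 0.00],
    [0.10, 0.85, 0.30],
    [0.00, 0.20, 0.90],
])
fault_prob = np.array([0.5, 0.3, 0.2])  # relative fault likelihoods

# Assuming checks miss independently, the probability that fault i
# escapes every check is the product of the individual misses.
p_detect = 1.0 - np.prod(1.0 - C, axis=1)
print("per-fault detection:", np.round(p_detect, 4))
print("system coverage    :", np.round(fault_prob @ p_detect, 4))
```

Replacing worst-case (0/1) entries with such probabilities is what lets the evaluation reflect average rather than pessimal fault-tolerance behavior.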
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
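The specialist algorithms themselves are not spelled out in the abstract, but a coarse color-histogram matcher of the kind conventional content-based retrieval builds on can be sketched as follows (entirely our illustration, not the system's code):

```python
import numpy as np

def rgb_histogram(img, bins=8):
    """Coarse RGB histogram descriptor for an HxWx3 uint8 image."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return np.minimum(h1, h2).sum()

# Toy query against a two-image "collection" of random data.
rng = np.random.default_rng(0)
collection = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(2)]
query = collection[0].copy()
scores = [intersection(rgb_histogram(query), rgb_histogram(im)) for im in collection]
print(np.round(scores, 3))  # the matching image scores highest
```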
Hybrid Visible Light and Ultrasound-Based Sensor for Distance Estimation
Rabadan, Jose; Guerra, Victor; Rodríguez, Rafael; Rufo, Julio; Luna-Rivera, Martin; Perez-Jimenez, Rafael
2017-01-01
Distance estimation plays an important role in location-based services, which have become very popular in recent years. In this paper, a new short-range cricket-sensor-based approach is proposed for indoor location applications. This solution uses the Time Difference of Arrival (TDoA) between an optical and an ultrasound signal, transmitted simultaneously, to estimate the distance from the base station to the mobile receiver. The TDoA measured at the mobile receiver is proportional to the distance. The use of optical and ultrasound signals instead of conventional radio wave signals makes the proposed approach suitable for environments with high levels of electromagnetic interference or where the propagation of radio frequencies is entirely restricted. Furthermore, unlike classical cricket systems, a two-way measurement procedure is introduced, allowing both the base station and the mobile node to perform distance estimation simultaneously. PMID:28208584
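The ranging principle reduces to one line: the optical pulse arrives effectively instantly at room scale and timestamps the emission, so the ultrasound delay times the speed of sound gives the range. A minimal sketch with our own numbers:

```python
# Minimal sketch of the TDoA ranging idea (our illustration, not the
# authors' implementation).
V_SOUND = 343.0   # m/s at ~20 C; in practice compensate for temperature

def distance_from_tdoa(tdoa_s, v_sound=V_SOUND):
    """Distance in metres from the optical/ultrasound arrival gap."""
    return v_sound * tdoa_s

# An 11.7 ms gap corresponds to roughly 4 m.
print(f"{distance_from_tdoa(11.7e-3):.2f} m")
```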
Winrow, Benjamin; Bile, Khalif; Hafeez, Assad; Davies, Hugh; Brown, Nick; Zafar, Shamsa; Cham, Mamady; Phillips, Barbara; MacDonald, Rhona; Southall, David P
2012-05-01
For a multitude of eminently modifiable reasons, death rates for pregnant women and girls and their newborn infants in poorly resourced countries remain unacceptably high. The concomitant high morbidity rates compound the situation. The rights of these vulnerable individuals are incompletely protected by existing United Nations human rights conventions, which many countries have failed to implement. The authors propose a novel approach grounded on both human rights and robust evidence-based clinical guidelines to create a 'human rights convention specifically for pregnant women and girls and their newborn infants'. The approach targets the 'right to health' of these large, vulnerable and neglected populations. The proposed convention is designed so that it can be monitored, audited and evaluated objectively. It should also foster a sense of national ownership and accountability as it is designed to be relevant to local situations and to be incorporated into local clinical governance systems. It may be of particular value to those countries that are not yet on target to meet the Millennium Development Goals (MDGs), especially MDGs 4 and 5, which target child and maternal mortality, respectively. To foster a sense of international responsibility, two additional initiatives are integral to its philosophy: the promotion of twinning between well and poorly resourced regions and a raising of awareness of how some well-resourced countries can damage the health of mothers and babies, for example, through the recruitment of health workers trained by national governments and taken from the public health system.
Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M
2018-08-17
Accurate measurement of molecular weight averages (M̄n, M̄w, M̄z) and molecular weight distributions (MWDs) of polyether polyols by conventional size exclusion chromatography (SEC) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not give accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) standards for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in tetrahydrofuran (THF)-based SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from a conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS-equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
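A minimal sketch of the graphical-regression flavor of such a conversion, assuming hypothetical paired narrow standards and a linear relation between the logarithmic calibration curves (the paper's actual coefficients are not reproduced):

```python
import numpy as np

# Hypothetical paired narrow standards measured on the same column set:
# PS-apparent molecular weight vs. polyol molecular weight at the same
# retention volume (values illustrative only).
M_ps = np.array([1.2e3, 4.0e3, 1.1e4, 3.5e4])
M_polyol = np.array([6.5e2, 2.1e3, 5.6e3, 1.7e4])

# Fit log M_polyol = a + b * log M_ps, the form of a conversion
# equation between the two calibration curves.
b, a = np.polyfit(np.log10(M_ps), np.log10(M_polyol), 1)

def ps_to_polyol(m_ps):
    """Convert a PS-apparent molecular weight to a polyol estimate."""
    return 10 ** (a + b * np.log10(m_ps))

print(f"log M_polyol = {a:.3f} + {b:.3f} * log M_ps")
print(f"PS-apparent 8.0e3 -> polyol {ps_to_polyol(8.0e3):.0f}")
```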
NASA Astrophysics Data System (ADS)
Haney, Michael W.
2015-12-01
The economies-of-scale and enhanced performance of integrated micro-technologies have repeatedly delivered disruptive market impact. Examples range from microelectronics to displays to lighting. However, integrated micro-scale technologies have yet to be applied in a transformational way to solar photovoltaic panels. The recently announced Micro-scale Optimized Solar-cell Arrays with Integrated Concentration (MOSAIC) program aims to create a new paradigm in solar photovoltaic panel technology based on the incorporation of micro-concentrating photo-voltaic (μ-CPV) cells. As depicted in Figure 1, MOSAIC will integrate arrays of micro-optical concentrating elements and micro-scale PV elements to achieve the same aggregated collection area and high conversion efficiency of a conventional (i.e., macro-scale) CPV approach, but with the low profile and mass, and hopefully cost, of a conventional non-concentrated PV panel. The reduced size and weight, and enhanced wiring complexity, of the MOSAIC approach provide the opportunity to access the high-performance/low-cost region between the conventional CPV and flat-plate (1-sun) PV domains shown in Figure 2. Accessing this portion of the graph in Figure 2 will expand the geographic and market reach of flat-plate PV. This talk reviews the motivation and goals for the MOSAIC program. The diversity of the technical approaches to micro-concentration, embedded solar tracking, and hybrid direct/diffuse solar resource collection found in the MOSAIC portfolio of projects will also be highlighted.
You, Joyce H S; Lui, Grace; Kam, Kai Man; Lee, Nelson L S
2015-04-01
We examined, from the perspective of Hong Kong healthcare providers, the cost-effectiveness of rapid diagnosis with Xpert in patients hospitalized for suspected active pulmonary tuberculosis (PTB). A decision tree was designed to simulate the outcomes of three diagnostic assessment strategies in adult patients hospitalized for suspected active PTB: a conventional approach, sputum smear plus Xpert for acid-fast bacilli (AFB) smear-negative cases, and a single sputum Xpert test. Model inputs were derived from the literature. Outcome measures were direct medical cost, one-year mortality rate, quality-adjusted life-years (QALYs) and incremental cost per QALY gained (ICER). In the base-case analysis, Xpert was more effective than smear plus Xpert, with higher QALYs gained and a lower mortality rate, at an ICER of USD 99. The conventional diagnostic approach was the least preferred option, with the highest cost, lowest QALYs gained and highest mortality rate. Sensitivity analysis showed that Xpert would be the most cost-effective option if the sensitivity of sputum AFB smear microscopy was ≤74%. The probabilities of Xpert, smear plus Xpert and the conventional approach being cost-effective were 94.5%, 5.5% and 0%, respectively, in 10,000 Monte Carlo simulations. The Xpert sputum test appears to be a highly cost-effective diagnostic strategy for patients with suspected active PTB in an intermediate-burden area like Hong Kong. Copyright © 2015 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
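For readers unfamiliar with the metric, the ICER is simply the cost difference divided by the QALY difference between two strategies; a one-function sketch with invented numbers (not the study's inputs):

```python
# Illustration of the incremental cost-effectiveness ratio (ICER);
# the costs and QALYs below are made up, not the study's inputs.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Extra cost per extra QALY of the new strategy."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Strategy A (Xpert-first) vs strategy B (smear plus Xpert):
print(f"USD {icer(1200.0, 10.05, 1150.0, 9.80):.0f} per QALY")
```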
Efficient Online Learning Algorithms Based on LSTM Neural Networks.
Ergen, Tolga; Kozat, Suleyman Serdar
2017-09-13
We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimate in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to conventional methods over several benchmark real-life data sets.
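As a point of reference for the gradient-based baseline the authors compare against, here is a minimal online LSTM regressor updated by per-sample stochastic gradient descent (PyTorch, our own construction; the paper's PF and EKF updates are not reproduced):

```python
import torch

# Minimal online LSTM regressor trained with per-sample SGD.
torch.manual_seed(0)
n_in, n_hidden = 1, 16
cell = torch.nn.LSTMCell(n_in, n_hidden)
readout = torch.nn.Linear(n_hidden, 1)
opt = torch.optim.SGD(list(cell.parameters()) + list(readout.parameters()), lr=0.05)

h = torch.zeros(1, n_hidden)
c = torch.zeros(1, n_hidden)

# Synthetic stream: predict x[t+1] from x[t] on a noisy sine wave.
t = torch.arange(200, dtype=torch.float32)
x = torch.sin(0.2 * t) + 0.05 * torch.randn(200)

for k in range(199):
    inp = x[k].view(1, 1)
    h, c = cell(inp, (h.detach(), c.detach()))  # truncate the gradient
    pred = readout(h)
    loss = (pred.squeeze() - x[k + 1]) ** 2     # squared prediction error
    opt.zero_grad()
    loss.backward()
    opt.step()
    if k % 50 == 0:
        print(f"t={k:3d}  squared error {loss.item():.4f}")
```

Each arriving sample triggers exactly one forward pass and one gradient step, which is the first-order complexity the paper's particle count is tuned to match.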
Ren, Juan; Yu, Shiyan; Gao, Nan; Zou, Qingze
2013-11-01
In this paper, a control-based approach is proposed to replace the conventional method for accurate indentation quantification in nanomechanical measurement of live cells using an atomic force microscope (AFM). Accurate indentation quantification is central to probe-based nanomechanical property measurement. The conventional method for in-liquid nanomechanical measurement of live cells, however, fails to accurately quantify the indentation, as the effects of the relative probe acceleration and the hydrodynamic force are not addressed. As a result, significant errors and uncertainties are induced in the measured nanomechanical properties. The proposed control-based approach accounts for these adverse effects by tracking the same excitation-force profile on both a live cell and a hard reference sample through the use of an advanced control technique, and by quantifying the indentation from the difference of the cantilever base displacement in these two measurements. This approach not only eliminates the relative probe acceleration effect with no need to calibrate the parameters involved, but also significantly reduces the hydrodynamic force effect when the force load rate becomes high. We further hypothesize that, by using the proposed control-based approach, the rate-dependent elastic modulus of live human epithelial cells under different stress conditions can be reliably quantified to predict the elasticity evolution of cell membranes, and hence cellular behaviors. By implementing the proposed approach, the elastic modulus of HeLa cells before and after the stress process was quantified as the force load rate was varied over three orders of magnitude, from 0.1 to 100 Hz, with the amplitude of the applied force at 0.4-2 nN and the indentation at 250-450 nm. The measured elastic modulus of HeLa cells showed a clear power-law dependence on the load rate, both before and after the stress process. Moreover, the elastic modulus of HeLa cells was substantially reduced, by two to five times, by the stress process. Thus, our measurements demonstrate that the control-based protocol is effective in quantifying and characterizing the evolution of nanomechanical properties during the stress process of live cells.
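A bookkeeping sketch of the indentation quantification described above, with hypothetical displacement traces and a spherical-tip Hertz model standing in for whatever contact model the authors actually used:

```python
import numpy as np

# Under the same excitation-force profile the cantilever deflection is
# the same on the cell and on a hard reference, so the indentation is
# the difference of the two base displacements.
z_base_cell = np.array([410.0, 520.0, 640.0])  # nm, hypothetical traces
z_base_ref = np.array([105.0, 130.0, 160.0])   # nm, hard reference
indentation = z_base_cell - z_base_ref          # nm

# Spherical-tip Hertz model to back out an elastic modulus (a standard
# choice; the paper's exact contact model may differ):
# F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2)
R = 5_000.0        # tip radius, nm (hypothetical)
nu = 0.5           # Poisson ratio of the cell, a common assumption
force = np.array([0.4, 1.0, 2.0])               # nN
delta_m = indentation * 1e-9
E = 3 * (1 - nu**2) * force * 1e-9 / (4 * np.sqrt(R * 1e-9) * delta_m**1.5)
print(np.round(E / 1e3, 2), "kPa")
```

The point of the control-based protocol is that the indentation on the left-hand side is obtained without calibrating probe acceleration or hydrodynamic parameters; the contact-model step afterward is unchanged from standard practice.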