Pichler, Renate; Tulchiner, Gennadi; Oberaigner, Wilhelm; Schaefer, Georg; Horninger, Wolfgang; Brunner, Andrea; Heidegger, Isabel
2017-10-01
We evaluated the diagnostic accuracy of urinary cytology (UCy) for detecting recurrence in the remnant urothelium (RRU) after radical cystectomy (RC) for urothelial cancer. We conducted a 10-year retrospective analysis of a prospectively collected, single-center RC database comprising 177 patients who had undergone follow-up examinations at our department with ≥ 1 available postoperative UCy specimen. UCy specimens were classified using the Papanicolaou scheme. In total, 957 cytology specimens were collected. Negative UCy results were noted in 927 (96.8%), atypical urothelial cells in 19 (2.0%), and suspicious/positive for malignancy in 11 (1.2%) cases. RRU was diagnosed in 16 patients (9.1%) during a mean follow-up period of 37 months (range, 1-118 months). The mean interval from RC to RRU was 34.7 months. Only 2 of 11 positive UCy specimens (18.2%) were falsely positive, yielding an overall sensitivity of 56.3% and specificity of 98.8% for predicting RRU. Urethral recurrence was diagnosed by UCy alone, before the patients had developed symptoms, in 8 of 12 cases (66.7%). Patients with clinical symptoms at the diagnosis of RRU had poorer cancer-specific survival rates than asymptomatic patients, although this trend was not statistically significant (P = .496). Moreover, positive UCy findings were associated with significantly lower overall survival (P < .001) and cancer-specific survival (P = .04) compared with negative UCy findings. Our results underline the predictive value of UCy in the surveillance of the remnant urothelium, with early detection of urethral recurrence before the development of clinical symptoms.
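The reported accuracy figures can be reproduced from a per-patient confusion matrix. A minimal sketch, assuming counts inferred from the abstract (9 true positives among the 16 RRU patients, 2 false positives among the 161 patients without RRU); these counts are not stated verbatim in the text:

```python
# Illustrative check of the reported accuracy figures, assuming a per-patient
# confusion matrix consistent with the abstract. These counts are inferred,
# not stated verbatim.
TP, FN = 9, 7    # RRU patients with / without a positive UCy
FP, TN = 2, 159  # non-RRU patients with / without a positive UCy

sensitivity = TP / (TP + FN)  # 9/16    = 0.563
specificity = TN / (TN + FP)  # 159/161 = 0.988
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```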
Ahonkhai, Aimalohi A; Adeola, Juliet; Banigbe, Bolanle; Onwuatuelo, Ifeyinwa; Adegoke, Abdulkabir B; Bassett, Ingrid V; Losina, Elena; Freedberg, Kenneth A; Okonkwo, Prosper; Regan, Susan
The authors conducted a retrospective cohort study of unplanned care interruption (UCI) among adults initiating antiretroviral therapy (ART) from 2009 to 2011 in a Nigerian clinic. The authors used repeated measures regression to model the impact of UCI on CD4 count upon return to care and on the rate of CD4 change on ART. Among 2496 patients, 83% had 0, 15% had 1, and 2% had ≥2 UCIs. Mean baseline CD4 count for those with 0, 1, or ≥2 UCIs was 228 cells/mm3, 355 cells/mm3, and 392 cells/mm3 (P < .0001), respectively. UCI was associated with a 62 cells/mm3 decrease in CD4 count (95% confidence interval [CI]: -78 to -45) at the next measurement. In months 1 to 6 on ART, patients with 0 UCI gained 10 cells/µL/mo (95% CI: 7-14). Those with 1 and ≥2 UCIs lost 2 and 5 cells/µL/mo (95% CI: -18 to 13 and -26 to 16), respectively. Patients with UCI did not recover from the early CD4 losses associated with UCI. Preventing UCI is critical to maximize the benefits of ART.
[Impacts of urban cooling effect based on landscape scale: a review].
Yu, Zhao-wu; Guo, Qing-hai; Sun, Ran-hao
2015-02-01
The urban cooling island (UCI) effect is proposed as a counterpart to the urban heat island effect, and it emphasizes landscape planning as a way to optimize the function and structure of the urban thermal environment. In this paper, we summarize current research on the UCI effects of water bodies, green space, and urban parks from the perspectives of patch area, landscape indices, threshold values, landscape pattern, and correlation analyses. There is considerable controversy over which of the two factors, patch area or shape index, has the more significant impact; quantification of UCI thresholds is particularly lacking; and too much attention has been paid to the UCI effect of landscape composition and too little to that of landscape configuration. More attention should be paid to the shape, width, and location of water landscapes, and to the type, area, configuration, and management of green space landscapes. The altitude of an urban park and human activities can also influence the UCI effect. In the future, threshold determination should dominate research on the UCI effect, the reasons for the controversy should be further explored, time-series studies should be strengthened, the UCI effects of landscape pattern and landscape configuration should be distinguished, and more attention should be paid to spatial scale and resolution to improve the precision and accuracy of UCI results. Synthesizing multidisciplinary research should also be taken into consideration.
AmeriFlux CA-NS6 UCI-1989 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS6 UCI-1989 burn site. Site Description - The UCI-1989 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS2 UCI-1930 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS2 UCI-1930 burn site. Site Description - The UCI-1930 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS3 UCI-1964 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS3 UCI-1964 burn site. Site Description - The UCI-1964 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS7 UCI-1998 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS7 UCI-1998 burn site. Site Description - The UCI-1998 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS8 UCI-2003 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS8 UCI-2003 burn site. Site Description - The UCI-2003 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS5 UCI-1981 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS5 UCI-1981 burn site. Site Description - The UCI-1981 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS4 UCI-1964 burn site wet
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS4 UCI-1964 burn site wet. Site Description - The UCI-1964 wet site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
AmeriFlux CA-NS1 UCI-1850 burn site
Goulden, Mike [University of California - Irvine]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site CA-NS1 UCI-1850 burn site. Site Description - The UCI-1850 site is located in a continental boreal forest, dominated by black spruce trees, within the BOREAS northern study area in central Manitoba, Canada. The site is a member of a chronosequence of sites representing stages of secondary succession after large stand-replacing fires. Black spruce grows slowly, enabling accurate determination of stand age since disturbance. Additionally, boreal forests make up approximately 25% of forest ecosystems on Earth. With both of these in mind, the UCI sites provide an excellent location for studying CO2 exchange between the atmosphere and boreal forest ecosystems as a function of time since wildfire.
NASA Astrophysics Data System (ADS)
Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.
2016-12-01
Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics of natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization methods for training ANNs have already been found to be problematic in some cases: they carry a risk of premature convergence, can become stuck in local optima, and their search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based search, we propose an effective and efficient global search method, the Shuffled Complex Evolutionary Global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connection weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as with various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling at Trinity Lake in the western U.S., in which multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted on sub-seasonal to seasonal scales. Results show that the SP-UCI-enhanced ANN achieves better statistics than the other EA-based ANNs, which demonstrates the usefulness and power of the proposed approach for reservoir operation and water resources engineering and management. The SP-UCI-enhanced ANN is applicable to many other regression and prediction problems, and it has good potential as an alternative to the classical BP scheme and gradient-based optimization methods.
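As an illustration of the core idea, training network weights with a population-based global optimizer instead of back-propagation, here is a minimal sketch using SciPy's differential evolution as a stand-in for SP-UCI (which is not available in SciPy); the toy network, data, and bounds are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy regression data (a stand-in for the real-world datasets in the study).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

H = 5  # hidden units; weight vector packs W1 (3xH), b1 (H), w2 (H), b2 (1)
n_weights = 3 * H + H + H + 1

def unpack(w):
    i = 0
    W1 = w[i:i + 3 * H].reshape(3, H); i += 3 * H
    b1 = w[i:i + H]; i += H
    w2 = w[i:i + H]; i += H
    return W1, b1, w2, w[i]

def mse(w):
    W1, b1, w2, b2 = unpack(w)
    hidden = np.tanh(X @ W1 + b1)   # forward pass only, no back-propagation
    pred = hidden @ w2 + b2
    return np.mean((pred - y) ** 2)

# Global search over the weight space instead of gradient descent.
result = differential_evolution(mse, bounds=[(-5, 5)] * n_weights,
                                maxiter=300, seed=0, polish=True)
print("training MSE:", result.fun)
```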
Discussing epigenetics in Southern California
Rattner, Barbara P
2012-04-01
With the goal of discussing how epigenetic control and chromatin remodeling contribute to the various processes that lead to cellular plasticity and disease, this symposium marks the collaboration between the Institut National de la Santé et de la Recherche Médicale (INSERM) in France and the University of California, Irvine (UCI). Organized by Paolo Sassone-Corsi (UCI) and held at the Beckman Center of the National Academy of Sciences at the UCI campus December 15-16, 2011, this was the first of a series of international conferences on epigenetics dedicated to the scientific community in Southern California. The meeting also served as the official kick off for the newly formed Center for Epigenetics and Metabolism at the School of Medicine, UCI (http://cem.igb.uci.edu).
SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
2013-12-18
During the first year of the SciDAC gyrokinetic particle simulation (GPS) project, the GPS team (Zhihong Lin, Liu Chen, Yasutaro Nishimura, and Igor Holod) at the University of California, Irvine (UCI) studied tokamak electron transport driven by electron temperature gradient (ETG) turbulence, and by trapped electron mode (TEM) turbulence and ion temperature gradient (ITG) turbulence with kinetic electron effects, and extended our studies of ITG turbulence spreading to core-edge coupling. We have developed and optimized an elliptic solver using the finite element method (FEM), which enables the implementation of advanced kinetic electron models (split-weight scheme and hybrid model) in the SciDAC GPS production code GTC. The GTC code has been ported and optimized on both scalar and vector parallel computer architectures, and is being transformed into object-oriented style to facilitate collaborative code development. During this period, the UCI team members presented 11 invited talks at major national and international conferences, and published 22 papers in peer-reviewed journals and 10 papers in conference proceedings. UCI hosted the annual SciDAC Workshop on Plasma Turbulence sponsored by the GPS Center, 2005-2007. The workshop was attended by about fifty US and foreign researchers and financially sponsored several graduate students from MIT, Princeton University, Germany, Switzerland, and Finland. A new SciDAC postdoc, Igor Holod, has arrived at UCI to initiate global particle simulation of magnetohydrodynamic turbulence driven by energetic particle modes. The PI, Z. Lin, has been promoted to Associate Professor with tenure at UCI.
Universal Child Immunization by 1990.
ERIC Educational Resources Information Center
Mandl, P. E., Ed.
1985-01-01
The present volume endeavors to highlight the deeper significance and broader implications for development theory, policy and practice of the realization of the movement toward universal child immunization by 1990 (UCI-1990). Simultaneously, the volume collects and analyzes the most significant findings and experiences of the movement since 1984.…
Astronomy Outreach Activites through the University of California, Irvine
NASA Astrophysics Data System (ADS)
Thornton, Carol E.; Smecker-Hane, T.
2006-06-01
We discuss our efforts to bring astronomy to local schools and classrooms through the UCI Astronomy Outreach program. This is part of a faculty-led outreach program entitled Outreach in Astronomy & Astrophysics with the UCI Observatory, funded by an NSF FOCUS grant to the University of California, Irvine. We primarily schedule visits with K-12 teachers in the Compton, Newport/Mesa and Santa Ana Unified School Districts, but often see scout troops and classes from other nearby schools. Often these schools don’t have the funding needed to bring their students to us, so we take small, portable telescopes to the schools, for both day and night visits, to give the students a chance to not only see a telescope, but to use one as well. For the schools that can find transportation to bring their students to campus, we include a tour of our observatory dome housing a 24-inch telescope used for outreach events and undergraduate research. In addition, we give interactive lectures and demonstrations to involve the students and get them excited about careers in science and science in general. We find that we help stimulate discussions before and after our visits, which can often help start or end a unit of astronomy within the schools' curricula. We show feedback from teachers we have visited, including the strengths of the program and suggestions/improvements for the future. For more information, see http://www.physics.uci.edu/%7Eobservat/tour_program.html. Funding provided by NSF grant EHR-0227202 (PI: Ronald Stern).
Topolski, Francielle; de O Accorsi, Mauricio A; Trevisi, Hugo J; Cuoghi, Osmar A; Moresca, Ricardo
2016-10-01
To verify the influence of different bracket shapes and placement references according to the Andrews and MBT systems on the expression of angulation in upper central incisors (UCI). Bracket positioning and mesiodistal dental movement simulations were performed, and the angulations produced in the dental crown were evaluated, based on computed tomography scan images of 30 UCI and AutoCAD software analysis. Rectangular (Andrews) and rhomboid (MBT) brackets were placed according to the references recommended by the Andrews and MBT systems: the long axis of the clinical crown (LACC) and the incisal edge (IE), respectively. Data showed that the use of the LACC as reference for bracket positioning produced 5° and 4° UCI angulations with Andrews and MBT brackets, respectively. The use of the IE produced a 1.2° mean UCI angulation for both brackets. When the LACC was used as reference for bracket positioning, the UCI crown angulation corresponded to the angulation built into the brackets, regardless of shape, while the use of the IE resulted in natural crown angulation, regardless of bracket shape. This research helps guide the orthodontist in relation to the different treatment techniques based on the use of preadjusted brackets.
NASA Astrophysics Data System (ADS)
Pestana, Jill; Earthman, James
The Discover Science Initiative (DSI) has been an unprecedented success in the Southern California community, reaching over 5,000 participants through eight hands-on workshops on topics from fungi to the physics of light, and two large events in the past year. The DSI vision is to provide an avenue for University of California, Irvine (UCI) students and faculty from all departments to engage with the local community through workshops and presentations on interdisciplinary, state-of-the-art STEM research unique to UCI. DSI provides professional development opportunities for diverse students at UCI, while providing outreach at one of the most popular educational centers in Southern California, the Discovery Cube, which hosts over 400,000 guests each year. In DSI, students engage in peer-to-peer mentoring, with guidance from the UCI School of Education, in designing workshops, leading meetings, and managing teams. Students also practice science communication, coached by certified communications trainers. Students involved in DSI learn important skills that complement their academic degrees and stay motivated to pursue their career goals. Support for DSI comes from the Diverse Educational and Doctoral Experience (DECADE) program at UCI.
Quantifying the relative contribution of an ecological reserve to conservation objectives
Aagaard, Kevin; Lyons, James E.; Thogmartin, Wayne E.
2017-01-01
Evaluating the role public lands play in meeting conservation goals is an essential step in good governance. We present a tool for comparing the regional contribution of each of a suite of wildlife management units to conservation goals. We use weighted summation (simple additive weighting) to compute a Unit Contribution Index (UCI) based on species richness, population abundance, and a conservation score based on IUCN Red List threat classifications. We evaluate the UCI for a subset of the 729 participating wetlands of the Integrated Waterbird Management and Monitoring (IWMM) Program across U.S. Fish and Wildlife Service Regions 3 (Midwest USA), 4 (Southeast USA), and 5 (Northeast USA). We found that the median across-Region UCI for Region 5 was greater than that for Regions 3 and 4, while Region 4 had the greatest within-Region median UCI. This index is a powerful tool for wildlife managers to evaluate the performance of units within the conservation estate.
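A minimal sketch of the weighted-summation (simple additive weighting) calculation; the metric values, weights, and min-max normalization below are illustrative assumptions, not the IWMM inputs:

```python
import numpy as np

# Each unit carries three raw metrics: species richness, population
# abundance, and an IUCN-based conservation score. Values are invented.
units = {
    "wetland_A": [12, 4500, 0.30],
    "wetland_B": [ 8, 9100, 0.55],
    "wetland_C": [15, 2300, 0.10],
}
weights = np.array([0.4, 0.4, 0.2])   # importance weights, summing to 1

scores = np.array(list(units.values()), dtype=float)
# Min-max normalize each metric so they are comparable before weighting.
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
uci = norm @ weights                   # Unit Contribution Index per unit

for name, value in zip(units, uci):
    print(f"{name}: UCI = {value:.2f}")
```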
ERIC Educational Resources Information Center
Bussmann, Jeffra Diane; Plovnick, Caitlin E.
2013-01-01
In 2008, University of California, Irvine (UCI) Libraries launched their first Find Science Information online tutorial. It was an innovative web-based tool, containing not only informative content but also interactive activities, embedded hyperlinked resources, and reflective quizzes, all designed primarily to educate undergraduate science…
NASA Astrophysics Data System (ADS)
Neu, J. L.; Prather, M. J.
2011-08-01
Uptake and removal of soluble trace gases and aerosols by precipitation represents a major uncertainty in the processes that control the vertical distribution of atmospheric trace species. Model representations of precipitation scavenging vary greatly in their complexity, and most are divorced from the physics of precipitation formation and transformation. Here, we describe a new large-scale precipitation scavenging algorithm, developed for the UCI chemistry-transport model (UCI-CTM), that represents a step toward a more physical treatment of scavenging through improvements in the formulation of the removal in sub-gridscale cloudy and ambient environments and their overlap within the column as well as ice phase uptake of soluble species. The UCI algorithm doubles the lifetime of HNO3 in the upper troposphere relative to a scheme with commonly made assumptions about cloud overlap and ice uptake, and provides better agreement with HNO3 observations. We find that the process of ice phase scavenging of HNO3 is a critical component of the tropospheric O3 budget, but that differences in the formulation of ice phase removal, while generating large relative differences in HNO3 abundance, have little impact on NOx and O3. The O3 budget is much more sensitive to the lifetime of HNO4, highlighting the need for better understanding of its interactions with ice and for additional observational constraints.
NASA Astrophysics Data System (ADS)
Neu, J. L.; Prather, M. J.
2012-04-01
Uptake and removal of soluble trace gases and aerosols by precipitation represents a major uncertainty in the processes that control the vertical distribution of atmospheric trace species. Model representations of precipitation scavenging vary greatly in their complexity, and most are divorced from the physics of precipitation formation and transformation. Here, we describe a new large-scale precipitation scavenging algorithm, developed for the UCI chemistry-transport model (UCI-CTM), that represents a step toward a more physical treatment of scavenging through improvements in the formulation of the removal in sub-gridscale cloudy and ambient environments and their overlap within the column as well as ice phase uptake of soluble species. The UCI algorithm doubles the lifetime of HNO3 in the upper troposphere relative to a scheme with commonly used fractional cloud cover assumptions and ice uptake determined by Henry's Law and provides better agreement with HNO3 observations. We find that the process of ice phase scavenging of HNO3 is a critical component of the tropospheric O3 budget, but that NOx and O3 mixing ratios are relatively insensitive to large differences in the removal rate. Ozone abundances are much more sensitive to the lifetime of HNO4, highlighting the need for better understanding of its interactions with ice and for additional observational constraints.
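For intuition about why the treatment of sub-grid cloudiness matters, here is a deliberately simple first-order rainout step for a partly cloudy grid box; the UCI-CTM scheme is far more detailed (cloud overlap between levels, separate cloudy and ambient air, ice uptake), and the rate constant and cloud fraction are invented for illustration:

```python
import numpy as np

# Schematic first-order rainout of a soluble gas in a partly cloudy grid box.
dt = 1800.0          # time step [s]
k_incloud = 5e-4     # first-order loss rate inside precipitating cloud [1/s]
f_cloud = 0.3        # precipitating cloud fraction of the grid box

c0 = 1.0             # initial mixing ratio (arbitrary units)
# Only the cloudy fraction of the column is exposed to scavenging this step.
c1 = c0 * (f_cloud * np.exp(-k_incloud * dt) + (1.0 - f_cloud))
print(f"fraction remaining after one step: {c1:.3f}")
```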
Jafari, Mahtab
2018-02-01
Within the coming decade, the demand for well-trained pharmacists is expected to only increase, especially with the aging of the United States (US) population. To help fill this growing demand, the University of California, Irvine (UCI) aims to offer a unique pre-pharmacy degree program and has developed a Bachelor of Science (BS) degree in Pharmaceutical Sciences to help achieve this goal. In this commentary, we share our experience with our curriculum and highlight its features in an effort to encourage other institutions to enhance the learning experience of their pre-pharmacy students. The efforts of the UCI Department of Pharmaceutical Sciences have resulted in UCI being consistently ranked as one of the top feeder institutions by the Pharmacy College Application Service (PharmCAS) in recent years. The UCI Pharmaceutical Sciences Bachelor of Science offers a unique pre-pharmacy educational experience in an effort to better prepare undergraduates for the rigors of the Doctor of Pharmacy curriculum.
Untangling the Tangled Webs We Weave: A Team Approach to Cyberspace.
ERIC Educational Resources Information Center
Broidy, Ellen; And Others
Working in a cooperative team environment across libraries and job classifications, librarians and support staff at the University of California at Irvine (UCI) have mounted several successful web projects, including two versions of the Libraries' home page, a virtual reference collection, and Science Library "ANTswer Machine." UCI's…
Prognosis and delay of diagnosis among Kaposi’s sarcoma patients in Uganda: a cross-sectional study
2014-01-01
Background: In low- and middle-income countries, the association between delay to treatment and prognosis for Kaposi’s sarcoma (KS) patients is yet to be studied. Methods: This is a prospective study of HIV-infected adults with histologically-confirmed KS treated at the Uganda Cancer Institute (UCI). Standardized interviews were conducted in English or Luganda. Medical records were abstracted for KS stage at admission to UCI. Multivariable logistic regression assessed relationships between diagnostic delay and stage at diagnosis. Results: Of 161 patients (90% response rate), 69% were men, and the mean age was 34.0 years (SD 7.7). 26% had been seen in an HIV clinic within 3 months, 72% were on antiretroviral therapy, and 26% had visited a traditional healer prior to diagnosis. 45% delayed seeking care at UCI for ≥3 months from symptom onset. Among those who delayed, 36% waited ≥6 months, and 25% waited ≥12 months. Common reasons for delay were lack of pain (48%), no money (32%), and distance to UCI (8%). In adjusted analysis, patients who experienced diagnostic delay were more likely than those who did not delay to have poor-risk KS stage (OR 3.41, p = 0.002, 95% CI: 1.46-7.45). In adjusted analyses, visiting a traditional healer was the only variable associated with greater likelihood of delay (OR 2.69, p = 0.020, 95% CI: 1.17-6.17). Conclusions: Diagnostic delay was associated with poor-risk stage at diagnosis, and visiting a traditional healer was associated with higher odds of delay. The relationship between traditional and Western medicine presents a critical intervention point to improve KS-related outcomes in Uganda.
The Death of the Large Lecture Hall, the Rise of Peer-to-Peer Course Delivery?
ERIC Educational Resources Information Center
Navarro, Peter
2015-01-01
This article reports the results of a pilot project conducted at the University of California--Irvine (UCI) involving the simultaneous online delivery of a course to both University of California undergraduates and enrollees on the Coursera Massive Open Online Course (MOOC) platform. Survey results from a robust sampling of UCI undergraduates…
ERIC Educational Resources Information Center
Galligani, Dennis J.
This second volume of the University of California, Irvine (UCI), Student Affirmative Action (SAA) Five-Year Plan contains the complete student affirmative action plans as submitted by 33 academic and administrative units at UCI. The volume is organized by type of unit: academic units, academic retention units, outreach units, and student life…
Beyond Outreach: Expanding the UCI Astronomy Outreach Program to New Heights
NASA Astrophysics Data System (ADS)
Smecker-Hane, T. A.; Mauzy-Melitz, D. K.; Hood, M. A.
2010-08-01
The Astronomy Outreach Program at the University of California, Irvine (UCI) has three major components: (1) tours of the UCI Observatory and visits to local K-12 classrooms that bring hands-on activities and telescopes into the local schools, (2) an annual Teacher's Workshop in Astronomy & Astrophysics, and (3) Visitor Nights at the Observatory for the general public that include lectures on astrophysics topics and star gazing with our telescopes. Here we describe the results of our year-long partnership with Grade 3-12 teachers to expand the tour and classroom visit portion of our program. We developed curricula and survey tools for Grades 3, 5, and high school that address specific California State Science Content Standards and amplify the impact of our outreach visits to their classrooms and their tours of the UCI Observatory. We describe the lessons and hands-on activities developed for the curricula, report on the results of pre- and post-testing of the students to judge how much they learned and whether or not their attitudes about science have changed, and report on teachers' responses to the program. Many of the lessons and activities we developed are available on our website.
Upper cervical injuries: Clinical results using a new treatment algorithm
Joaquim, Andrei F.; Ghizoni, Enrico; Tedeschi, Helder; Yacoub, Alexandre R. D.; Brodke, Darrel S.; Vaccaro, Alexander R.; Patel, Alpesh A.
2015-01-01
Introduction: Upper cervical injuries (UCI) have a wide range of radiological and clinical presentations due to the unique and complex bony, ligamentous and vascular anatomy. We recently proposed a rational approach in an attempt to unify prior classification systems and guide treatment. In this paper, we evaluate the clinical results of our algorithm for UCI treatment. Materials and Methods: A prospective cohort series of patients with UCI was performed. The primary outcome was the American Spinal Injury Association (ASIA) Impairment Scale (AIS) grade. Surgical treatment was proposed based on our protocol: ligamentous injuries (abnormal misalignment, perched or locked facets, increased atlanto-dens interval) were treated surgically. Bone fractures without ligamentous injuries were treated with a rigid cervical orthosis, with the exception of fractures at the base of the dens with risk factors for non-union. Results: Twenty-three patients treated initially conservatively had some follow-up (mean, 171 days; range, 60 to 436 days). All of them were neurologically intact. None of the patients developed a new neurological deficit. Fifteen patients were initially treated surgically (mean follow-up, 140 days; range, 60 to 270 days). In the surgical group, preoperatively, 11 (73.3%) patients were AIS E, 2 (13.3%) AIS C and 2 (13.3%) AIS D. At the final follow-up, the AIS grades were: 13 (86.6%) AIS E and 2 (13.3%) AIS D. None of the patients had neurological worsening during follow-up. Conclusions: This prospective cohort suggests that our UCI treatment algorithm can be used safely. Further prospective studies with longer follow-up are necessary to further establish its clinical validity and safety.
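The protocol lends itself to being written as a decision rule. A schematic sketch, with hypothetical boolean inputs standing in for the radiological findings listed above:

```python
def uci_treatment(ligamentous_injury: bool,
                  dens_base_fracture: bool = False,
                  nonunion_risk_factors: bool = False) -> str:
    """Schematic version of the decision rule described in the abstract.

    The input flags are hypothetical simplifications of the radiological
    findings (misalignment, perched/locked facets, increased atlanto-dens
    interval, dens-base fracture with non-union risk factors).
    """
    if ligamentous_injury:
        return "surgical treatment"
    if dens_base_fracture and nonunion_risk_factors:
        return "surgical treatment"
    return "rigid cervical orthosis"

print(uci_treatment(ligamentous_injury=False,
                    dens_base_fracture=True, nonunion_risk_factors=True))
```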
Lightness, chroma and hue differences on visual shade matching.
Pecho, Oscar E; Pérez, María M; Ghinea, Razvan; Della Bona, Alvaro
2016-11-01
To analyze the influence of lightness, chroma and hue differences on visual shade matching performed by dental students. 100 dental student (DS) volunteers with normal vision participated in the study. A spectroradiometer (SP) was used to measure the spectral reflectance of 4 extracted human upper central incisors (UCI 1-4) and of shade tabs from the Vita Classical (VC) and Vita Toothguide 3D-Master (3D) shade guides. Measurements were performed over a gray background, inside a viewing booth and under the D65 illuminant (diffuse/0° geometry). Color parameters (L*, a*, b*, C* and h°) were calculated. DS used VC and 3D to visually select the best shade match for each UCI. CIE metric differences (Δa*, Δb*, ΔL′, ΔC′ and ΔH′) and CIEDE2000(2:1:1) lightness (ΔEL), chroma (ΔEC) and hue (ΔEH) differences were obtained between each UCI and the first three shades selected by DS, and for the first option using the CIELAB, CIEDE2000(1:1:1) and CIEDE2000(2:1:1) color difference metrics. The closest CIELAB color-discrimination ellipsoid (from RIT-DuPont visual color-difference data) to each UCI was selected for the analysis of visual shade matching. DS showed a preference for shades with lower chroma (ΔC′ and ΔEC) and/or hue (ΔH′ and ΔEH) values rather than shades with lower lightness values (ΔL′ and ΔEL). Most of the best visual matches were near the tolerance ellipsoid centered on the tooth shade. This study is an attempt to partially explain the inconsistencies between visual and instrumental shade matching and the limitations of shade guides. Visual shade matching was driven by color differences with lower chroma and hue values.
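For readers unfamiliar with these metrics, the sketch below computes the simple CIE76 color difference and its lightness and chroma components between two CIELAB colors; the study's CIEDE2000(2:1:1) metric adds weighting functions not reproduced here, and the L*a*b* values are invented:

```python
import math

# Two CIELAB colors: a tooth and a candidate shade tab (values invented).
tooth = (72.0, 1.5, 18.0)   # L*, a*, b* of an upper central incisor
tab   = (70.5, 1.0, 15.5)   # L*, a*, b* of a shade tab

dL = tab[0] - tooth[0]                                      # lightness diff
da = tab[1] - tooth[1]
db = tab[2] - tooth[2]
dC = math.hypot(tab[1], tab[2]) - math.hypot(tooth[1], tooth[2])  # chroma diff

dE76 = math.sqrt(dL**2 + da**2 + db**2)                     # CIE76 delta E
print(f"dL = {dL:.2f}, dC = {dC:.2f}, dE*ab = {dE76:.2f}")
```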
Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems
2017-05-24
Summary: The project was started in this quarter. A kickoff meeting was held on September 26-27 at UCI, where the participants revised the… proposal text together in order to ensure a common understanding of the project content and goals. The document “2016-09-26_ONR_Kickoff_Meeting_UCI.pdf”… accounted for in the project plan. Project Goals for this Quarter: · Define and propose a concrete use case for the project (Siemens). · Define the…
Study: Third of Big Groundwater Basins in Distress
2015-06-16
Groundwater storage trends for Earth's 37 largest aquifers from UCI-led study using NASA GRACE data (2003-2013). Of these, 21 have exceeded sustainability tipping points and are being depleted, with 13 considered significantly distressed, threatening regional water security and resilience. http://photojournal.jpl.nasa.gov/catalog/PIA19685
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prather, Michael
This proposal seeks to maintain DOE-ACME (an offshoot of CESM) as one of the leading CCMs for evaluating near-term climate mitigation. It will implement, test, and optimize the new UCI photolysis codes within CESM CAM5 and new CAM versions in ACME. Fast-J is a high-order-accuracy (8-stream) code for calculating solar scattering and absorption in a single-column atmosphere containing clouds, aerosols, and gases; it was developed at UCI and implemented in CAM5 under the previous BER/SciDAC grant.
1986-11-26
cloning at the SalI site of pUC18 vector DNA, (iii) by treatment with EcoRI DNA methylase, ligation to EcoRI linkers and cloning at the EcoRI site of pUC18… cDNA to synthetic SalI linkers… Treatment of DEN-2 cDNA with EcoRI methylase, followed by ligation to EcoRI linkers and digestion with… picked by the mini plasmid preparation method as described in Maniatis et al. (1982). The procedure followed involved briefly treatment with a…
López-Torrijo, Manuel; Mengual-Andrés, Santiago; Estellés-Ferrer, Remedios
2015-06-01
This article presents a literature review of the advantages and limitations of simultaneous bilateral cochlear implantation (SCI) compared with those of sequential bilateral cochlear implantation (SBCI) and unilateral cochlear implantation (UCI). The variables analysed in this comparison are: safety and surgical technique, SCI incidence, effectiveness, impact of the inter-implant interval, costs and financing, impact on brain plasticity, impact on speech and language development, main benefits, main disadvantages and concerns, and predictive factors of prognosis. Although the results are not conclusive, all variables analysed seem to point towards observable benefits of SCI in comparison with SBCI or UCI. This tendency should be studied in more depth in multicentre studies with greater methodological rigour, more comprehensive samples and study periods, and other determining variables (age at the time of implantation, duration and degree of the hearing loss, rehabilitation methodologies used, family involvement, etc.).
Aging as a predictor of nursing workload in Intensive Care Unit: results from a Brazilian Sample.
Ferretti-Rebustini, Renata Eloah de Lucena; Nogueira, Lilia de Souza; Silva, Rita de Cassia Gengo E; Poveda, Vanessa de Brito; Machado, Selma Pinheiro; Oliveira, Elaine Machado de; Andolhe, Rafaela; Padilha, Katia Grillo
2017-04-03
To verify whether aging is an independent predictor of nursing workload (NW) in the ICU, according to age groups, and to assess its predictive value as a determinant of NW in the ICU. The study was conducted from 2012 to 2016. A convenience sample of patients (age ≥ 18) admitted to nine ICUs belonging to a Brazilian hospital was analyzed. Age was assumed as the independent variable and NW (measured by the Nursing Activities Score, NAS) as the dependent variable. A linear regression model and ROC curve were used for the analysis. There were 890 participants (361 older people), mostly males (58.1%). The mean NAS score was higher among older participants than among adults (p=0.004) but did not differ within categories of aging (p=0.697). Age was responsible for 0.6% of the NAS score. Each year of age increases the NAS score by 0.081 points (p=0.015). However, age was not a good predictor of the NAS score (AUC = 0.394; p=0.320). The care of older people in the ICU is associated with an increase in NW compared to adults. Aging can be considered an associated factor but not a good predictor of NW in the ICU.
Validation of Surrogates of Urine Osmolality in Population Studies.
Youhanna, Sonia; Bankir, Lise; Jungers, Paul; Porteous, David; Polasek, Ozren; Bochud, Murielle; Hayward, Caroline; Devuyst, Olivier
2017-01-01
The importance of vasopressin and/or urine concentration in various kidney, cardiovascular, and metabolic diseases has been emphasized recently. Due to technical constraints, urine osmolality (Uosm), a direct reflection of urinary concentrating activity, is rarely measured in epidemiologic studies. We analyzed 2 possible surrogates of Uosm in 4 large population-based cohorts (total n = 4,247) and in patients with chronic kidney disease (CKD, n = 146). An estimated Uosm (eUosm) based on the concentrations of sodium, potassium, and urea, and a urine concentrating index (UCI) based on the ratio of creatinine concentrations in urine and plasma were compared to the measured Uosm (mUosm). eUosm is an excellent surrogate of mUosm, with a highly significant linear relationship and values within 5% of mUosm (r = 0.99 or 0.98 in each population cohort). Bland-Altman plots show good agreement between eUosm and mUosm, with mean differences between the 2 variables within ±24 mmol/L. This was verified in men and women, in day and night urine samples, and in CKD patients. The relationship of the UCI with mUosm is also significant but is not linear and exhibits more dispersed values. Moreover, the latter index is no longer representative of mUosm in patients with CKD, as it declines much more quickly than mUosm with declining glomerular filtration rate. The eUosm is a valid marker of urine concentration in population-based and CKD cohorts. The UCI can provide an estimate of urine concentration when no other measurement is available, but should be used only in subjects with normal renal function.
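A minimal sketch of the two surrogates; the eUosm formula below is the commonly used approximation 2 × (Na + K) + urea (all in mmol/L), which is an assumption here, and the exact formula used in the paper should be taken from its methods section:

```python
def estimated_uosm(na_mmol, k_mmol, urea_mmol):
    # Common approximation: cations counted twice (to account for anions),
    # plus urea; all concentrations in mmol/L.
    return 2 * (na_mmol + k_mmol) + urea_mmol

def urine_concentrating_index(u_creatinine, p_creatinine):
    # Ratio of urine to plasma creatinine (units cancel out).
    return u_creatinine / p_creatinine

print(estimated_uosm(na_mmol=120, k_mmol=50, urea_mmol=250))        # 590
print(urine_concentrating_index(u_creatinine=10.0, p_creatinine=0.08))
```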
Buyego, Paul; Nakiyingi, Lydia; Ddungu, Henry; Walimbwa, Stephen; Nalwanga, Damalie; Reynolds, Steven J; Parkes-Ratanshi, Rosalind
2017-03-14
Early diagnosis of HIV-associated lymphoma is challenging because the definitive diagnostic procedure, biopsy, requires skills and equipment that are not readily available. As a consequence, diagnosis may be delayed, increasing the risk of mortality. We set out to determine the frequency of, and risk factors associated with, the misdiagnosis of HIV-associated lymphoma as tuberculosis (TB) among patients attending the Uganda Cancer Institute (UCI). A retrospective cohort study design was used among HIV patients with associated lymphoma attending the UCI, Kampala, Uganda, between February and March 2015. Eligible patient charts were reviewed for information on TB treatment, socio-demographics, laboratory parameters (hemoglobin, CD4 cell count and lactate dehydrogenase) and clinical presentation using a semi-structured data extraction form. A total of 183 charts were reviewed; 106/183 were males (57.9%), and the median age was 35 (IQR, 28-45). Fifty-six (30.6%) patients had a possible misdiagnosis as TB, and their median time on TB treatment was 3.5 (1-5.3) months. In multivariate analysis, the presence of chest pain had an odds ratio (OR) of 4.4 (95% CI 1.89-10.58, p < 0.001) and stage III and IV lymphoma had an OR of 3.22 (95% CI 1.08-9.63, p < 0.037) for possible misdiagnosis of lymphoma as TB. A high proportion of patients with HIV-associated lymphoma attending UCI are misdiagnosed and treated as TB. Chest pain and stage III and IV lymphoma were associated with an increased risk of a possible misdiagnosis of lymphoma as TB.
First Biogenic VOC Flux Results from the UCI Fluxtron Plant Chamber Facility
NASA Astrophysics Data System (ADS)
Seco, R.; Gu, D.; Joo, E.; Nagalingam, S.; Aristizabal, B. H.; Basu, C.; Kim, S.; Guenther, A. B.
2017-12-01
Atmospheric biogenic volatile organic compounds (BVOCs) have key environmental, ecological and biological roles, and can influence atmospheric chemistry, secondary aerosol formation, and regional climate. Quantifying BVOC emission rates and their impact on atmospheric chemistry is one of the greatest challenges in predicting future air pollution in the context of a changing climate. A new facility, the UCI Fluxtron, has been developed at the Department of Earth System Science at the University of California Irvine to study the response of BVOC emissions to extreme weather and pollution stress. The UCI Fluxtron is designed for automated, continuous measurement of plant physiology and multi-modal BVOC chemical analysis from multiple plants. It consists of two controlled-environment walk-in growth chambers that contain several plant enclosures, together with a gas make-up system to precisely control the composition (e.g., H2O, CO2, O3 and VOC concentrations) of the air entering each enclosure. A sample manifold with automated inlet switching is used for measurements with in-situ, real-time VOC analysis instruments: H2O and CO2 fluxes can be measured continually with an infrared gas analyzer (IRGA), and BVOCs with a proton transfer reaction time-of-flight mass spectrometer (PTR-TOF-MS). Offline samples can also be taken on adsorbent cartridges to be analyzed with a thermal desorption gas chromatograph coupled to a TOF-MS detector. We present the first results of H2O, CO2 and BVOC fluxes, including the characterization and testing of the Fluxtron system, for example measurements of young dragon tree (Paulownia elongata) individuals made using whole-plant enclosures.
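For context, a flow-through enclosure flux is typically derived from a simple steady-state mass balance; the sketch below illustrates this, with all numbers and the leaf-area normalization chosen arbitrarily (the Fluxtron's actual data processing is not described in the abstract):

```python
# Steady-state mass balance for a flow-through plant enclosure:
# emission = flow * (C_out - C_in), often normalized by leaf area or biomass.
flow = 5.0          # chamber flow rate [L min^-1]
c_in = 0.2          # BVOC concentration entering the enclosure [nmol L^-1]
c_out = 0.9         # BVOC concentration leaving the enclosure [nmol L^-1]
leaf_area = 0.05    # enclosed leaf area [m^2]

emission = flow * (c_out - c_in) / leaf_area   # [nmol min^-1 m^-2]
print(f"emission rate: {emission:.1f} nmol min^-1 m^-2")
```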
BioTextQuest: a web-based biomedical text mining suite for concept discovery.
Papanikolaou, Nikolas; Pafilis, Evangelos; Nikolaou, Stavros; Ouzounis, Christos A; Iliopoulos, Ioannis; Promponas, Vasilis J
2011-12-01
BioTextQuest combines automated discovery of significant terms in article clusters with structured knowledge annotation, via Named Entity Recognition services, offering interactive, user-friendly visualization. A tag-cloud-based illustration of the terms labeling each document cluster is semantically annotated according to biological entity type, and a list of document titles enables users to simultaneously compare terms and documents of each cluster, facilitating concept association and hypothesis generation. BioTextQuest allows customization of analysis parameters, e.g. clustering/stemming algorithms and exclusion of documents/significant terms, to better match the biological question addressed. Availability: http://biotextquest.biol.ucy.ac.cy. Contact: vprobon@ucy.ac.cy; iliopj@med.uoc.gr. Supplementary data are available at Bioinformatics online.
Dominant control of agriculture and irrigation on urban heat island in India.
Kumar, Rahul; Mishra, Vimal; Buzan, Jonathan; Kumar, Rohini; Shindell, Drew; Huber, Matthew
2017-10-25
As is true in many regions, India experiences a surface Urban Heat Island (UHI) effect that is well understood, but the causes of the more recently discovered Urban Cool Island (UCI) effect remain poorly constrained. This raises questions about our fundamental understanding of the drivers of rural-urban environmental gradients and hinders the development of effective strategies for mitigation of, and adaptation to, the heat stress increases projected for rapidly urbanizing India. Here we show that more than 60% of Indian urban areas are observed to experience a day-time UCI. We use satellite observations and the Community Land Model (CLM) to identify the impact of irrigation and show for the first time that the UCI is caused by a lack of vegetation and moisture in non-urban areas relative to cities. In contrast, urban areas in extensively irrigated landscapes generally experience the expected positive UHI effect. At night, UHI warming intensifies, occurring across a majority (90%) of India's urban areas. The magnitude of rural-urban temperature contrasts is largely controlled by agriculture and moisture availability from irrigation, but further analysis of model results indicates an important role for atmospheric aerosols. Thus both land-use decisions and aerosols are important factors governing, modulating, and even reversing the expected urban-rural temperature gradients.
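The underlying diagnostic is simply the urban-rural contrast in land surface temperature. A minimal sketch with synthetic values (real analyses use satellite LST composites and careful urban/rural masks):

```python
import numpy as np

# Urban-rural land surface temperature (LST) contrast; a negative day-time
# value indicates an urban cool island. Arrays are synthetic stand-ins.
lst = np.array([[38.0, 39.5, 41.0],
                [40.0, 36.5, 42.5],
                [39.0, 41.5, 40.5]])       # day-time LST [deg C]
urban = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=bool)  # urban pixel mask

uhi_intensity = lst[urban].mean() - lst[~urban].mean()
print(f"UHI intensity: {uhi_intensity:+.2f} K")  # negative => UCI
```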
Group-Based Active Learning of Classification Models.
Luo, Zhipeng; Hauskrecht, Milos
2017-05-01
Learning classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly, and finding ways of reducing the human annotation effort is critical. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. To describe groups in a user-friendly way, conjunctive patterns are used to represent them compactly. Our empirical study on 12 UCI data sets demonstrates the advantages of our approach over both classic instance-based active learning and existing group-based active learning methods.
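For contrast with the group-based framework, here is a minimal sketch of the classic instance-based baseline the paper compares against: pool-based uncertainty sampling with a logistic regression learner on a UCI-style data set (the data set, learner, and query budget are illustrative choices, not the paper's setup):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer   # a classic UCI data set
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=10, replace=False))  # seed labels
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=5000)
for _ in range(20):                       # budget of 20 label queries
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    # Query the single most uncertain instance (closest to p = 0.5).
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)                 # oracle reveals y[query]
    pool.remove(query)

print("accuracy on full set:", model.score(X, y))
```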
Expand and Contract: E-Learning Shapes the World in Cyprus and in California
ERIC Educational Resources Information Center
Sheley, Nancy Strow; Zitzer-Comfort, Carol
2011-01-01
In the spring of 2008, university students enrolled in courses at California State University, Long Beach (CSULB), and the University of Cyprus (UCY) participated in a cross-cultural e-learning project in which they studied American Indian literature and history. All students followed the same six-week syllabus, which included shared readings and…
Chemotherapy Use at the End of Life in Uganda
Merkel, Emily C.; Menon, Manoj; Lyman, Gary H.; Ddungu, Henry; Namukwaya, Elizabeth; Leng, Mhoira; Casper, Corey
2017-01-01
Purpose: Avoiding chemotherapy during the last 30 days of life has become a goal of cancer care in the United States and Europe, yet end-of-life chemotherapy administration remains a common practice worldwide. The purpose of this study was to determine the frequency of and factors predicting end-of-life chemotherapy administration in Uganda. Methods: Retrospective chart review and surveys and interviews of providers were performed at the Uganda Cancer Institute (UCI), the only comprehensive cancer center in the area, which serves a catchment area of greater than 100 million people. All adult patients at the UCI with reported cancer deaths between January 1, 2014, and August 31, 2015 were included. All UCI physicians were offered a survey, and a subset of physicians were also individually interviewed. Results: Three hundred ninety-two patients (65.9%) received chemotherapy. Age less than 55 years (odds ratio [OR], 2.30; P = .004), a cancer diagnosis greater than 60 days before death (OR, 9.13; P < .001), and a presenting Eastern Cooperative Oncology Group performance status of 0 to 2 (OR, 2.47; P = .001) were associated with the administration of chemotherapy. More than 45% of patients received chemotherapy in the last 30 days of life. No clinical factors were predictive of chemotherapy use in the last 30 days of life, although doctors reported using performance status, cancer stage, and tumor chemotherapy sensitivity to determine when to administer chemotherapy. Patient expectations and a lack of outcomes data were important nonclinical factors influencing chemotherapy administration. Conclusion: Chemotherapy is administered to a high proportion of patients with terminal cancer in Uganda, raising concern about efficacy. Late presentation of cancer in Uganda complicates end-of-life chemotherapy recommendations, necessitating guidelines specific to sub-Saharan Africa.
An Introduction to Air Quality Modeling
Empowering Sustainability is an initiative at the University of California, Irvine, dedicated to connecting sustainability leaders (fellows) across generations, countries, and disciplines through the exchange of ideas and experiences related to all aspects of sustainability, and fostering engagement and research on the ground through the collaboration among fellows and like-minded organizations worldwide. Launched in 2011, the UCI Summer Seminar Series "Empowering Sustainability on Earth," co-hosted each July by the UCI Newkirk Center for Science and Society, presents a series of seminars for members of the next generation of leaders of global sustainability from over 70 countries around the world. The seminar talks are open to the public. Presented at the Seventh Annual Session on Empowering Sustainability on Earth
Retrieving Ice Basal Motion Using the Hydrologically Coupled JPL/UCI Ice Sheet System Model (ISSM)
NASA Astrophysics Data System (ADS)
Khakbaz, B.; Morlighem, M.; Seroussi, H. L.; Larour, E. Y.
2011-12-01
The study of basal sliding in ice sheets requires coupling ice-flow models with subglacial water flow. Subglacial hydrology models can be used to model basal water pressure explicitly and to generate basal sliding velocities. This study addresses the addition of a thin-film-based subglacial hydrologic module to the Ice Sheet System Model (ISSM) developed by JPL in collaboration with the University of California Irvine (UCI). The subglacial hydrology model follows the study of J. Johnson (2002), who assumed a non-arborescent distributed drainage system in the form of a thin film beneath ice sheets. The differential equation that arises from conservation of mass in the water system is solved numerically with the finite element method in order to obtain the spatial distribution of basal water over the study domain. The resulting water-sheet thickness is then used to model the basal water pressure and subsequently the basal sliding velocity. In this study, an introduction and preliminary results of the subglacial water flow and basal sliding velocity are presented for Pine Island Glacier, West Antarctica. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Modeling, Analysis and Prediction (MAP) Program.
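As a toy illustration of evolving a distributed water sheet from mass conservation, here is a 1-D explicit finite-difference sketch with a diffusive flux and a uniform melt source; the actual module solves the Johnson (2002) thin-film equations with FEM on the ISSM mesh, and every parameter below is invented:

```python
import numpy as np

# 1-D stand-in for a distributed thin-film drainage system: evolve water
# sheet thickness h with a diffusive flux plus a basal melt source.
nx, dx, dt = 100, 1000.0, 3600.0   # grid points, spacing [m], time step [s]
D = 50.0                           # effective diffusivity [m^2/s]
m = 1e-8                           # basal melt source [m/s]
h = np.full(nx, 0.001)             # initial film thickness [m]

for _ in range(1000):
    # Explicit centered second difference (stable: D*dt/dx^2 = 0.18 < 0.5).
    flux_div = D * (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2
    h += dt * (flux_div + m)
    h[0] = h[-1] = 0.001           # fixed-thickness boundary conditions

print(f"max film thickness after 1000 steps: {h.max():.4f} m")
```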
Arroyo, Pedro; Pardío-López, Jeanette; Loria, Alvar; Fernández-García, Victoria
2010-01-01
The objective of this article is to provide information on cooking techniques used by two rural communities of Yucatán. We used a 24-hour recall method with 275 participants consuming 763 dishes. Dishes were classified according to cooking technique: 205 were lard-fried (27%), 169 oil-fried (22%), and 389 boiled/grilled (51%). The smaller, more secluded community (San Rafael) consumed more fried dishes than the larger community (Uci) (54% versus 45%) and used lard-frying more often than Uci (65% versus 46%). The more extensive use of lard in the smaller community appears to be due to fewer modernizing influences, such as the availability and use of industrialized vegetable oils. Copyright © Taylor & Francis Group, LLC
Chamorro, C; Romera, M A
2015-10-01
Pain and fear are still the memories most commonly reported by patients after ICU admission. Recently, a prominent politician described the ICU as "the branch of hell." Profound changes are needed in how we relate directly to patients and their relatives, as well as in environmental design and in the organization of work and visits, in order to banish this view of the ICU held by our society. At a time when early mobilization of critical patients is advocated, analgesia and sedation strategies must be improved. The ICU is the best place for administering and monitoring analgesic drugs. Correct analgesia should not be a pending subject for the intensivist but a mandatory one. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.
NASA Technical Reports Server (NTRS)
Saltzman, Eric S.; DeBruyn, Warren J.
2000-01-01
This project involved the design and construction of a new instrument for airborne measurement of DMS and SO2. The instrument is intended for use on field missions to study the global atmospheric sulfur cycle. The ultimate scientific goal is to provide insight into the mechanisms of atmospheric transport and transformations impacting both natural and anthropogenic sulfur emissions. This report summarizes the progress made to date and the goals for future work on the project. The PIs for this project have recently relocated from the University of Miami to the University of California, Irvine, and a request has been made to transfer remaining funds to UCI. All equipment associated with this project has been transferred to UCI. The instrument design goal was to develop an instrument roughly one quarter the size and weight of currently available airborne instrumentation used for DMS and SO2 measurements. Another goal was full automation, to allow unattended operation for the duration of a P-3 or DC-8 flight. The original performance design specifications for the instrument are given.
Comparison analysis for classification algorithm in data mining and the study of model use
NASA Astrophysics Data System (ADS)
Chen, Junde; Zhang, Defu
2018-04-01
As a key technique in data mining, classification algorithms have received extensive attention. Through experiments with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, using statistical tests. Beyond that, an adaptive diagnosis model for preventing electricity stealing and leakage is given as a specific case in the paper.
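A minimal sketch of this kind of comparison, assuming scikit-learn classifiers, a bundled UCI dataset (iris), 10-fold cross-validation, and a Friedman test over the per-fold accuracies; the paper's specific algorithms, dataset, and statistical test are not stated in the abstract, so all of these are illustrative choices.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
    "tree": DecisionTreeClassifier(random_state=0),
}
# One accuracy score per fold and per model.
scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s.mean():.3f} +/- {s.std():.3f}")

# Friedman test over the per-fold accuracies of the three models.
stat, p = friedmanchisquare(scores["knn"], scores["nb"], scores["tree"])
print(f"Friedman chi2={stat:.3f}, p={p:.3f}")
```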
Agriculture and irrigation as potential drivers of urban heat island
NASA Astrophysics Data System (ADS)
Kumar, R.; Buzan, J. R.; Mishra, V.; Kumar, R.; Shindell, D. T.; Huber, M.
2017-12-01
More than half the world's population are urban dwellers, and they are highly vulnerable to global environmental changes. Urban extents are more prone to intense heating than the surrounding rural areas. Presently about 33% of India's population lives in urban areas, a share expected to rise steeply, so a better understanding of the phenomena affecting the urban population is very important. The Urban Heat Island (UHI) is a well-known phenomenon which potentially affects energy consumption, the spread of diseases, and mortality. In general, almost all (90%) of the major urban areas of the country face UHI at night in the range of 1-5 °C, while 60% of the regions face an Urban Cool Island (UCI) in the range of -1 to 6 °C in the daytime. Our observations and simulations show that vegetation and irrigation in the surrounding non-urban areas directly affect daytime Urban Cool Island effects. This is due to the relative cooling by vegetation and irrigated lands in the vicinity of these urban regions. There is a contrasting variation in UHI/UCI intensities across seasons and times of day. Most of the urban regions face the UHI effect in summer, whereas this phenomenon reverses in winter. Daytime UCI is more prominent in the months of April and May due to minimum availability of moisture. We observed that, apart from vegetation and irrigation, aerosol is also an important factor governing the UHI phenomenon.
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu
2018-06-01
Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industrial measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been extensively researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximately equal. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally investigate the properties of the random and gamma phase errors in PSFPP without the aid of a mathematical model expressing the random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
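The wrapped-phase recovery that PSFPP builds on can be stated compactly. The sketch below implements the standard N-step phase-shifting formula with NumPy and checks it on synthetic fringes; the paper's UCI-based evaluation method itself is not reproduced, and the function name and synthetic pattern are illustrative.

```python
import numpy as np

def wrapped_phase(frames):
    """Recover the wrapped phase from N phase-shifted fringe images.

    frames: array of shape (N, H, W) with I_n = A + B*cos(phi + 2*pi*n/N).
    """
    n = len(frames)
    delta = 2 * np.pi * np.arange(n) / n
    num = -np.tensordot(np.sin(delta), frames, axes=1)  # ~ (N*B/2) * sin(phi)
    den = np.tensordot(np.cos(delta), frames, axes=1)   # ~ (N*B/2) * cos(phi)
    return np.arctan2(num, den)  # wrapped to (-pi, pi]

# Synthetic check: 4-step patterns with a known phase map.
h, w, steps = 64, 64, 4
phi_true = np.linspace(-3.0, 3.0, w)[None, :].repeat(h, axis=0)
frames = np.stack([128 + 100 * np.cos(phi_true + 2 * np.pi * k / steps)
                   for k in range(steps)])
print(np.allclose(wrapped_phase(frames), phi_true, atol=1e-6))  # True
```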
A review of the PERSIANN family global satellite precipitation data products
NASA Astrophysics Data System (ADS)
Nguyen, P.; Ombadi, M.; Ashouri, H.; Thorstensen, A.; Hsu, K. L.; Braithwaite, D.; Sorooshian, S.; William, L.
2017-12-01
Precipitation is an integral part of the hydrologic cycle and plays an important role in the water and energy balance of the Earth. Careful and consistent observation of precipitation is important for several reasons. Over the last two decades, the PERSIANN system of precipitation products has been developed at the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine in collaboration with NASA, NOAA and the UNESCO G-WADI program. The PERSIANN family includes three main satellite-based precipitation estimation products, namely PERSIANN, PERSIANN-CCS, and PERSIANN-CDR. They are accessible through several web-based interfaces maintained by CHRS to serve the needs of researchers, professionals and the general public. These interfaces are CHRS iRain, Data Portal and RainSphere, which can be accessed at http://irain.eng.uci.edu, http://chrsdata.eng.uci.edu, and http://rainsphere.eng.uci.edu, respectively, and can be used for visualization, analysis or download of the data. The main objective of this presentation is to provide a concise and clear summary of the similarities and differences between the three products in terms of attributes and algorithm structure. Moreover, the presentation aims to provide an evaluation of the performance of the products over the Contiguous United States (CONUS) using the Climate Prediction Center (CPC) precipitation dataset as a baseline of comparison. Also, an assessment of the behavior of PERSIANN family products over the globe (60°S - 60°N) is performed.
Influence of the internal anatomy on the leakage of root canals filled with thermoplastic technique.
Al-Jadaa, Anas; Attin, T; Peltomäki, T; Heumann, C; Schmidlin, P R; Paquè, F
2018-04-01
The aim of this paper is to evaluate the influence of the internal anatomy on the leakage of root canals filled with the thermoplastic technique. Upper central incisors (UCI) and mesial roots of lower molars (MRLM) (n = 12 each) were tested for leakage using the gas-enhanced permeation test (GEPT) after root filling. The quality of the root fillings was assessed using micro-computed tomography (μCT) by superimposing scans before and after treatment to calculate the unfilled volume. The calculated void volume was compared between the groups and correlated with the measured leakage values. Data were analyzed using the t-test and Pearson's correlation test (p < 0.05). The mean void volume did not differ between UCI and MRLM (13.7 ± 6.2% vs. 14.2 ± 6.8%, respectively). However, significantly more leakage was evident in the MRLM (p < 0.001). While leakage correlated highly with void volume in the MRLM group (R² = 0.981, p < 0.001), no correlation was found in UCI (R² = 0.467, p = 0.126). MRLM showed higher leakage values, which correlated with the void volume of the root canal fillings. Care should always be taken when performing root canal treatments, but particular attention should be paid to teeth with known or expected complex root canal anatomy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-03-01
Development of this handbook began in 1982 at the request of the Radhealth Branch of the California Department of Health Services. California Assembly Bill 1513 directed the DHS to ''evaluate the technical and economic feasibility of (1) reducing the volume, reactivity, and chemical and radioactive hazard of (low-level radioactive) waste and (2) substituting nonradioactive or short-lived radioactive materials for those radionuclides which require long-term isolation from the environment.'' A contract awarded to the University of California at Irvine-UCI (California Std. Agreement 79902), to develop a document focusing on methods for decreasing low-level radioactive waste (LLW) generation in institutions, was a result of that directive. In early 1985, the US Department of Energy, through EG and G Idaho, Inc., contracted with UCI to expand, update, and revise the California text for national release.
Violence: A non-chemical stressor and how it impacts human ...
Objectives of this presentation: (1) define non-chemical stressors and provide an overview of non-chemical stressors in a child's social environment; (2) summarize existing research on exposure to violence as a non-chemical stressor for children under 18 years of age; (3) show that exposure to violence (a non-chemical stressor) may modify the biological response to chemical exposures. To be presented at the Seventh Annual Session on Empowering Sustainability on Earth. Empowering Sustainability is an initiative at the University of California, Irvine, dedicated to connecting sustainability leaders across generations, countries, and disciplines and fostering engagement and research. Launched in 2011, the UCI Summer Seminar Series "Empowering Sustainability on Earth" is co-hosted each July by the UCI Newkirk Center for Science and Society, presenting a series of seminars for the next generation of leaders of global sustainability from over 70 countries around the world. The seminar talks are open to the public.
On the enterohepatic cycle of triiodothyronine in rats; importance of the intestinal microflora
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Herder, W.W.; Hazenberg, M.P.; Pennock-Schroeder, A.M.
1989-01-01
Until 70 h after a single iv injection of 10 µCi (¹²⁵I)triiodothyronine (T₃), normal rats excreted 15.8 ± 2.8% of the radioactivity with the feces and 17.5 ± 2.7% with the urine, while in intestine-decontaminated rats fecal and urinary excretion over this period amounted to 25.1 ± 7.2% and 23.6 ± 4.0% of administered radioactivity, respectively (mean ± SD, n=4). In fecal extracts of decontaminated rats, 11.5 ± 6.8% of the excreted radioactivity consisted of T₃ glucuronide (T₃G) and 10.9 ± 2.8% of T₃ sulfate (T₃S), whereas no conjugates were detected in feces from normal rats. Until 26 h after ig administration of 10 µCi (¹²⁵I)T₃, integrated radioactivity in blood of decontaminated rats was 1.5 times higher than that in normal rats. However, after ig administration of 10 µCi (¹²⁵I)T₃G or (¹²⁵I)T₃S, radioactivity in blood of decontaminated rats was 4.9- and 2.8-fold lower, respectively, than in normal rats. The radioactivity in the serum of control animals was composed of T₃ and iodide in proportions independent of the tracer injected, while T₃ conjugates represented <10% of serum radioactivity. These results suggest an important role of the intestinal microflora in the enterohepatic circulation of T₃ in rats.
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper discusses and proposes a method of feature weighting for classification tasks with the competitive-learning artificial neural network LVQ. The feature weighting method searches for attribute weights using PSO so that each attribute's influence on the resulting output is adjusted. This method is then applied to the LVQ classifier and tested on three datasets obtained from the UCI Machine Learning repository. Accuracy is then analyzed for two approaches: the first, using LVQ1, is referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase LVQ-Classifier accuracy.
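A minimal sketch of the idea under stated assumptions: class-mean prototypes stand in for trained LVQ1 codebooks, and a plain global-best PSO with standard inertia and attraction terms searches for feature weights. The dataset, swarm settings, and validation-accuracy fitness are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

def prototype_accuracy(w, X_tr, y_tr, X_va, y_va):
    """Accuracy of a nearest-class-mean classifier in weighted feature space."""
    classes = np.unique(y_tr)
    protos = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = ((X_va[:, None, :] - protos[None, :, :]) ** 2 * w).sum(axis=2)
    return (classes[d.argmin(axis=1)] == y_va).mean()

X, y = load_wine(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_particles, dim = 20, X.shape[1]
pos = rng.random((n_particles, dim))          # feature weights in [0, 1]
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([prototype_accuracy(p, X_tr, y_tr, X_va, y_va) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(50):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([prototype_accuracy(p, X_tr, y_tr, X_va, y_va) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best weighted accuracy:", pbest_fit.max().round(3))
```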
Integration and Evaluation of Microscope Adapter for the Ultra-Compact Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Smith-Dryden, S. D.; Blaney, D. L.; Van Gorp, B.; Mouroulis, P.; Green, R. O.; Sellar, R. G.; Rodriguez, J.; Wilson, D.
2012-12-01
Petrologic, diagenetic, impact and weathering processes often happen at scales that are not observable from orbit. On Earth, one of the most common things a scientist does when trying to understand detailed geologic history is to create a thin section of the rock and study the mineralogy and texture. Unfortunately, sample preparation and manipulation with advanced instrumentation can be a resource-intensive proposition (e.g., time, power, complexity) in situ. Getting detailed mineralogy and textural information without sample preparation is highly desirable. Visible to short-wavelength microimaging spectroscopy has the potential to provide this information without sample preparation. Wavelengths between 500-2600 nm are sensitive to a wide range of minerals, including mafic minerals, carbonates, clays, and sulfates. The Ultra-Compact Imaging Spectrometer (UCIS) has been developed as a low mass (<2.0 kg), low power (~5.2 W) Offner spectrometer, ideal for use on a Mars rover or other in-situ platforms. The UCIS instrument with its HgCdTe detector provides a spectral resolution of 10 nm over a range of 500-2600 nm, in addition to a 30 degree field of view and a 1.35 mrad instantaneous field of view (Van Gorp et al. 2011). To explore applications of this technology for microscale investigations, an f/10 microimaging adapter has been designed and integrated to allow imaging of samples. The spatial coverage of the instrument is 2.56 cm with sampling of 67.5 microns (380 spatial pixels). Because the adapter is slow relative to the UCIS detector, strong sample illumination is required. Light from the lamp box was directed through optical fiber bundles onto the sample at a high angle of incidence to provide dark-field imaging. For data collection, a mineral sample is mounted on the microscope adapter and scanned by the detector as it is moved horizontally via an actuator. Data from the instrument are stored as an xyz cube end product with one spectral and two spatial dimensions. Measured spectra are then divided by a white reference spectrum of a Spectralon® calibration standard to obtain reflectance. For mineral samples larger than the UCIS field of view, mosaicking of multiple scans may be used. Scans of various rocks and minerals taken with the microscope adapter will be shown and results will be presented. References: Van Gorp et al., Optical design and performance of the Ultra-Compact Imaging Spectrometer, SPIE Optics and Photonics, San Diego, Aug 21-25, 2011. Acknowledgements: This work was conducted at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. Work was carried out with JPL Research and Technology Development Funding.
Micro-Flow Studies in the 1 to 50 Micron Domain
2001-08-01
[Report excerpt: heating the samples in a torch was sufficient to restore them to their original condition. Table-of-contents fragments cover fabrication of small (µm) microchannels at UCI, flow in rectangular microchannel ducts, heat transfer in microchannel ducts, other micro-flow studies, and straight microchannel flow studies.]
OAK-TREE : One-of-a-Kind Traffic Research and Education Experiment
DOT National Transportation Integrated Search
1997-01-01
The creation and progress of OAK-TREE (One-of-a-Kind Traffic Research and Education Experiment) are chronicled. OAK-TREE is a traffic educational laboratory experiment that was developed and conducted at the University of California at Irvine (UCI) d...
What time is it? Deep learning approaches for circadian rhythms.
Agostinelli, Forest; Ceglia, Nicholas; Shahbaba, Babak; Sassone-Corsi, Paolo; Baldi, Pierre
2016-06-15
Circadian rhythms date back to the origins of life, are found in virtually every species and every cell, and play fundamental roles in functions ranging from metabolism to cognition. Modern high-throughput technologies allow the measurement of concentrations of transcripts, metabolites and other species along the circadian cycle, creating novel computational challenges and opportunities, including the problems of inferring whether a given species oscillates in circadian fashion or not, and inferring the time at which a set of measurements was taken. We first curate several large synthetic and biological time series datasets containing labels for both periodic and aperiodic signals. We then use deep learning methods to develop and train BIO_CYCLE, a system to robustly estimate which signals are periodic in high-throughput circadian experiments, producing estimates of amplitudes, periods, phases, as well as several statistical significance measures. Using the curated data, BIO_CYCLE is compared to other approaches and shown to achieve state-of-the-art performance across multiple metrics. We then use deep learning methods to develop and train BIO_CLOCK to robustly estimate the time at which a particular single-time-point transcriptomic experiment was carried out. In most cases, BIO_CLOCK can reliably predict time, within approximately 1 h, using the expression levels of only a small number of core clock genes. BIO_CLOCK is shown to work reasonably well across tissue types, and often with only small degradation across conditions. BIO_CLOCK is used to annotate most mouse experiments found in the GEO database with an inferred time stamp. All data and software are publicly available on the CircadiOmics web portal: circadiomics.igb.uci.edu/. Contact: fagostin@uci.edu or pfbaldi@uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Whole Air Sampling During NASA's March-April 1999 Pacific Exploratory Expedition (PEM-Tropics B)
NASA Technical Reports Server (NTRS)
Blake, Donald R.
2001-01-01
The University of California, Irvine (UCI) collected more than 4500 whole air samples over the remote Pacific Ocean during NASA's Global Tropospheric Experiment (GTE) Pacific Exploratory Mission-Tropics B (PEM-Tropics B) in March and early April 1999. Approximately 140 samples were collected during a typical 8-hour DC-8 flight, and 120 canisters were filled on each 8-hour flight aboard the P-3B. These samples were obtained roughly every 3-7 min during horizontal flight legs and 1-3 min during vertical legs. The filled canisters were analyzed in the laboratory at UCI within ten days of collection. The mixing ratios of 58 trace gases comprising hydrocarbons, halocarbons, alkyl nitrates and DMS were reported (and archived) for each sample. Two identical analytical systems sharing the same standards were operated simultaneously around the clock to improve canister turn-around time and to keep our measurement precision optimal. This report presents a summary of the results for the samples collected.
Discovering Structure in High-Dimensional Data Through Correlation Explanation
2014-12-08
Transforming complex data into simpler, more meaningful forms goes under the rubric of representation learning [2], which shares many goals with... [Snippet also contains reference fragments, including K. Bache and M. Lichman, UCI Machine Learning Repository.]
Montassier, Emmanuel; Hardouin, Jean-Benoît; Segard, Julien; Batard, Eric; Potel, Gilles; Planchon, Bernard; Trochu, Jean-Noël; Pottier, Pierre
2016-04-01
An ECG is pivotal for the diagnosis of coronary heart disease. Previous studies have reported deficiencies in ECG interpretation skills that have been responsible for misdiagnosis. However, the optimal way to acquire ECG interpretation skills is still under discussion. Thus, our objective was to compare the effectiveness of e-learning and lecture-based courses for learning ECG interpretation skills in a large randomized study. We conducted a prospective, randomized, controlled, noninferiority study. Participants were recruited from among fifth-year medical students and were assigned to the e-learning group or the lecture-based group using a computer-generated random allocation sequence. The e-learning and lecture-based groups were compared on a score of effectiveness, comparing the 95% unilateral confidence interval (95% UCI) of the score of effectiveness with the mean effectiveness in the lecture-based group, adjusted for a noninferiority margin. Ninety-eight students were enrolled. As compared with the lecture-based course, e-learning was noninferior with regard to the postcourse test score (15.1; 95% UCI: 14.2 to +∞), which can be compared with 12.5 [the mean effectiveness in the lecture-based group (15.0) minus the noninferiority margin (2.5)]. Furthermore, there was a significant increase in test score points in both the e-learning and lecture-based groups during the study period (both P<0.0001). Our randomized study showed that the e-learning course is an effective tool for the acquisition of ECG interpretation skills by medical students. These preliminary results should be confirmed by further multicenter studies before the implementation of e-learning courses for learning ECG interpretation skills during medical school.
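As a concrete reading of the noninferiority criterion above, the sketch below computes a one-sided 95% lower confidence bound for a group mean and compares it against the lecture-based mean minus the margin. Only the means (15.1, 15.0) and margin (2.5) come from the abstract; the simulated scores, group size, and standard deviation are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical e-learning scores (49 students, mean ~15.1): an assumption.
scores = np.random.default_rng(1).normal(15.1, 3.0, size=49)
n, mean, se = len(scores), scores.mean(), stats.sem(scores)
lower = mean - stats.t.ppf(0.95, df=n - 1) * se  # one-sided 95% lower bound

lecture_mean, margin = 15.0, 2.5
print(f"lower bound {lower:.1f} vs threshold {lecture_mean - margin:.1f}")
print("noninferior" if lower > lecture_mean - margin else "inconclusive")
```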
Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.
Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan
2017-01-01
Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed, and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is decided automatically by using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mappings that arise in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test for comparison of multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, pointing to promising feature selection and multi-class recognition in coal-rock recognition.
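The pipeline described (score-based feature selection followed by an SVM) can be approximated with standard tooling. The sketch below substitutes sklearn's ANOVA F-score (f_classif) for the paper's MF-Score and a plain RBF SVM for the MEB-SVM combination; the caving dataset is not public, so a bundled UCI-style dataset stands in.

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=6),   # keep the 6 highest-scoring features
    SVC(kernel="rbf", C=1.0),
)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean().round(3))
```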
The Promise of Politics and Pedagogy in Derrida
ERIC Educational Resources Information Center
Peters, Michael A.
2006-01-01
In this article, the author profiles Jacques Derrida, whose teaching activity made an invaluable and indelible contribution to the intellectual life of the University of California, Irvine (UCI). The question of pedagogy is central for Derrida, not only in terms of teaching people to read and write differently, but as a means for appreciating the…
Measurements Conducted on an Unknown Object Labeled Pu-239
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoteling, Nathan
Measurements were carried out on 12 November 2013 to determine whether Pu-239 was present on an object discovered in a plastic bag labeled "Pu-239 6 uCi." Following initial survey measurements to verify that the object was not leaking or contaminated, spectra were collected with a High Purity Germanium (HPGe) detector with the object positioned in two different configurations. Analysis of the spectra did not yield any direct evidence of Pu-239. From the measured spectra, the minimum detectable activity (MDA) was determined to be approximately 2 uCi for the gamma-ray measurements. Although there was no direct evidence of Pu-239, a peak at 60 keV characteristic of Am-241 decay was observed. Since it is very likely that Am-241 would be present in aged plutonium samples, this was interpreted as indirect evidence for the presence of plutonium on the object. Analysis of this peak led to an estimated Pu-239 activity of 0.02-0.04 uCi, or <1×10⁻⁶ grams.
Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems
2017-05-24
Status excerpt: Functional Modeling Compiler (SCCT): FM Compiler and Key Performance Indicators (KPI), May 2018, pending. Model Management Backbone (SCCT): MMB demonstration... implement the agent-based distributed runtime; KPIs for single/multicore controllers and temporal/spatial domains; integration of the model management... Distributed Runtime (UCI): not started. Model Management Backbone (SCCT): not started.
ERIC Educational Resources Information Center
Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.
2012-01-01
Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…
ERIC Educational Resources Information Center
National Educational Computing Association, Eugene, OR.
This document contains the proceedings of the National Educational Computing Conference (NECC) 2001. The following research papers are included: "UCI Computer Arts: Building Gender Equity While Meeting ISTE NETS" (Kimberly Bisbee Burge); "From Mythology to Technology: Sisyphus Makes the Leap to Learn" (Patricia J. Donohue, Mary…
NASA Technical Reports Server (NTRS)
Mikic, Zoran; Grebowsky, Joseph (Technical Monitor)
2001-01-01
This report covers technical progress during the first quarter of the second year of NASA Sun-Earth Connections Theory Program (SECTP). SAIC and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model.
AVNM: A Voting based Novel Mathematical Rule for Image Classification.
Vidyarthi, Ankit; Mittal, Namita
2016-12-01
In machine learning, the accuracy of a system depends upon its classification results. Classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Besides its easiness, simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used in the computation. At present, no statistical algorithm readily yields the optimal value of "k" that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from any initial selection of a predefined variable and neighbor selection, as found in the KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimental results based on the confusion-matrix accuracy parameter show higher accuracy values with the AVNM rule. The proposed AVNM rule is based on a sample space reduction mechanism for identification of the optimal number of nearest neighbors. AVNM results in better classification accuracy and a lower error rate compared with the state-of-the-art algorithm, KNN, and its variants. The proposed rule automates nearest-neighbor selection and improves the classification rate for the UCI datasets and the manually created dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
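The AVNM rule itself is defined in the paper and not reproduced here; as a point of reference, the sketch below shows the generic distance-weighted voting that such rules refine. Note that k is still fixed here, which is exactly the dependence AVNM removes; the dataset and k value are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

def weighted_vote_predict(X_tr, y_tr, X_te, k=7):
    """Distance-weighted KNN: closer neighbors contribute larger votes."""
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-12)               # inverse-distance weights
        votes = np.bincount(y_tr[idx], weights=w)
        preds.append(votes.argmax())
    return np.array(preds)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc = (weighted_vote_predict(X_tr, y_tr, X_te) == y_te).mean()
print(f"weighted-vote accuracy: {acc:.3f}")
```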
Estimating replicate time shifts using Gaussian process regression
Liu, Qiang; Andersen, Bogi; Smyth, Padhraic; Ihler, Alexander
2010-01-01
Motivation: Time-course gene expression datasets provide important insights into dynamic aspects of biological processes, such as circadian rhythms, cell cycle and organ development. In a typical microarray time-course experiment, measurements are obtained at each time point from multiple replicate samples. Accurately recovering the gene expression patterns from experimental observations is made challenging by both measurement noise and variation among replicates' rates of development. Prior work on this topic has focused on inference of expression patterns assuming that the replicate times are synchronized. We develop a statistical approach that simultaneously infers both (i) the underlying (hidden) expression profile for each gene, as well as (ii) the biological time for each individual replicate. Our approach is based on Gaussian process regression (GPR) combined with a probabilistic model that accounts for uncertainty about the biological development time of each replicate. Results: We apply GPR with uncertain measurement times to a microarray dataset of mRNA expression for the hair-growth cycle in mouse back skin, predicting both profile shapes and biological times for each replicate. The predicted time shifts show high consistency with independently obtained morphological estimates of relative development. We also show that the method systematically reduces prediction error on out-of-sample data, significantly reducing the mean squared error in a cross-validation study. Availability: Matlab code for GPR with uncertain time shifts is available at http://sli.ics.uci.edu/Code/GPRTimeshift/ Contact: ihler@ics.uci.edu PMID:20147305
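The core idea, fitting a smooth profile while allowing each replicate its own time offset, can be sketched with off-the-shelf GPR. Below, a GP with a periodic kernel is fit to a reference replicate and a second replicate's shift is estimated by a grid search; the authors' joint probabilistic model (and their Matlab code) is more principled than this, and all data here are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 13)                                   # sampling times (h)
ref = np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.1, t.size)
shifted = np.sin(2 * np.pi * (t + 3.0) / 24) + rng.normal(0, 0.1, t.size)  # true shift: 3 h

# Periodic kernel with the circadian period held fixed at 24 h.
kernel = ExpSineSquared(length_scale=3.0, periodicity=24.0,
                        periodicity_bounds="fixed") + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel).fit(t[:, None], ref)

# Grid-search the shift that best aligns the second replicate with the GP fit.
candidates = np.arange(-6, 6.25, 0.25)
sse = [np.sum((gp.predict((t + s)[:, None]) - shifted) ** 2) for s in candidates]
print("estimated shift (h):", candidates[int(np.argmin(sse))])
```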
Strategic hospital partnerships: improved access to care and increased epilepsy surgical volume.
Vadera, Sumeet; Chan, Alvin Y; Mnatsankanyan, Lilit; Sazgar, Mona; Sen-Gupta, Indranil; Lin, Jack; Hsu, Frank P K
2018-05-01
OBJECTIVE Surgical treatment of patients with medically refractory focal epilepsy is underutilized. Patients may lack access to surgically proficient centers. The University of California, Irvine (UCI) entered strategic partnerships with 2 epilepsy centers with limited surgical capabilities. A formal memorandum of understanding (MOU) was created to provide epilepsy surgery to patients from these centers. METHODS The authors analyzed UCI surgical and financial data associated with patients undergoing epilepsy surgery between September 2012 and June 2016, before and after institution of the MOU. Variables collected included the length of stay, patient age, seizure semiology, use of invasive monitoring, and site of surgery, as well as the monthly number of single-surgery cases, complex cases (i.e., staged surgeries), and the overall number of surgery cases. RESULTS Over the 46 months of the study, a total of 104 patients underwent a total of 200 operations; 71 operations were performed in 39 patients during the pre-MOU period (28 months) and 129 operations were performed in 65 patients during the post-MOU period (18 months). There was a significant difference in the use of invasive monitoring, the site of surgery, the final therapy, and the type of insurance. The number of single-surgery cases, complex-surgery cases, and the overall number of cases increased significantly. CONCLUSIONS Partnerships with outside epilepsy centers are a means to increase access to surgical care. These partnerships are likely reproducible, can be mutually beneficial to all centers involved, and ultimately improve patient access to care.
A Comparative Study with RapidMiner and WEKA Tools over some Classification Techniques for SMS Spam
NASA Astrophysics Data System (ADS)
Foozy, Cik Feresa Mohd; Ahmad, Rabiah; Faizal Abdollah, M. A.; Chai Wen, Chuah
2017-08-01
SMS spamming is a serious attack that can manipulate the use of SMS by spreading advertisements in bulk. Sending unwanted SMS messages that contain advertisements disturbs users and violates their privacy. To overcome these issues, many studies have proposed to detect SMS spam by using data mining tools. This paper presents a comparative study using five machine learning techniques, namely Naïve Bayes, K-NN (K-Nearest Neighbour algorithm), Decision Tree, Random Forest and Decision Stumps, to observe the accuracy results of RapidMiner and WEKA on the SMS Spam dataset from the UCI Machine Learning repository.
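For orientation, the same five classifier families can be run in scikit-learn (a third toolkit, used here only as an assumption for illustration). The local file name and tab-separated format of the UCI SMS Spam Collection are assumptions about how the data was stored; a depth-1 decision tree stands in for a decision stump.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Assumed local copy of the UCI SMS Spam Collection (label <TAB> text).
df = pd.read_csv("SMSSpamCollection", sep="\t", names=["label", "text"])
models = {
    "naive bayes": MultinomialNB(),
    "knn": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "decision stump": DecisionTreeClassifier(max_depth=1, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(CountVectorizer(), model)
    acc = cross_val_score(pipe, df["text"], df["label"], cv=5).mean()
    print(f"{name}: {acc:.3f}")
```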
ERIC Educational Resources Information Center
Galligani, Dennis J.
This first volume of the University of California, Irvine, (UCI) Student Affirmative Action (SAA) Five-Year Plan provides an overview of the plan and the planning process, lists campus SAA goals and objectives, summarizes campus SAA activities, and describes the research and evaluation components of the plan. Topics include: the historical context…
NASA Technical Reports Server (NTRS)
Mikic, Zoran; Grebowsky, J. (Technical Monitor)
2000-01-01
This report details progress during the first quarter of the first year of our Sun-Earth Connections Theory Program (SECTP) contract. Science Applications International Corporation (SAIC) and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model.
Dataset from chemical gas sensor array in turbulent wind tunnel.
Fonollosa, Jordi; Rodríguez-Luján, Irene; Trincavelli, Marco; Huerta, Ramón
2015-06-01
The dataset includes the acquired time series of a chemical detection platform exposed to different gas conditions in a turbulent wind tunnel. The chemo-sensory elements were sampling directly the environment. In contrast to traditional approaches that include measurement chambers, open sampling systems are sensitive to dispersion mechanisms of gaseous chemical analytes, namely diffusion, turbulence, and advection, making the identification and monitoring of chemical substances more challenging. The sensing platform included 72 metal-oxide gas sensors that were positioned at 6 different locations of the wind tunnel. At each location, 10 distinct chemical gases were released in the wind tunnel, the sensors were evaluated at 5 different operating temperatures, and 3 different wind speeds were generated in the wind tunnel to induce different levels of turbulence. Moreover, each configuration was repeated 20 times, yielding a dataset of 18,000 measurements. The dataset was collected over a period of 16 months. The data is related to "On the performance of gas sensor arrays in open sampling systems using Inhibitory Support Vector Machines", by Vergara et al. [1]. The dataset can be accessed publicly at the UCI repository upon citation of [1]: http://archive.ics.uci.edu/ml/datasets/Gas+sensor+arrays+in+open+sampling+settings.
Tracking linkage to HIV care for former prisoners
Montague, Brian T.; Rosen, David L.; Solomon, Liza; Nunn, Amy; Green, Traci; Costa, Michael; Baillargeon, Jacques; Wohl, David A.; Paar, David P.; Rich, Josiah D.; Study Group, on behalf of the LINCS
2012-01-01
Improving testing and uptake of care among highly impacted populations is a critical element of Seek, Test, Treat and Retain strategies for reducing HIV incidence in the community. HIV disproportionately impacts prisoners. Although incarceration provides an opportunity to diagnose and initiate therapy, treatment is frequently disrupted after release. Though model programs exist to support linkage to care on release, there is a lack of scalable metrics with which to assess the adequacy of linkage to care after release. Linking data from Ryan White program Client Level Data (CLD) files reported to HRSA with corrections release data offers an attractive means of generating these metrics. Identified only by use of a confidential encrypted Unique Client Identifier (eUCI), these CLD files allow collection of key clinical indicators across the system of Ryan White funded providers. Using eUCIs generated from corrections release data sets as a linkage tool, the time to the first service at community providers, along with key clinical indicators of patient status at entry into care, can be determined as measures of linkage adequacy. Using this strategy, high- and low-performing sites can be identified and best practices can be identified to reproduce these successes in other settings. PMID:22561157
To develop a multi-site prospective clinical validation trial of the multigene diagnostic signature for the diagnosis of prostate cancer from non-tumor-containing biopsy tissue. Prostate cancer now affects one in five men in the U.S. It is diagnosed by examination of a biopsy sample of the prostate gland by a pathologist, and treatment decisions such as the choice of surgery are usually not made without direct visualization of the presence of cancer by a pathologist. There are about one million such biopsy procedures in the U.S. every year. However, about 100,000 to 200,000 are ambiguous owing to the absence of tumor but the presence of small changes, such as atypical small acinar proliferations (ASAP) or proliferations within otherwise normal glands (PIN, prostate intraepithelial neoplasia), that are highly suspicious for cancer. Studies by the UCI/NCI SPECS project on prostate cancer have led to a new way to diagnose the presence of prostate cancer in these ambiguous cases. Researchers of the UCI/NCI SPECS project observed that the tissue around a tumor, called stroma, has many altered gene activities that are caused by molecules secreted by the tumor cells. Indeed, these studies revealed that 114 genes exhibited altered activity in stroma near tumor compared to normal stroma. These changes can be used as a "signature" to examine new samples to determine the "presence of tumor". Such a test has many applications. Currently, patients with ambiguous results are asked to return for a repeat biopsy in 3 to 12 months, an agonizing period during which they receive no guidance and during which any tumor may continue to grow and spread. Thus, the new test would detect tumor 3 to 12 months earlier than conventional practice and would avoid repeated biopsy procedures. Patients who are positive by the new test may consider whether immediate medical treatment or neoadjuvant treatment is appropriate. In addition, the ability to detect the presence of tumor early will avoid the necessity of waiting for a repeat biopsy procedure. Finally, the genes that undergo altered activity reveal fundamental information about how tumors alter the cellular environment. The National Cancer Institute (NCI) program called the Early Detection Research Network (EDRN) has agreed to support the continued development of the 114-gene signature for diagnosis of prostate cancer. Under this program, the 114-gene signature will undergo a series of studies designed to validate its accuracy and reliability. The gene signature will be applied to actual biopsies of consenting patients who have an ambiguous result for the first biopsy. Patients will be drawn from UCI and the Orange County Urology Associates. All biopsy samples of this prospective clinical trial will be analyzed in the UCI CLIA-approved Molecular Genetics Laboratory to determine the presence of tumor, a step which will facilitate eventual FDA approval for the test. The accuracy of these results will be compared to the answers determined by a pathologist's examination of the repeat biopsy samples gathered from the same patients 3 to 12 months later. The next step will be to apply the test to formalin-fixed and paraffin-embedded biopsy samples, which is the way patients' biopsy material across the country is preserved as part of their medical records. Once successfully validated, the new test can be applied to any patient who has had an "ambiguous" biopsy.
The EDRN supported program will allow us to validate a new type of test for prostate cancer that will speed up diagnosis of ambiguous cases by providing early detection, provide guidance for treatment, avoid repeated biopsy procedures, and will reveal new information about the mechanisms involved in the development and growth of prostate cancer.
2014-01-01
Abbreviations excerpt: UAV, unmanned aircraft vehicle; UCI, User-Computer Interface; UCS, UAS control segment; UGS, unmanned ground system; UGV, unmanned ground vehicle. ...made substantial progress in the deployment of more capable sensors, unmanned aircraft systems (UAS), and other unmanned systems (UxS). Innovative... progress in fielding more, and more capable, unmanned aircraft systems (UAS) to meet the needs of warfighters
Diagnosing Parkinson's Diseases Using Fuzzy Neural System
Abiyev, Rahib H.; Abizade, Sanan
2016-01-01
This study presents the design of a recognition system that discriminates between healthy people and people with Parkinson's disease. Diagnosis of Parkinson's disease is performed using a fusion of fuzzy systems and neural networks. The structure and learning algorithms of the proposed fuzzy neural system (FNS) are presented. The approach described in this paper enhances the capability of the designed system to efficiently distinguish healthy individuals, as demonstrated through simulations performed using data obtained from the UCI machine learning repository. A comparative study was carried out, and the simulation results demonstrated that the proposed fuzzy neural system improves the recognition rate of the designed system. PMID:26881009
Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.
Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou
2016-01-01
For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) and improved algorithms lead to increased run time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribute reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribute set, and a function consistency replacement algorithm is given to solve the integration of local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutually restraining factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms (genetic algorithm and forward hill-climbing) in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
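The abstract gives the form of the WAD score: a weighted harmonic mean of accuracy and diversity. One plausible reading, with two weights balancing the terms, is sketched below; the paper's exact diversity measure and weighting scheme are not reproduced here.

```python
def wad(accuracy, diversity, w1=0.5, w2=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity."""
    return (w1 + w2) / (w1 / accuracy + w2 / diversity)

print(wad(0.90, 0.40))            # balanced weighting -> ~0.554
print(wad(0.90, 0.40, 0.7, 0.3))  # weighting that favors accuracy
```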
Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps
NASA Astrophysics Data System (ADS)
Tong, Rui; Komma, Jürgen
2017-04-01
The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variability of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing natural flow resistance. In recent years, the calibration of hydrodynamic models has become more practical and faster, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges and flood inundation maps were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value, and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameters in the calibration of the HEC-Ras model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency objective function, was very robust, yielding more reliable flood simulations that also captured the peak value and the timing of the peak.
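One of the objective functions named above is easy to make concrete. The sketch below computes the Nash-Sutcliffe efficiency for a simulated-versus-observed discharge pair; the series are small placeholders, not data from the study.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is perfect, <= 0 is poor."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([120.0, 340.0, 510.0, 460.0, 280.0])  # gauged discharge, m3/s
sim = np.array([110.0, 360.0, 495.0, 470.0, 300.0])  # model output, m3/s
print(f"NSE = {nse(obs, sim):.3f}")
```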
Synthesis and Characterization of Thianthrene-Based Polyamides
1994-07-15
pyrrolidinone using triphenyl phosphite and pyridine. The fused-ring thianthrene-based polyamides were more soluble than analogous poly(thioether amide)s... sodium hydroxide, and triphenyl phosphite (TPP) was vacuum distilled. LiCl and CaCl2 were dried at 180 °C for 48 hours under vacuum. 4,4'-Oxydianiline
Smokeless Propellants as Vehicle Borne IED Main Charges: An Initial Threat Assessment
2008-01-01
[Garbled excerpt: a propellant hazard table fragment (danger class (B); critical detonation height 45-65 cm; detonation danger during filling; material in mixing trough, in barrels as a...) and a list of appendices: Appendix A, Examples of Morphology; Appendix B, ATF List of Explosives Materials; Appendix C, Cabella Web Page; Appendix D, ATF Intelligence Report on Explosives.] ...available for exploitation by violent extremist organizations and individuals. Discussion: Conventional explosive materials remain the most probable
Kirmitzoglou, Ioannis; Promponas, Vasilis J
2015-07-01
Local compositionally biased and low complexity regions (LCRs) in amino acid sequences initially attracted the interest of researchers due to their implication in generating artifacts in sequence database searches. There is accumulating evidence of the biological significance of LCRs in both physiological and pathological situations. Nonetheless, LCR-related algorithms and tools have not gained wide appreciation across the research community, partly because only a handful of user-friendly software tools are currently freely available. We developed LCR-eXXXplorer, an extensible online platform attempting to fill this gap. LCR-eXXXplorer offers tools for displaying LCRs from the UniProt/SwissProt knowledgebase, in combination with other relevant protein features, predicted or experimentally verified. Moreover, users may perform powerful queries against a custom-designed sequence/LCR-centric database. We anticipate that LCR-eXXXplorer will be a useful starting point in research efforts toward the elucidation of the structure, function and evolution of proteins with LCRs. LCR-eXXXplorer is freely available at the URL http://repeat.biol.ucy.ac.cy/lcr-exxxplorer. Contact: vprobon@ucy.ac.cy. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Episodic air quality impacts of plug-in electric vehicles
NASA Astrophysics Data System (ADS)
Razeghi, Ghazal; Carreras-Sospedra, Marc; Brown, Tim; Brouwer, Jack; Dabdub, Donald; Samuelsen, Scott
2016-07-01
In this paper, the Spatially and Temporally Resolved Energy and Environment Tool (STREET) is used in conjunction with University of California Irvine - California Institute of Technology (UCI-CIT) atmospheric chemistry and transport model to assess the impact of deploying plug-in electric vehicles and integrating wind energy into the electricity grid on urban air quality. STREET is used to generate emissions profiles associated with transportation and power generation sectors for different future cases. These profiles are then used as inputs to UCI-CIT to assess the impact of each case on urban air quality. The results show an overall improvement in 8-h averaged ozone and 24-h averaged particulate matter concentrations in the South Coast Air Basin (SoCAB) with localized increases in some cases. The most significant reductions occur northeast of the region where baseline concentrations are highest (up to 6 ppb decrease in 8-h-averaged ozone and 6 μg/m3 decrease in 24-h-averaged PM2.5). The results also indicate that, without integration of wind energy into the electricity grid, the temporal vehicle charging profile has very little to no effect on urban air quality. With the addition of wind energy to the grid mix, improvement in air quality is observed while charging at off-peak hours compared to the business as usual scenario.
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.
Amudha, P; Karthik, S; Sivakumari, S
2015-01-01
Intrusion detection has become a central part of network security due to the huge number of attacks that affect computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and classification accuracies are estimated by 10-fold cross-validation. The purpose of this paper is to select the most relevant features that represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning Repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different.
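As an illustration of the evaluation protocol only (the MABC-EPSO search itself is not reproduced), the sketch below scores a hypothetical feature subset with 10-fold cross-validation; a synthetic dataset stands in for KDDCup'99, and the chosen feature indices are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the 41-feature KDDCup'99 data.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=12, random_state=0)

# Hypothetical feature mask, as if chosen by the swarm search.
mask = np.zeros(41, dtype=bool)
mask[[1, 4, 5, 9, 12, 22, 30]] = True

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X[:, mask], y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```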
Evaluation of Multimodal Imaging Biomarkers of Prostate Cancer
2014-09-01
scan duration ~21 min). PET imaging was performed on a Concorde Microsystems microPET Focus 220. Approximately 120 µCi of tracer was administered... PET tracer targeting translocator protein (TSPO) expression, using 18F-VUIIS1008 (a probe developed in-house), and hypoxia, using 18F...manuscripts describing these efforts. First, we plan to submit a manuscript validating the use of the TSPO PET tracer developed in house in the Pten/p53
The Structure and Dynamics of the Solar Corona
NASA Technical Reports Server (NTRS)
Mikic, Zoran
1998-01-01
Under this contract SAIC, the University of California, Irvine (UCI), and the Jet Propulsion Laboratory (JPL), have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model. During the period covered by this report we have published 17 articles in the scientific literature. These publications are listed in Section 4 of this report. In the Appendix we have attached reprints of selected articles.
A study on the performance comparison of metaheuristic algorithms on the learning of neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2017-08-01
The learning or training process of neural networks entails finding the optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from the traditional gradient-descent-based methods, metaheuristic methods can also be used for this purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed an explosion of novel metaheuristic algorithms, such as the harmony search algorithm, the bat algorithm, and the whale optimization algorithm. Despite the proof of the no-free-lunch theorem in optimization, a survey of the machine learning literature gives contrasting results: some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper investigates whether a particular metaheuristic algorithm outperforms the others. Three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated into the learning of neural networks, and their classification results on benchmark UCI machine learning data sets are compared. All three metaheuristic algorithms are found to give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, including the need for statistical analysis to verify the results and for further theoretical work to support the empirical findings.
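As a minimal sketch of how a metaheuristic can replace gradient descent in neural-network training, the following Python code applies particle swarm optimization to the weights of a tiny one-hidden-layer network on synthetic data. The network size, swarm parameters, and data are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(float)

H = 6                                 # hidden units (assumed)
dim = 4 * H + H + H + 1               # W1, b1, W2, b2 flattened

def loss(w):
    """MSE of the network whose weights are the flattened vector w."""
    W1 = w[:4 * H].reshape(4, H); b1 = w[4 * H:4 * H + H]
    W2 = w[4 * H + H:4 * H + 2 * H]; b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((p - y) ** 2)

# Standard PSO: each particle is a candidate weight vector.
n = 30
pos = rng.normal(size=(n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```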
NASA Astrophysics Data System (ADS)
Shearer, E. J.; Nguyen, P.; Ombadi, M.; Palacios, T.; Huynh, P.; Furman, D.; Tran, H.; Braithwaite, D.; Hsu, K. L.; Sorooshian, S.; Logan, W. S.
2017-12-01
During the 2017 hurricane season, three major hurricanes (Harvey, Irma, and Maria) devastated the Atlantic coast of the US and the Caribbean islands. Harvey set the record as the rainiest storm in continental US history, Irma was the longest-lived powerful hurricane ever observed, and Maria was the costliest storm in Puerto Rican history. The recorded maximum precipitation totals for these storms were 65, 16, and 20 inches, respectively. These events provided the Center for Hydrometeorology and Remote Sensing (CHRS) an opportunity to test its global real-time satellite precipitation observation system, iRain, on extreme storm events. The iRain system has been under development through a collaboration between CHRS at the University of California, Irvine (UCI) and UNESCO's International Hydrological Program (IHP). iRain provides near real-time, high-resolution (0.04°, approx. 4 km), global (60°N - 60°S) satellite precipitation data estimated by the PERSIANN-Cloud Classification System (PERSIANN-CCS) algorithm developed by scientists at CHRS. The user-interactive, web-accessible iRain system allows users to visualize and download real-time global satellite precipitation estimates and to track the development and path of the 50 largest current storms globally from data generated by the PERSIANN-CCS algorithm. iRain has consistently proven to be an effective tool for measuring real-time precipitation from extreme storms, especially in locations without extensive rain gauge or radar coverage, such as large portions of the world's oceans and continents such as Africa and Asia. CHRS also created a mobile app version of the system named "iRain UCI", available for iOS and Android devices. During these storms, real-time rainfall data generated by PERSIANN-CCS were consistently comparable to radar and rain gauge data. This presentation evaluates iRain's efficiency as a tool for extreme precipitation monitoring and assesses the PERSIANN-CCS real-time rainfall estimates during Hurricanes Harvey, Irma, and Maria against radar and rain gauge data using continuous (correlation, root-mean-square error, and bias) and categorical (POD and FAR) indices. These results show the relative skill of PERSIANN-CCS real-time data with respect to radar and rain gauge data.
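For reference, here is a minimal sketch of the continuous and categorical indices named above (correlation, RMSE, bias, POD, FAR), applied to hypothetical satellite and gauge rainfall series; the 0.1 mm rain/no-rain threshold is an assumption.

```python
import numpy as np

def verify(sat, gauge, thresh=0.1):
    """Continuous and categorical verification of satellite vs. gauge rainfall."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    corr = np.corrcoef(sat, gauge)[0, 1]
    rmse = float(np.sqrt(np.mean((sat - gauge) ** 2)))
    bias = sat.sum() / gauge.sum()                      # multiplicative bias
    hit  = np.sum((sat >= thresh) & (gauge >= thresh))  # both report rain
    miss = np.sum((sat <  thresh) & (gauge >= thresh))  # gauge rain missed
    fa   = np.sum((sat >= thresh) & (gauge <  thresh))  # satellite false alarm
    pod = hit / (hit + miss)                            # probability of detection
    far = fa / (hit + fa)                               # false alarm ratio
    return dict(corr=corr, rmse=rmse, bias=bias, POD=pod, FAR=far)

# Hypothetical hourly series (mm):
print(verify([0.0, 1.2, 3.4, 0.2], [0.0, 1.0, 4.0, 0.0]))
```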
Census of U.S. Civil Aircraft. Calendar Year 1981.
1981-12-31
that for 1978. This increase is partly due to the deregulation of the airlines under the Airline Deregulation Act of 1978 and the associated entry of...[the remainder of this excerpt is an unrecoverable OCR fragment of the census's tabulated operator listing, naming carriers such as Airlift Associates, Air Logistics of Alaska, Air Nev Airlines, and Mountain Home Air Service]
Cohesion: Exploring the Myths and Opening the Veil
2008-03-24
connections among members of the collectivity."24 Adding to these two components first identified by Durkheim,25 values also play a fundamental...from http://eclectic.ss.uci.edu/~drwhite/soc_con17.pdf; Internet; accessed 5 November 2006. 8. Bollen and Hoyle, 481, and Friedkin, 411. 9. Emile Durkheim, The Division of Labor in Society, trans. W.D. Halls (New York: The Free Press, [1893]; 1984), 24; quoted in Moody and White, 1. 10. For more
Intra-operative Cerenkov Imaging for Guiding Breast Cancer Surgery and Assessing Tumor Margins
2014-03-01
from 10 million to 100 billion, the simulation time followed a linear trend [Fig. 6(a), r² = 0.9998]. Each incremental one million...field. Cerenkov luminescence was detected up to a depth of 5 mm (in tissue-mimicking material, given 100 µCi of activity). We found that one of the...distributed calculation of the intersection of a set of rays with a triangular mesh is challenging on the GPU. Monte
Design of Kinetic Energy Projectiles for Structural Integrity
1981-09-01
wear, and good pressure sealing experience. Unfortunately, the constitutive relations for these materials are highly temperature and rate of loading...Before any grooves are dimensioned, the maximum shear stress at the interface must be determined from a finite...concentrations in these sensitive materials. Fillet radii at the root of the tooth should be increased to the maximum size consistent with good fit between
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.
2015-12-01
The Virtual Ice Sheet Laboratory (VISL) is a cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.
Use of Threshold of Toxicological Concern (TTC) with High ...
Although progress has been made with HTS (high-throughput screening) in profiling biological activity (e.g., EPA's ToxCast™), challenges arise in interpreting HTS results in the context of adversity & in converting HTS assay concentrations to equivalent human doses for the broad domain of commodity chemicals. Here, we propose using TTC as a risk screening method to evaluate exposure ranges derived from NHANES for 7968 chemicals. Because the well-established TTC approach uses hazard values derived from in vivo toxicity data, relevance to adverse effects is robust. We compared the conservative TTC (non-cancer) value of 90 μg/day (1.5 μg/kg/day) (Kroes et al., Fd Chem Toxicol, 2004) to quantitative exposure predictions of the upper 95% credible interval (UCI) of median daily exposures for 7968 chemicals in 10 different demographic groups (Wambaugh et al., Environ Sci Technol. 48:12760-7, 2014). Results indicate that: (1) none of the median values of the credible interval of exposure for any chemical in any demographic group was above the TTC; & (2) fewer than 5% of chemicals had a UCI that exceeded the TTC for any group. However, these median exposure predictions do not cover highly exposed (e.g., occupational) populations. Additionally, we propose an expanded risk-based screening workflow that comprises a TTC decision tree that includes screening compounds for structural alerts for DNA reactivity, OPs & carbamates, as well as a comparison with bioactivity-based margins of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volotskova, O; Sun, C; Pratx, G
2014-06-15
Purpose: Cerenkov photons are produced when charged particles emitted from radionuclides travel through a medium faster than light does in that medium. Cerenkov radiation is mostly in the UV/blue region and, thus, readily absorbed by biological tissue. Cerenkov Radiation Energy Transfer (CRET) is a wavelength-shifting phenomenon from blue Cerenkov light to more penetrating red wavelengths. We demonstrate the feasibility of in-depth imaging of CRET light originating from radionuclides, realized by down-conversion with gold nanoclusters (AuNCs, novel particles composed of a few atoms of gold coated with serum proteins) in vivo. Methods: Bovine serum albumin-, human serum albumin- and transferrin-conjugated gold nanoclusters were synthesized, characterized and examined for CRET. Three clinically used radiotracers were studied: 18F-FDG, 90Y and 99mTc. The optical spectrum (440-750 nm) was recorded with a sensitive bioluminescence imaging system at physiological temperature. Dose-dependence (activity range from 0.5 up to 800 µCi) and concentration-dependence (0.01 to 1 µM) studies were carried out. The compound was also imaged in a xenograft mouse model. Results: Only β+- and β−-emitting radionuclides (18F-FDG, 90Y) are capable of CRET; no signal was found with 99mTc (a γ-emitter). The emission peak of CRET by AuNCs was found to be ∼700 nm, with intensity ∼3-fold above background. In vitro studies showed a linear dependence of luminescence intensity on both dose and concentration. CRET by gold nanoclusters was observed in xenografted mice injected with 100 µCi of 18F-FDG. Conclusion: The unique optical, transport and chemical properties of AuNCs (gold nanoclusters) make them ideal candidates for in-vivo imaging applications. Development of new molecular imaging probes will allow us to achieve substantially improved spatiotemporal resolution, sensitivity and specificity for tumor imaging and detection.
NASA Astrophysics Data System (ADS)
Waldman, Amy Sue
I. Protein structure is not easily predicted from the linear sequence of amino acids. An increased ability to create protein structures would allow researchers to develop new peptide-based therapeutics and materials, and would provide insights into the mechanisms of protein folding. Toward this end, we have designed and synthesized two-stranded antiparallel beta-sheet mimics containing conformationally biased scaffolds and semicarbazide, urea, and hydrazide linker groups that attach peptide chains to the scaffold. The mimics exhibited populations of intramolecularly hydrogen-bonded beta-sheet-like conformers, as determined by spectroscopic techniques such as FTIR, ¹H NMR, and ROESY studies. During our studies, we determined that a urea-hydrazide beta-strand mimic was able to hydrogen bond tightly to peptides in an antiparallel beta-sheet-like configuration. Several derivatives of the urea-hydrazide beta-strand mimic were synthesized. Preliminary electron microscopy data indicate that the beta-strand mimics have an effect on the folding of Alzheimer's Abeta peptide. These data suggest that the urea-hydrazide beta-strand mimics and related compounds may be developed into therapeutics that affect the folding of the Abeta peptide into neurotoxic aggregates. II. In recent years, there has been concern about the low level of science literacy and science interest among Americans. A declining interest in science impacts people's ability to make informed decisions about technology. To increase interest in science among secondary students, we developed the UCI Chemistry Outreach Program to High Schools. The program features demonstration shows and discussions about chemistry in everyday life. The development and use of show scripts have enabled large numbers of graduate and undergraduate student volunteers to demonstrate chemistry to more than 12,000 local high school students. Teachers, students, and volunteers have expressed their enjoyment of the UCI Chemistry Outreach Program to High Schools.
2009-12-01
Excerpt from the report's list of acronyms: IMPLND, Impervious Land Cover; INFILT, Interflow Inflow Parameter (related to infiltration capacity of the soil); INSUR, Manning's N for the...; SCCWRP, Southern California Coastal Water Research Project; SCS, Soil Conservation Service; SGA, Shellfish Growing Area; SPAWAR, Space and Naval...; UCI, User Control Input; USACE, U.S. Army Corps of Engineers; USEPA, U.S. Environmental Protection Agency; USGS, U.S. Geological Survey; USLE, Universal...
AmeriFlux US-CZ4 Sierra Critical Zone, Sierra Transect, Subalpine Forest, Shorthair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Michael
This is the AmeriFlux version of the carbon flux data for the site US-CZ4 Sierra Critical Zone, Sierra Transect, Subalpine Forest, Shorthair. Site Description - Half-hourly data are available at https://www.ess.uci.edu/~california/. This site is one of four Southern Sierra Critical Zone Observatory flux towers operated along an elevation gradient (the sites are US-CZ1, US-CZ2, US-CZ3 and US-CZ4). This site is a lodgepole pine subalpine woodland with no recent disturbance.
Adapting GNU random forest program for Unix and Windows
NASA Astrophysics Data System (ADS)
Jirina, Marcel; Krayem, M. Said; Jirina, Marcel, Jr.
2013-10-01
Random Forest is a well-known method, and also a program, for data clustering and classification. Unfortunately, the original Random Forest program, written in Fortran 77, is rather difficult to use. Here we describe a new version of the program in Fortran 95: it needs to be compiled only once, and information for different tasks is passed via command-line arguments. The program was tested with 24 data sets from the UCI Machine Learning Repository, and the results are available on the net.
Intraoperative Cerenkov Imaging for Guiding Breast Cancer Surgery and Assessing Tumor Margins
2012-12-01
Radiation Oncology. I regularly meet with both my mentor and co-mentor to discuss research progress. Over the summer, I have also taught a class on...detection with a scintillator. Overall, due to the positron range, direct beta detection with a scintillator was limited to sources of radiation less than...luminescence was detected up to a depth of 5 mm (in tissue-mimicking material, given 100 µCi of activity). We found that one of the advantages of the
2012-02-29
surface and Swiss roll) and real-world data sets (the UCI Machine Learning Repository [12] and the USPS handwritten digit data). In our experiments, we use...less than µn (say µ = 0.8), we can first use a screening technique to select µn candidate nodes, and then apply BIPS on them for further selection and...identified from node j to node i. So we can say the probability for the existence of this connection is approximately 82%. Given the probability matrix
Implementation and Analysis of a Threat Model for IPv6 Host Autoconfiguration
2006-09-01
Collision Generator", two Denial of Service attacks. The software was developed in NetBeans IDE 5.0, and the comments were converted to Javadoc with the...appropriate NetBeans function. A. ICMPv6 SUPPORT FOR JPCAP: As the attack uses ICMPv6 messages, a means must be provided to generate these messages...ICMP packet. Developed in NetBeans IDE 5.0; makes use of the Jpcap 0.5.1 library (http://netresearch.ics.uci.edu/kfujii/jpcap/doc
The wisdom of the commons: ensemble tree classifiers for prostate cancer prognosis
Koziol, James A.; Feng, Anne C.; Jia, Zhenyu; Wang, Yipeng; Goodison, Seven; McClelland, Michael; Mercola, Dan
2009-01-01
Motivation: Classification and regression trees have long been used for cancer diagnosis and prognosis. Nevertheless, instability and variable selection bias, as well as overfitting, are well-known problems of tree-based methods. In this article, we investigate whether ensemble tree classifiers can ameliorate these difficulties, using data from two recent studies of radical prostatectomy in prostate cancer. Results: Using time to progression following prostatectomy as the relevant clinical endpoint, we found that ensemble tree classifiers robustly and reproducibly identified three subgroups of patients in the two clinical datasets: non-progressors, early progressors and late progressors. Moreover, the consensus classifications were independent predictors of time to progression compared to known clinical prognostic factors. Contact: dmercola@uci.edu. PMID:18628288
The Structure and Dynamics of the Solar Corona and Inner Heliosphere
NASA Technical Reports Server (NTRS)
Mikic, Zoran
2002-01-01
This report covers technical progress during the second quarter of the first year of the NASA Sun-Earth Connections Theory Program (SECTP) contract 'The Structure and Dynamics of the Solar Corona and Inner Heliosphere,' NAS5-99188, between NASA and Science Applications International Corporation, and covers the period November 16, 1999 to February 15, 2000. Under this contract, SAIC and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD (magnetohydrodynamic) model. The topics studied include: the effect of emerging flux on the stability of helmet streamers, coronal loops and streamers, the solar magnetic field, the solar wind, and open magnetic field lines.
Bichutskiy, Vadim Y.; Colman, Richard; Brachmann, Rainer K.; Lathrop, Richard H.
2006-01-01
Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.) PMID:19458771
Qualitative Video Analysis of Track-Cycling Team Pursuit in World-Class Athletes.
Sigrist, Samuel; Maier, Thomas; Faiss, Raphael
2017-11-01
Track-cycling team pursuit (TP) is a highly technical effort involving 4 athletes completing 4 km from a standing start, often in less than 240 s. Transitions between athletes leading the team are obviously of utmost importance. Our aim was to perform qualitative video analyses of transitions of world-class athletes in TP competitions. Videos captured at 100 Hz were recorded for 77 races (including 96 different athletes) in 5 international track-cycling competitions (eg, UCI World Cups and World Championships) and analyzed for the 12 best teams in the UCI Track Cycling TP Olympic ranking. During TP, 1013 transitions were evaluated individually to extract quantitative variables (eg, average lead time, transition number, length, duration, height in the curve) and qualitative scores (quality of the transition start, quality of the return at the back of the team, and distance between the third and the returning rider). Correlation coefficients between the extracted variables and end time were determined to assess the relationships between variables and the relevance of the video analyses. Overall quality of transitions and end time were significantly correlated (r = .35, P = .002). Similarly, transition distance (r = .26, P = .02) and duration (r = .35, P = .002) were positively correlated with end time. Conversely, no relationship was observed between transition number, average lead time, or height reached in the curve and end time. Video analysis of TP races highlights the importance of quality transitions between riders, with preferably swift and short relays rather than longer lead times for faster race times.
McWethy, D.B.; Austin, J.E.
2009-01-01
Little information exists on breeding Greater Sandhill Cranes (Grus canadensis tabida) in riparian wetlands of the Intermountain West. We examined the nesting ecology of Sandhill Cranes associated with riparian and palustrine wetlands in the Henry's Fork Watershed in eastern Idaho in 2003. We located 36 active crane nests, 19 in riparian wetlands and 17 in palustrine wetlands. Nesting sites were dominated by rushes (Juncus spp.), sedges (Carex spp.), Broad-leaved Cattail (Typha latifolia) and willow (Salix spp.), and adjacent foraging areas were primarily composed of sagebrush (Artemisia spp.), cinquefoil (Potentilla spp.), Rabbitbrush (Ericameria bloomeri), bunch grasses, upland forbs, Quaking Aspen (Populus tremuloides) and cottonwood (Populus spp.). Mean water depth surrounding nests was 23 cm (SD = 22). A majority of nests (61%) were surrounded by vegetation between 30 and 60 cm in height, and 23% by vegetation over 60 cm. We were able to determine the fate of 29 nests, of which 20 were successful (69%). Daily nest survival was 0.986 (95% LCI 0.963, UCI 0.995), equivalent to a Mayfield nest success of 0.654 (95% LCI 0.324, UCI 0.853). Model selection favored models with the covariates vegetation type, vegetation height, and water depth. Nest survival increased with increasing water depth surrounding nest sites. Mean water depth was greater around successful nests (30 cm, SD = 21) than unsuccessful nests (15 cm, SD = 22). Further research is needed to evaluate the relative contribution of cranes nesting in palustrine and riparian wetlands distributed widely across the Intermountain West.
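As a quick arithmetic check on the Mayfield figure, daily survival compounds over the exposure period; assuming a roughly 30-day exposure period (our assumption, as the excerpt does not state it), the reported value is reproduced:

```python
# Mayfield nest success = (daily survival) ** (exposure days)
daily_survival = 0.986
exposure_days = 30  # assumed; not stated in the excerpt
print(daily_survival ** exposure_days)  # ~0.655, close to the reported 0.654
```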
Implementation of the biological passport: the experience of the International Cycling Union.
Zorzoli, Mario; Rossi, Francesca
2010-01-01
The concept of the biological passport is to evaluate, on an individual and longitudinal basis, the effects of doping substances and prohibited methods (blood doping and gene doping) on the body. Indirect biological markers can be measured and used to establish an individual's biological profile; when variations in an athlete's profile are found to be incompatible with physiological or medical conditions, a disciplinary procedure may be launched on the presumption that a prohibited substance or method has been used. As such, an athlete with a biological passport is his or her own reference. The International Cycling Union (UCI) launched the biological passport programme in January 2008 in cooperation with the World Anti-Doping Agency (WADA). The UCI programme includes more than 850 athletes. These athletes are subject to urinary and blood anti-doping tests both in- and out-of-competition several times a year. Almost 20 000 samples were collected in 2008 and 2009. In this article, the real-time process from sample collection to the opening of a disciplinary procedure is described. The establishment of this large-scale programme is discussed; the modalities which have to be applied and the difficulties encountered are presented. As for the results, some examples of normal and abnormal profiles are illustrated, and indirect deterrent advantages of the programme are shown. Suggestions to improve the efficacy of the fight against doping through the implementation of the biological passport are discussed. Copyright © 2010 John Wiley & Sons, Ltd.
Communicating Flood Risk with Street-Level Data
NASA Astrophysics Data System (ADS)
Sanders, B. F.; Matthew, R.; Houston, D.; Cheung, W. H.; Karlin, B.; Schubert, J.; Gallien, T.; Luke, A.; Contreras, S.; Goodrich, K.; Feldman, D.; Basolo, V.; Serrano, K.; Reyes, A.
2015-12-01
Coastal communities around the world face significant and growing flood risks that require an accelerating adaptation response, and fine-resolution urban flood models could serve a pivotal role in enabling communities to meet this need. Such models depict impacts at the level of individual buildings and land parcels or "street level" - the same spatial scale at which individuals are best able to process flood risk information - constituting a powerful tool to help communities build better understandings of flood vulnerabilities and identify cost-effective interventions. To measure understanding of flood risk within a community and the potential impact of street-level models, we carried out a household survey of flood risk awareness in Newport Beach, California, a highly urbanized coastal lowland that presently experiences nuisance flooding from high tides, waves and rainfall and is expected to experience a significant increase in flood frequency and intensity with climate change. Interviews were completed with the aid of a wireless-enabled tablet device that respondents could use to identify areas they understood to be at risk of flooding and to view either a Federal Emergency Management Agency (FEMA) flood map or a more detailed map prepared with a hydrodynamic urban coastal flood model (UCI map) built with grid cells as fine as 3 m resolution and validated with historical flood data. Results indicate differences in the effectiveness of the UCI and FEMA maps at communicating the spatial distribution of flood risk, gender differences in how the maps affect flood understanding, and spatial biases in the perception of flood vulnerabilities.
A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.
Chang, Ivan; Baldi, Pierre
2013-05-15
Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models are limited to the steady-state domain, lack empirical validation, or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from http://www.igb.uci.edu/tools/sb/metabolic-modeling. Contact: pfbaldi@ics.uci.edu. Supplementary data are available at Bioinformatics online.
SpArcFiRe: Scalable automated detection of spiral galaxy arm segments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Darren R.; Hayes, Wayne B., E-mail: drdavis@uci.edu, E-mail: whayes@uci.edu
Given an approximately centered image of a spiral galaxy, we describe an entirely automated method that finds, centers, and sizes the galaxy (possibly masking nearby stars and other objects if necessary in order to isolate the galaxy itself) and then automatically extracts structural information about the spiral arms. For each arm segment found, we list the pixels in that segment, allowing image analysis on a per-arm-segment basis. We also perform a least-squares fit of a logarithmic spiral arc to the pixels in that segment, giving per-arc parameters such as the pitch angle, arm segment length, location, etc. The algorithm takes about one minute per galaxy and can easily be scaled using parallelism. We have run it on all ∼644,000 Sloan objects that are larger than 40 pixels across and classified as 'galaxies'. We find a very good correlation between our quantitative description of a spiral structure and the qualitative description provided by Galaxy Zoo humans. Our objective, quantitative measures of structure demonstrate the difficulty in defining exactly what constitutes a spiral 'arm', leading us to prefer the term 'arm segment'. We find that pitch angle often varies significantly segment-to-segment in a single spiral galaxy, making it difficult to define the pitch angle for a single galaxy. We demonstrate how our new database of arm segments can be queried to find galaxies satisfying specific quantitative visual criteria. For example, even though our code does not explicitly find rings, a good surrogate is to look for galaxies having one long, low-pitch-angle arm, which is how our code views ring galaxies. SpArcFiRe is available at http://sparcfire.ics.uci.edu.
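As an illustration of the per-segment fit described above (a sketch, not SpArcFiRe's code): a logarithmic spiral r = r0·exp(b·θ) is linear in log r, so the fit reduces to least squares in log space, and the pitch angle follows as arctan(b). The synthetic arm-segment points below are assumptions.

```python
import numpy as np

# Hypothetical arm-segment pixels in polar form, with a little noise.
theta = np.linspace(0.0, 2.5, 60)
noise = 1 + 0.01 * np.random.default_rng(1).normal(size=60)
r = 12.0 * np.exp(0.28 * theta) * noise

# Least-squares fit of log r = b*theta + log r0.
b, log_r0 = np.polyfit(theta, np.log(r), 1)
pitch_deg = np.degrees(np.arctan(b))
print(f"r0 = {np.exp(log_r0):.2f}, b = {b:.3f}, pitch angle = {pitch_deg:.1f} deg")
```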
The Structure and Dynamics of the Solar Corona
NASA Technical Reports Server (NTRS)
Mikic, Zoran
1998-01-01
This report covers technical progress during the first year of the NASA Space Physics Theory contract between NASA and Science Applications International Corporation. Under this contract SAIC, the University of California, Irvine (UCI), and the Jet Propulsion Laboratory (JPL), have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model. During the period covered by this report we have published 26 articles in the scientific literature. These publications are listed in Section 4 of this report. In the Appendix we have attached reprints of selected articles.
NASA Technical Reports Server (NTRS)
Mikic, Zoran; Grebowsky, Joseph M. (Technical Monitor)
2001-01-01
This report covers technical progress during the fourth quarter of the second year of the NASA Sun-Earth Connections Theory Program (SECTP) contract 'The Structure and Dynamics of the Solar Corona and Inner Heliosphere,' NAS5-99188, between NASA and Science Applications International Corporation, and covers the period May 16, 2001 to August 15, 2001. Under this contract, SAIC and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model.
The Structure and Dynamics of the Solar Corona and Inner Heliosphere
NASA Technical Reports Server (NTRS)
Mikic, Zoran; Grebowsky, J. (Technical Monitor)
2002-01-01
This report covers technical progress during the fourth quarter of the second year of NASA Sun-Earth Connections Theory Program (SECTP) contract "The Structure and Dynamics of the Solar Corona and Inner Heliosphere," NAS5-99188, between NASA and Science Applications International Corporation (SAIC), and covers the period May 16, 2001 to August 15, 2001. Under this contract SAIC and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD (magnetohydrodynamic) model.
Southern California Drosophila Conference: Irvine, CA - September 11, 2009.
Rattner, Barbara P
2009-01-01
As has become tradition, this year's Southern California Drosophila Conference was hosted by the Developmental Biology Center (http://dbc.bio.uci.edu/) at the University of California, Irvine. On September 11, 2009, speakers from institutions in Los Angeles, Orange, Riverside, and San Diego Counties presented their latest results in an informal and friendly atmosphere and had the opportunity to learn about new resources and facilities, establish collaborations, and network about job openings and training opportunities. The talks presented covered the use of flies to study a variety of topics including the mechanisms of action of human pathogens, human diseases, and aging and lifespan extension. In addition, attendees heard about aspects of Drosophila neuronal development, olfactory behavior, transcriptional regulation, and signal transduction. Some of the highlights of the meeting are summarized in this brief report.
Random Bits Forest: a Strong Classifier/Regressor for Big Data
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li
2016-07-01
Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed well in testing on an independent data set, a real psoriasis genome-wide association study (GWAS).
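A loose sketch of the architecture described above, under simplifying assumptions (a few dozen random networks rather than ~10,000, and no boosting-based selection): small random 3-layer networks generate features that are handed to a random forest.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def make_random_nets(d, n_nets=50, width=8, seed=0):
    """Draw fixed random weights for n_nets tiny 3-layer networks."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(size=(d, width)), rng.normal(size=width)) for _ in range(n_nets)]

def apply_nets(X, nets):
    """Each network maps the input to one scalar feature."""
    return np.column_stack([np.tanh(np.tanh(X @ W1) @ W2) for W1, W2 in nets])

nets = make_random_nets(X.shape[1])  # same nets transform train and test
clf = RandomForestClassifier(random_state=0).fit(apply_nets(Xtr, nets), ytr)
print("accuracy:", clf.score(apply_nets(Xte, nets), yte))
```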
NASA Astrophysics Data System (ADS)
Schechinger, Linda Sue
I. To investigate the delivery of nucleotide-based drugs, we are studying molecular recognition of nucleotide derivatives in environments that are similar to cell membranes. The Nowick group previously discovered that membrane-like tetradecyltrimethylammonium bromide (TTAB) surfactant micelles facilitate molecular recognition of adenosine monophosphate (AMP). The micelles bind nucleotides by means of electrostatic interactions and hydrogen bonding. We observed binding by following ¹H NMR chemical shift changes of unique hexylthymine protons upon addition of AMP. Cationic micelles are required for binding: in surfactant-free or sodium dodecylsulfate solutions, no hydrogen bonding is observed. These observations suggest that the cationic surfactant headgroups bind the nucleotide phosphate group, while the intramicellar base binds the nucleotide base. The micellar system was optimized to enhance binding and selectivity for adenosine nucleotides. The selectivity for adenosine and for the number of phosphate groups attached to the adenosine were both investigated. Addition of cytidine, guanosine, or uridine monophosphates results in no significant downfield shifting of the NH resonance. Selectivity for the phosphate is limited, since adenosine mono-, di-, and triphosphates all have similar binding constants. We successfully achieved molecular recognition of adenosine nucleotides in micellar environments, with a significant difference in the binding interactions between the adenosine nucleotides and the three other natural nucleotides. II. The UCI Chemistry Outreach Program (UCICOP) addresses the declining interest in science among the nation's youth. UCICOP brings fun and exciting chemistry experiments to local high schools, to remind students that science is fun and has many practical uses. Volunteer students and alumni of UCI perform the demonstrations using scripts and material provided by UCICOP. The preparation of scripts and materials is done by two coordinators, who organize the program and provide continuity. The success of UCICOP can be measured by the high praise and gratitude expressed by the teachers, students and volunteers.
A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set
Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong
2012-01-01
Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined in an experimental study using three MCDM methods, the well-known clustering algorithm k-means, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
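A minimal sketch of the idea, with candidate cluster numbers as alternatives and validity indices as criteria; a simple Borda-count rank aggregation stands in for the MCDM methods used in the paper, and the blob data are an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = list(range(2, 9))                 # alternatives: candidate numbers of clusters
scores = []
for k in ks:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores.append([silhouette_score(X, labels),
                   calinski_harabasz_score(X, labels),
                   -davies_bouldin_score(X, labels)])  # negate: lower DB is better

S = np.array(scores)
ranks = S.argsort(axis=0).argsort(axis=0)  # per-criterion rank (higher is better)
borda = ranks.sum(axis=1)                  # aggregate across criteria
print("estimated number of clusters:", ks[int(borda.argmax())])
```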
Strause, Karl D; Zwiernik, Matthew J; Im, Sook Hyeon; Bradley, Patrick W; Moseley, Pamela P; Kay, Denise P; Park, Cyrus S; Jones, Paul D; Blankenship, Alan L; Newsted, John L; Giesy, John P
2007-07-01
The great horned owl (GHO; Bubo virginianus) was used in a multiple-lines-of-evidence study of polychlorinated biphenyls (PCBs) and p,p'-dichlorodiphenyltrichloroethane (DDT) exposures at the Kalamazoo River Superfund Site (KRSS), Kalamazoo, Michigan, USA. The study examined risks from total PCBs, including 2,3,7,8-tetrachlorodibenzo-p-dioxin equivalents (TEQ(WHO-Avian), based on World Health Organization [WHO] avian toxicity equivalency factors [TEFs]), and total DDTs (the sum of DDT, dichlorodiphenyldichloroethylene [DDE], and dichlorodiphenyldichloroethane [DDD]; ΣDDT) by measuring concentrations in eggs and nestling blood plasma in two regions of the KRSS (upper, lower) and an upstream reference area (RA). An ecological risk assessment compared concentrations of the contaminants of concern (COCs) in eggs or plasma to toxicity reference values. Productivity and relative abundance measures for KRSS GHOs were compared with other GHO populations. Eggshell thickness was measured to assess effects of p,p'-DDE. The concentrations of PCBs in eggs were as great as 4.7 × 10² and 4.0 × 10⁴ ng PCB/g, wet weight, at the RA and combined KRSS sites, respectively. Egg TEQ(WHO-Avian), calculated from aryl hydrocarbon receptor-active PCB congeners and WHO TEFs, ranged up to 8.0 and 1.9 × 10² pg TEQ(WHO-Avian)/g (wet wt) at the RA and combined KRSS, respectively. Egg ΣDDT concentrations were as great as 4.2 × 10² and 5.0 × 10³ ng ΣDDT/g (wet wt) at the RA and combined KRSS, respectively. Hazard quotients (HQs) based on the upper 95% confidence interval (UCI) of the geometric mean and the lowest observable adverse effect concentration (LOAEC) for COCs in eggs were ≤ 1.0 for all sites. HQ values based on the no observable adverse effect concentration (NOAEC) and the 95% UCI in eggs were ≤ 1.0, except at the lower KRSS (PCB HQ = 3.1; TEQ(WHO-Avian) HQ = 1.3). Productivity and relative abundance measures indicated no population-level effects in the upper KRSS.
AmeriFlux US-SCd Southern California Climate Gradient - Sonoran Desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Mike
This is the AmeriFlux version of the carbon flux data for the site US-SCd Southern California Climate Gradient - Sonoran Desert. Site Description - Half-hourly data are available at https://www.ess.uci.edu/~california/. This site is one of six Southern California Climate Gradient flux towers operated along an elevation gradient (the sites are US-SCg, US-SCs, US-SCf, US-SCw, US-SCc, US-SCd). This site is a low desert site in Southern California's rain shadow; the climate is extremely dry and hot. The site has experienced repeated droughts, with negligible rainfall during several years of the record.
Nonlinear electromagnetic gyrokinetic particle simulations with the electron hybrid model
NASA Astrophysics Data System (ADS)
Nishimura, Y.; Lin, Z.; Chen, L.; Hahm, T.; Wang, W.; Lee, W.
2006-10-01
The electromagnetic model with fluid electrons has been successfully implemented in the global gyrokinetic code GTC. In the ideal MHD limit, shear Alfvén wave oscillation and continuum damping are demonstrated. Nonlinear electromagnetic simulation is further pursued in the presence of finite ηi, and turbulent transport in the AITG-unstable β regime is studied. This work is supported by Department of Energy (DOE) Grant DE-FG02-03ER54724, Cooperative Agreement No. DE-FC02-04ER54796 (UCI), DOE Contract No. DE-AC02-76CH03073 (PPPL), and in part by the SciDAC Center for Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas. Z. Lin, et al., Science 281, 1835 (1998). F. Zonca and L. Chen, Plasma Phys. Controlled Fusion 30, 2240 (1998); G. Zhao and L. Chen, Phys. Plasmas 9, 861 (2002).
Morphological transformation during activation and reaction of an iron Fischer-Tropsch catalyst
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, N.B.; Kohler, S.; Harrington, M.
1995-12-31
The purpose of this project is to support the development of slurry-phase bubble column processes being studied at the La Porte Alternative Fuel Development Unit. This paper describes aspects of Sandia's recent work on the advancement and understanding of the iron catalyst used in the slurry-phase process. A number of techniques were used to understand the chemical and physical effects of pretreatment and reaction on the attrition and carbon deposition characteristics of iron catalysts. Unless otherwise stated, the data discussed were derived from experiments carried out on the catalyst chosen for the summer 1994 Fischer-Tropsch run at La Porte, UCI 1185-78-370 (an L 3950 type), which is 88% Fe2O3, 11% CuO, and 0.052% K2O.
Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo S.; Yang, Y.; Carbonell, J.
2011-10-24
Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later, as and if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression against classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well (better than classifiers) on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
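A minimal sketch of the classifier-cascade idea, under assumptions (synthetic four-level priorities, default SVM settings): one binary SVM per threshold "priority > t?", with the ordinal prediction recovered by summing positive votes.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, _ = make_classification(n_samples=1200, n_features=20, random_state=0)
# Synthetic 4-level priority derived from two features (an assumption).
y = np.digitize(X[:, 0] + X[:, 1], [-1.5, 0.0, 1.5])

Xtr, Xte, ytr, yte = X[:900], X[900:], y[:900], y[900:]

# One binary classifier per ordinal threshold: "is priority > t?"
models = [SVC().fit(Xtr, (ytr > t).astype(int)) for t in range(3)]

# Summing the positive votes yields the predicted ordinal level (0..3).
pred = sum(m.predict(Xte) for m in models)
print("exact-level accuracy:", np.mean(pred == yte))
```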
Emerson, Jane F; Emerson, Scott S
2005-01-01
A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.
Valentine, Heather; Chen, Yi; Guo, Hongzhi; McCormick, Jocelyn; Wu, Yong; Sezen, Sena F.; Hoke, Ahmet; Burnett, Arthur L.; Steiner, Joseph P.
2009-01-01
Objectives We investigated the effects of the orally bioavailable non-immunosuppressive immunophilin ligand GPI 1046 (GPI) on erectile function and cavernous nerve (CN) histology following unilateral or bilateral crush injury (UCI, BCI, respectively) of the CNs. Methods Adult male Sprague-Dawley rats were administered GPI 15 mg/kg intraperitoneally (ip) or 30 mg/kg orally (po), FK506 1 mg/kg, ip, or vehicle controls for each route of administration just prior to UCI or BCI and daily up to 7 d following injury. At day 1 or 7 of treatment, erectile function induced by CN electrical stimulation was measured, and electron microscopic analysis of the injured CN was performed. Results Intraperitoneal administration of GPI to rats with injured CN protected erectile function, in a fashion similar to the prototypic immunophilin ligand FK506, compared with vehicle-treated animals (93% ± 9% vs. 70% ± 5% vs. 45% ± 1%, p < 0.01, respectively). Oral administration of GPI elicited the same level of significant protection from CN injury. GPI administered PO at 30 mg/kg/d, dosing either once daily or four times daily with 7.5 mg/kg, provided nearly complete protection of erectile function. In a more severe BCI model, PO administration of GPI maintained erectile function at 24 h after CN injury. Ultrastructural analysis of injured CNs indicated that GPI administered at the time of CN injury prevents degeneration of about 83% of the unmyelinated axons at 7 d after CN injury. Conclusions The orally administered immunophilin ligand GPI neuroprotects CNs and maintains erectile function in rats under various conditions of CN crush injury. PMID:17145129
Gene expression inference with deep learning.
Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui
2016-06-15
Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationship between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
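A small sketch of the D-GEX idea (not the released code): a multilayer network regressing target-gene expression from landmark-gene expression, compared against linear regression on a synthetic stand-in for the LINCS data; the dimensions and architecture below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: 50 "landmark" genes (LINCS uses ~1000), 20 targets
# with a nonlinear dependence plus noise.
landmarks = rng.normal(size=(3000, 50))
targets = np.tanh(landmarks @ rng.normal(size=(50, 20))) \
          + 0.1 * rng.normal(size=(3000, 20))

Xtr, Xte, ytr, yte = train_test_split(landmarks, targets, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                   random_state=0).fit(Xtr, ytr)
lr = LinearRegression().fit(Xtr, ytr)

mae = lambda p: np.mean(np.abs(p - yte))  # mean absolute error, as in the paper
print(f"MAE deep: {mae(mlp.predict(Xte)):.3f}   linear: {mae(lr.predict(Xte)):.3f}")
```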
Fast and Accurate Support Vector Machines on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry
Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary --- also known as a hyperplane --- which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, which range from early (aggressive) to late (conservative) elimination of the samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively --- potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm --- the de facto sequential SVM software --- which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.
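The core idea --- most samples never become support vectors, so they can be discarded before the expensive final training --- can be illustrated on a single node with scikit-learn. The number of rounds and the margin threshold below are illustrative choices, not the paper's MPI heuristics:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def shrink_and_train(X, y, rounds=3, keep_margin=1.5):
    """Repeatedly fit on the surviving subset and drop samples lying far
    outside the margin band; such samples cannot define the boundary unless
    it moves substantially. Aggressive thresholds risk dropping boundary
    samples -- the pre-emptive-shrinking hazard the paper guards against."""
    idx = np.arange(len(X))
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    for _ in range(rounds):
        clf.fit(X[idx], y[idx])
        f = clf.decision_function(X[idx])
        # signed margin: positive and large for confidently correct samples
        signed = f * np.where(y[idx] == clf.classes_[1], 1.0, -1.0)
        idx = idx[signed <= keep_margin]
    clf.fit(X[idx], y[idx])
    return clf, idx

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
clf, kept = shrink_and_train(X, y)
print(f"final model trained on {len(kept)} of {len(X)} samples")
```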
Instances selection algorithm by ensemble margin
NASA Astrophysics Data System (ADS)
Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine
2018-05-01
The main limit of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without degrading performance. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning Repository. Pixel-based image segmentation is a field where the storage requirement and computational cost of the applied model become high. To address these limitations we conduct a study based on the application of EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC; nucleus and cytoplasm) in cytological images.
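A rough sketch of the underlying mechanism, assuming binary labels: margins are computed from a bagged ensemble's votes, and instances whose margin suggests label noise (strongly negative) or redundancy (near one) are discarded. The thresholds are illustrative, not the EMIS selection rule from the paper:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def ensemble_margin_select(X, y, lo=-0.2, hi=0.9):
    ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X, y)
    votes = np.stack([est.predict(X) for est in ens.estimators_])  # (T, n)
    agree = (votes == y).mean(axis=0)      # fraction voting the true label
    margin = 2.0 * agree - 1.0             # in [-1, 1] for binary problems
    keep = (margin > lo) & (margin < hi)   # drop likely noise and redundancy
    return X[keep], y[keep], margin

X, y = load_breast_cancer(return_X_y=True)
Xs, ys, m = ensemble_margin_select(X, y)
print(len(Xs), "of", len(X), "instances kept")
```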
Reduction of peak energy demand based on smart appliances energy consumption adjustment
NASA Astrophysics Data System (ADS)
Powroźnik, P.; Szulim, R.
2017-08-01
This paper presents the concept of an elastic model of energy management for smart grids and micro smart grids. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to balance the demand and supply of power for the given micro smart grid at a given moment. Simulation studies were carried out on real household data available in the UCI Machine Learning Repository. The results may have practical application in smart grid networks where smart appliance energy consumption must be adjusted. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might apply particularly to large smart grids.
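The balancing step can be pictured with a toy curtailment rule: when a slot's aggregate demand exceeds the available supply, elastic appliances are scaled down uniformly, but never below a comfort floor. The appliances, floors, and supply cap below are invented for illustration, not taken from the paper:

```python
def balance_slot(demands, min_fraction, supply_cap):
    """demands: appliance -> requested power (kW);
    min_fraction: appliance -> lowest acceptable fraction of the request.
    If the comfort floors bind, the residual excess would have to be
    deferred to later slots (not modeled here)."""
    total = sum(demands.values())
    if total <= supply_cap:
        return dict(demands)      # supply covers demand; no curtailment
    scale = supply_cap / total    # uniform relative curtailment
    return {a: max(p * scale, p * min_fraction[a]) for a, p in demands.items()}

slot = {"hvac": 3.0, "water_heater": 2.0, "ev_charger": 7.0}
floors = {"hvac": 0.6, "water_heater": 0.5, "ev_charger": 0.2}
print(balance_slot(slot, floors, supply_cap=8.0))
```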
Prediction of the Swamping Tendencies of Recreational Boats.
1982-01-01
Possibly, structures such as railings or hand-holds which make human access to the boat ends difficult would have to be required. This step may...
AmeriFlux US-CZ2 Sierra Critical Zone, Sierra Transect, Ponderosa Pine Forest, Soaproot Saddle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Michael
This is the AmeriFlux version of the carbon flux data for the site US-CZ2 Sierra Critical Zone, Sierra Transect, Ponderosa Pine Forest, Soaproot Saddle. Site Description - Half hourly data are available at https://www.ess.uci.edu/~california/. This site is one of four Southern Sierra Critical Zone Observatory flux towers operated along an elevation gradient (sites are US-CZ1, US-CZ2, US-CZ3 and US-CZ4). This site is an oak/pine forest, with occasional thinning and wildfire, a prescribed understory burn ~2012, and severe drought and ~80% canopy mortality in 2011-15.
AmeriFlux US-CZ3 Sierra Critical Zone, Sierra Transect, Sierran Mixed Conifer, P301
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Michael
This is the AmeriFlux version of the carbon flux data for the site US-CZ3 Sierra Critical Zone, Sierra Transect, Sierran Mixed Conifer, P301. Site Description - Half hourly data are available at https://www.ess.uci.edu/~california/. This site is one of four Southern Sierra Critical Zone Observatory flux towers operated along an elevation gradient (sites are US-CZ1, US-CZ2, US-CZ3 and US-CZ4). This site is a pine/fir forest; it historically experienced logging and wildfire, was thinned in ~2012, and experienced severe drought and ~20% canopy mortality in 2011-15.
AmeriFlux US-SCf Southern California Climate Gradient - Oak/Pine Forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Mike
This is the AmeriFlux version of the carbon flux data for the site US-SCf Southern California Climate Gradient - Oak/Pine Forest. Site Description - Half hourly data are available at https://www.ess.uci.edu/~california/. This site is one of six Southern California Climate Gradient flux towers operated along an elevation gradient (sites are US-SCg, US-SCs, US-SCf, US-SCw, US-SCc, US-SCd). This site is a mixed oak/pine forest. The site experiences episodic severe drought and mortality, and has also experienced occasional logging and wildfire. Drought and mortality were especially severe in the early 2000s.
Fast Reduction Method in Dominance-Based Information Systems
NASA Astrophysics Data System (ADS)
Li, Yan; Zhou, Qinghua; Wen, Yongchuan
2018-01-01
In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm clearly improves on the efficiency of the traditional method, especially for large-scale data.
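For reference, the object being computed is the dominating set D+(x) = {y : y is at least as good as x on every criterion}. The naive O(n^2 m) definition can be written directly; the paper's contribution is a faster computation, which is not reproduced here:

```python
import numpy as np

def dominating_sets(X):
    """X: (n, m) array of preference-ordered criteria (larger = better).
    Returns a boolean matrix D with D[i, j] = True iff sample j dominates
    (is at least as good as) sample i on all m criteria."""
    return (X[None, :, :] >= X[:, None, :]).all(axis=2)

X = np.array([[3, 2], [1, 1], [3, 3]])
D = dominating_sets(X)
print([np.flatnonzero(row).tolist() for row in D])  # [[0, 2], [0, 1, 2], [2]]
```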
Prime Contract Awards Over $25,000 by Major System, Contractor and State. Part 1. (AAA-BSG)
1989-01-01
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
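featsel itself is implemented in C++ with Perl tooling; the formulation it benchmarks --- walking the Boolean lattice of feature subsets and scoring each node with a cost function --- can be sketched in a few lines of Python. Here the cost is cross-validated error and the search is truncated to small subsets, both illustrative choices:

```python
from itertools import combinations

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
features = range(X.shape[1])

def cost(subset):
    # cost of a lattice node = cross-validated error on that feature subset
    acc = cross_val_score(KNeighborsClassifier(), X[:, list(subset)], y, cv=5)
    return 1.0 - acc.mean()

# explore lattice levels 1..3 only; the full lattice has 2^m nodes
candidates = (s for k in range(1, 4) for s in combinations(features, k))
best = min(candidates, key=cost)
print("best subset:", best, "cost:", round(cost(best), 4))
```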
Solid Waste Management in Marine Amphibious Force (MAF) Operations: Analysis and Alternatives.
1980-09-01
Experience during the Southeast Asia conflict and elsewhere shows that MAF solid waste management requires a significant deployment of manpower and equipment... MAF varies, by necessity, with the location or type of military action. Based in part on recent experience gained in the Southeast Asia conflict, a...
NASA Astrophysics Data System (ADS)
Martin, P.; Ehlmann, B. L.; Blaney, D. L.; Bhartia, R.; Allwood, A.
2015-12-01
Using the recently developed Ultra Compact Imaging Spectrometer (UCIS) (0.4-2.5 μm) to generate outcrop-scale infrared images and compositional maps, a Mars-relevant field site near China Ranch in the Mojave Desert has been surveyed and sampled to analyze the synergies between instruments in the Mars 2020 rover instrument suite. The site broadly comprises large lacustrine gypsum beds with fine-grained gypsiferous mudstones and interbedded volcanic ashes deposited in the Pleistocene, with a carbonate unit atop the outcrop. Alteration products such as clays and iron oxides are pervasive throughout the sequence. Mineralogical mapping of the outcrop was performed using UCIS. As the 2020 rover will have an onboard multispectral camera and IR point spectrometer, Mastcam-Z and SuperCam, this process of spectral analysis leading to the selection of sites for more detailed investigation is similar to the process by which samples will be selected for increased scrutiny during the 2020 mission. The infrared image is resampled (spatially and spectrally) to the resolutions of Mastcam-Z and SuperCam to simulate data from the Mars 2020 rover. Hand samples were gathered in the field (guided by the prior infrared compositional mapping), capturing samples of spectral and mineralogical variance in the scene. After collection, a limited number of specimens were chosen for more detailed analysis. The hand samples are currently being analyzed using JPL prototypes of the Mars 2020 arm-mounted contact instruments, specifically PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning Habitable Environments with Raman & Luminescence). The geologic story as told by the Mars 2020 instrument data will be analyzed and compared to the full suite of data collected by hyperspectral imaging and terrestrial techniques (e.g. XRD) applied to the collected hand samples. This work will shed light on the potential uses and synergies of the Mars 2020 instrument suite, especially with regard to spectral (i.e., remote) recognition of important and interesting samples on which to do contact science.
An automated diagnosis system of liver disease using artificial immune and genetic algorithms.
Liang, Chunlin; Peng, Lingxi
2013-04-01
The rising cost of health care is one of the world's most important problems, and disease prediction is a vibrant research area. Researchers have approached this problem using various techniques such as support vector machines, artificial neural networks, etc. This study exploits the immune system's characteristics of learning and memory to solve the problem of liver disease diagnosis. The proposed system combines two methods, an artificial immune system and a genetic algorithm, to diagnose liver disease. The system architecture is based on the artificial immune system, and the learning procedure adopts a genetic algorithm to steer the evolution of the antibody population. The experiments use two benchmark datasets acquired from the well-known UCI machine learning repository. The obtained diagnosis accuracies are very promising compared with other diagnosis systems in the literature. These results suggest that this system may be a useful automatic diagnosis tool for liver disease.
AmeriFlux US-SCs Southern California Climate Gradient - Coastal Sage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Mike
This is the AmeriFlux version of the carbon flux data for the site US-SCs Southern California Climate Gradient - Coastal Sage. Site Description - Half hourly data are available at https://www.ess.uci.edu/~california/. This site is one of six Southern California Climate Gradient flux towers operated along an elevation gradient (sites are US-SCg, US-SCs, US-SCf, US-SCw, US-SCc, US-SCd). This site is a coastal sage shrubland. Coastal sage is a small-stature, closed-canopy vegetation dominated by drought-deciduous shrubs. The site has historically burned every 10-20 years, most recently in a wildfire in October 2007, and the tower data set includes this recovery process.
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmark datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases.
Jin, Suoqin; MacLean, Adam L; Peng, Tao; Nie, Qing
2018-02-05
Single-cell RNA-sequencing (scRNA-seq) offers unprecedented resolution for studying cellular decision-making processes. Robust inference of cell state transition paths and probabilities is an important yet challenging step in the analysis of these data. Here we present scEpath, an algorithm that calculates energy landscapes and probabilistic directed graphs in order to reconstruct developmental trajectories. We quantify the energy landscape using "single-cell energy" and distance-based measures, and find that the combination of these enables robust inference of the transition probabilities and lineage relationships between cell states. We also identify marker genes and gene expression patterns associated with cell state transitions. Our approach produces pseudotemporal orderings that are - in combination - more robust and accurate than current methods, and offers higher resolution dynamics of the cell state transitions, leading to new insight into key transition events during differentiation and development. Moreover, scEpath is robust to variation in the size of the input gene set, and is broadly unsupervised, requiring few parameters to be set by the user. Applications of scEpath led to the identification of a cell-cell communication network implicated in early human embryo development, and novel transcription factors important for myoblast differentiation. scEpath allows us to identify common and specific temporal dynamics and transcriptional factor programs along branched lineages, as well as the transition probabilities that control cell fates. A MATLAB package of scEpath is available at https://github.com/sqjin/scEpath. qnie@uci.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
Monitoring Global Precipitation through UCI CHRS's RainMapper App on Mobile Devices
NASA Astrophysics Data System (ADS)
Nguyen, P.; Huynh, P.; Braithwaite, D.; Hsu, K. L.; Sorooshian, S.
2014-12-01
The Water and Development Information for Arid Lands-a Global Network (G-WADI) Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks—Cloud Classification System (PERSIANN-CCS) GeoServer has been developed through a collaboration between the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI) and the UNESCO's International Hydrological Program (IHP). G-WADI PERSIANN-CCS GeoServer provides near real-time high resolution (0.04o, approx 4km) global (60oN - 60oS) satellite precipitation estimated by the PERSIANN-CCS algorithm developed by the scientists at CHRS. The G-WADI PERSIANN-CCS GeoServer utilizes the open-source MapServer software from the University of Minnesota to provide a user-friendly web-based mapping and visualization of satellite precipitation data. Recent efforts have been made by the scientists at CHRS to provide free on-the-go access to the PERSIANN-CCS precipitation data through an application named RainMapper for mobile devices. RainMapper provides visualization of global satellite precipitation of the most recent 3, 6, 12, 24, 48 and 72-hour periods overlaid with various basemaps. RainMapper uses the Google maps application programming interface (API) and embedded global positioning system (GPS) access to better monitor the global precipitation data on mobile devices. Functionalities such as geographical search with voice recognition technology make it easy for the user to explore near real-time precipitation at a given location. RainMapper also allows for conveniently sharing the precipitation information and visualizations with the public through social networks such as Facebook and Twitter. RainMapper is available for iOS and Android devices and can be downloaded (free) from the App Store and Google Play. The usefulness of RainMapper was demonstrated through an application in tracking the evolution of the recent Rammasun Typhoon over the Philippines in mid-July 2014.
Downing, Julia; Ddungu, Henry; Kiyange, Fatia; Batuli, Mwazi; Kafeero, James; Kebirungi, Harriet; Kiwanuka, Rose; Mugisha, Noleb; Mwebesa, Eddie; Mwesiga, Mark; Namukwaya, Elizabeth; Niyonzima, Nixon; Phipps, Warren; Orem, Jackson
2017-01-01
The Uganda Cancer Institute (UCI) and the Palliative Care Association of Uganda (PCAU) jointly hosted an international conference on cancer and palliative care in August 2017 in Kampala, Uganda. At the heart of the conference rested a common commitment to see patient care improved across Uganda and the region. The theme – United Against Cancer: Prevention to End-of-Life Care – reflected this joint vision and the drive to remember that cancer care should include prevention, early diagnosis and screening, treatment, rehabilitation and palliative care. The conference brought together 451 delegates from 17 countries. The key themes of the conference included: the importance of the World Health Assembly Resolutions on Palliative Care (2014) and cancer care (2017); the need to develop a National Cancer Control Programme; strategies for effective cancer diagnosis and treatment in low- and middle-income countries; advocacy, human rights and access to essential medicines, including access to opioids and nurse prescribing; paediatric care; leadership and commitment; collaboration; resources (financial and human), the recognition that palliative care is not limited to cancer care and the importance of learning from each other. The conference also gave the opportunity to celebrate the 50th Anniversary of the UCI, with a celebration dinner attended by the Minister of Health and the US Ambassador. Participants reported that the conference was a forum that updated them in all aspects of cancer and palliative care, which challenged their knowledge, and was enlightening in terms of current treatment options for individuals with cancer. The benefits of having a joint conference were recognised, allowing for further networking between cancer and palliative care organisations. This conference, highlighting many developments in cancer and palliative care, served as a unique opportunity to bring people together and unite them in developing cancer and palliative care. PMID:29290759
Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis
NASA Astrophysics Data System (ADS)
Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.
2018-03-01
Data mining has become a basic methodology for computational applications in medical domains. Data mining can be applied in the health field, for example to the diagnosis of breast cancer, heart disease, diabetes and others. Breast cancer is most common in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the accuracy of breast cancer diagnosis. The research uses the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The method used in this research is the C4.5 algorithm with Particle Swarm Optimization (PSO) for feature selection and for optimizing the C4.5 algorithm. Ten-fold cross-validation is used as the validation method, together with a confusion matrix. The result of this research is that particle swarm optimization increased the accuracy of the C4.5 algorithm by 0.88%.
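The wrapper can be pictured as a binary PSO whose fitness is the cross-validated accuracy of a decision tree on the selected columns. scikit-learn's DecisionTreeClassifier (CART) stands in for C4.5 here, and the swarm size and coefficients are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # a WBC-style UCI dataset
n_feat, swarm, iters = X.shape[1], 20, 30

def fitness(mask):
    if not mask.any():
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask], y, cv=10).mean()

pos = rng.random((swarm, n_feat))            # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest = pos.copy()
pfit = np.array([fitness(p > 0.5) for p in pos])
for _ in range(iters):
    g = pbest[pfit.argmax()]                 # global best particle
    vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                     + 1.5 * rng.random(pos.shape) * (g - pos))
    pos = np.clip(pos + vel, 0.0, 1.0)       # position > 0.5 means "selected"
    fit = np.array([fitness(p > 0.5) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
print("best 10-fold accuracy:", round(pfit.max(), 4))
```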
CircadiOmics: circadian omic web portal.
Ceglia, Nicholas; Liu, Yu; Chen, Siwei; Agostinelli, Forest; Eckel-Mahan, Kristin; Sassone-Corsi, Paolo; Baldi, Pierre
2018-06-15
Circadian rhythms play a fundamental role at all levels of biological organization. Understanding the mechanisms and implications of circadian oscillations continues to be the focus of intense research. However, there has been no comprehensive and integrated way of accessing and mining all circadian omic datasets. The latest release of CircadiOmics (http://circadiomics.ics.uci.edu) fills this gap by providing the most comprehensive web server for studying circadian data. The newly updated version contains 227 high-throughput omic datasets corresponding to over 74 million measurements sampled over 24 h cycles. Users can visualize and compare oscillatory trajectories across species, tissues and conditions. Periodicity statistics (e.g. period, amplitude, phase, P-value, q-value, etc.) obtained from BIO_CYCLE and other methods are provided for all samples in the repository and can easily be downloaded in the form of publication-ready figures and tables. New features and substantial improvements in performance and data volume make CircadiOmics a powerful web portal for integrated analysis of circadian omic data.
A Reduced Set of Features for Chronic Kidney Disease Prediction
Misir, Rajesh; Mitra, Malay; Samanta, Ranjit Kumar
2017-01-01
Chronic kidney disease (CKD) is one of the life-threatening diseases. Early detection and proper management are solicited for augmenting survivability. As per the UCI data set, there are 24 attributes for predicting CKD or non-CKD, of which at least 16 require pathological investigations involving additional resources, money, time, and uncertainty. The objective of this work is to explore whether we can predict CKD or non-CKD with reasonable accuracy using fewer features. An intelligent system development approach has been used in this study. We applied one important feature selection technique to discover a reduced feature set that explains the data set much better. Two intelligent binary classification techniques have been adopted to validate the reduced feature set. Performance was evaluated in terms of four important classification evaluation parameters. As our results suggest, one may concentrate on those reduced features for identifying CKD, thereby reducing uncertainty, saving time, and reducing costs. PMID:28706750
Lin, Kuan-Cheng; Hsieh, Yi-Hsiu
2015-10-01
The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, such as combination optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to problems of optimization in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.
Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian
2017-01-01
Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
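The ReliefF stage (stage ii above) can be sketched for a binary task with one nearest hit and one nearest miss per sampled instance --- the basic Relief variant; full ReliefF uses k neighbours and handles multiple classes. The sampling count and scaling are illustrative:

```python
import numpy as np

def relief_weights(X, y, n_samples=100, seed=0):
    """Basic Relief feature weighting for a binary task; assumes every class
    has at least two instances. Larger weights = more relevant features."""
    rng = np.random.default_rng(seed)
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # scale to [0, 1]
    w = np.zeros(X.shape[1])
    picks = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    for i in picks:
        d = np.abs(X - X[i]).sum(axis=1)                 # L1 distances
        d[i] = np.inf                                    # never match itself
        hit = np.where(y == y[i], d, np.inf).argmin()    # nearest same-class
        miss = np.where(y != y[i], d, np.inf).argmin()   # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / len(picks)

# the top-weighted features would then feed the rough-set reduction stage
```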
Conditional Anomaly Detection with Soft Harmonic Functions
Valko, Michal; Kveton, Branislav; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos
2012-01-01
In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response or a class label. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method on several synthetic and UCI ML datasets in detecting unusual labels when compared to several baseline approaches. We also evaluate the performance of our method on a real-world electronic health record dataset where we seek to identify unusual patient-management decisions. PMID:25309142
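In outline, the soft harmonic solution propagates labels over a similarity graph and flags instances whose recorded label disagrees with their neighbourhood's vote. The graph construction and the constants below are illustrative, not the authors' exact regularized setup:

```python
import numpy as np

def soft_harmonic_scores(X, y_pm1, gamma=1.0, sigma=1.0):
    """y_pm1: labels in {-1, +1}. Minimizes f'Lf + gamma*||f - y||^2, whose
    closed form is (L + gamma*I) f = gamma * y. Returns soft labels f."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))      # RBF similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    return np.linalg.solve(L + gamma * np.eye(len(X)), gamma * y_pm1)

# instances where y_pm1 * f is near or below zero are candidate conditional
# anomalies: the graph "votes" against the recorded label
```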
NASA Astrophysics Data System (ADS)
Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad
2014-10-01
Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to evaluate its performance in predicting customers' credit status. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
New data mining algorithms are continually proposed as related disciplines develop, and they differ in applicable scope and performance. Hence, finding a suitable algorithm for a dataset is becoming an important concern for biomedical researchers aiming to solve practical problems promptly. In this paper, seven well-established algorithms, namely, C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 most popular UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to the least class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced datasets for multi-class tasks. Simple algorithms, such as naïve Bayes and the logistic regression model, are suitable for small datasets with high correlation between the task and the other non-task attribute variables. K-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept on balanced small datasets for binary-class tasks. No algorithm can maintain the best performance on all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
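The comparison protocol reduces to a cross-validated loop over classifier/dataset pairs; a minimal reproduction with scikit-learn (bundled datasets and default hyper-parameters stand in for the paper's 12 UCI datasets and settings) looks like this:

```python
from sklearn.datasets import load_iris, load_wine
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "decision tree": DecisionTreeClassifier(),  # sklearn trees are CART, not C4.5
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "kNN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, loader in [("iris", load_iris), ("wine", load_wine)]:
    X, y = loader(return_X_y=True)
    for label, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name:5s}  {label:20s}  {acc:.3f}")
```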
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined autonomously, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively. The variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.
Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel
2016-01-01
Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiments and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than those trained by most of the compared algorithms.
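As a rough picture of how a metaheuristic trains an FNN, the sketch below runs the SOS mutualism and commensalism phases over flattened weight vectors of a tiny one-hidden-layer network on an invented task; the parasitism phase and the paper's benchmark protocol are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # toy XOR-like target

def loss(w):                                       # 2-5-1 network, 21 weights
    W1, b1 = w[:10].reshape(2, 5), w[10:15]
    W2, b2 = w[15:20], w[20]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return ((out - y) ** 2).mean()

pop = rng.normal(0.0, 1.0, (30, 21))               # 30 organisms
fit = np.array([loss(p) for p in pop])
for _ in range(200):
    for i in range(len(pop)):
        j = (i + 1 + rng.integers(len(pop) - 1)) % len(pop)   # partner != i
        best = pop[fit.argmin()]
        mutual = (pop[i] + pop[j]) / 2.0
        for k in (i, j):                           # mutualism phase
            bf = rng.integers(1, 3)                # benefit factor in {1, 2}
            cand = pop[k] + rng.random(21) * (best - mutual * bf)
            if loss(cand) < fit[k]:
                pop[k], fit[k] = cand, loss(cand)
        cand = pop[i] + rng.uniform(-1, 1, 21) * (best - pop[j])  # commensalism
        if loss(cand) < fit[i]:
            pop[i], fit[i] = cand, loss(cand)
print("best MSE:", round(fit.min(), 4))
```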
NASA Astrophysics Data System (ADS)
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms offer greater transparency and ease of interpretation than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. This article compares the quality of solutions to binary and multiclass classification problems based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.
NASA Astrophysics Data System (ADS)
Rahmadani, S.; Dongoran, A.; Zarlis, M.; Zakarias
2018-03-01
This paper discusses the problem of feature selection using genetic algorithms (GA) on datasets for classification problems. The classification models used are the decision tree (DT) and naive Bayes. We discuss how the naive Bayes and decision tree models handle the classification problem when the dataset's features are selected by a GA, and then compare the performance of both models to determine whether accuracy increases. The results show an increase in accuracy when features are selected using the GA. The proposed models are referred to as GADT (GA-Decision Tree) and GANB (GA-Naive Bayes). The datasets tested in this paper are taken from the UCI Machine Learning repository.
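The GADT idea can be sketched as a bit-string GA whose fitness is the cross-validated accuracy of a tree on the selected columns (swap in GaussianNB for the GANB variant). The population size, rates, and stand-in dataset are illustrative:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = load_breast_cancer(return_X_y=True)
n = X.shape[1]

def fitness(bits):                  # chromosome = one bit per feature
    if bits.sum() == 0:
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, bits == 1], y, cv=5).mean()

pop = rng.integers(0, 2, (20, n))
for _ in range(25):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[fit.argsort()[::-1][:10]]        # truncation selection
    cuts = rng.integers(1, n, 10)
    kids = np.array([np.concatenate([parents[i % 10][:c],
                                     parents[(i + 1) % 10][c:]])
                     for i, c in enumerate(cuts)])  # one-point crossover
    flip = rng.random(kids.shape) < 0.02           # bit-flip mutation
    kids[flip] ^= 1
    pop = np.vstack([parents, kids])
fit = np.array([fitness(c) for c in pop])
print("best accuracy:", round(fit.max(), 4),
      "using", int(pop[fit.argmax()].sum()), "features")
```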
The Structure and Dynamics of the Solar Corona and Inner Heliosphere
NASA Technical Reports Server (NTRS)
Mikic, Zoran; Grebowsky, J. (Technical Monitor)
2001-01-01
This report covers technical progress during the third quarter of the second year of the NASA Sun-Earth Connections Theory Program (SECTP) contract 'The Structure and Dynamics of the Solar Corona and Inner Heliosphere,' NAS5-99188, between NASA and Science Applications International Corporation, and covers the period February 16, 2001 to May 15, 2001. Under this contract SAIC and the University of California, Irvine (UCI) have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model. In this report we summarize the accomplishments made by our group during the first seven quarters of our Sun-Earth Connection Theory Program contract. The descriptions are intended to illustrate our principal results. A full account can be found in the referenced publications.
AmeriFlux US-SCg Southern California Climate Gradient - Grassland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Mike
This is the AmeriFlux version of the carbon flux data for the site US-SCg Southern California Climate Gradient - Grassland. Site Description - Half hourly data are available at https://www.ess.uci.edu/~california/. This site is one of six Southern California Climate Gradient flux towers operated along an elevation gradient (sites are US-SCg, US-SCs, US-SCf, US-SCw, US-SCc, US-SCd). This site is a grassland that was historically dominated by exotic annuals and that underwent restoration with a focus on native bunch grasses in the 2010s. The site has historically burned every 10-20 years, with a wildfire in October 2007. The restoration involved several years of mowing and herbicide application to suppress exotics followed by dense planting of Nasella bunch grasses.
Davis Monthan AFB Tucson, Arizona. Revised Uniform Summary of Surface Weather Observations (RUSSWO)
1976-07-01
Ceiling versus visibility tabulations for Davis-Monthan AFB, Tucson, Arizona (USAF Air Weather Service data-processing output).
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm make clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the algorithm's feasibility and practicality; experimental results on UCI data sets show that the algorithm is stable and practical.
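A condensed sketch of the initialization, assuming equal weights in the density estimate (the paper's weighting scheme is simplified away here): density is the reciprocal of a sample's average distance to all others, the densest point seeds the first center, and each subsequent center maximizes a distance-times-density score:

```python
import numpy as np

def init_centers(X, k):
    """Pick k initial centers that are both dense and mutually distant."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = 1.0 / (d.mean(axis=1) + 1e-12)       # inverse mean distance
    centers = [int(density.argmax())]              # densest sample first
    while len(centers) < k:
        dist_to_chosen = d[:, centers].min(axis=1)
        score = dist_to_chosen * density           # far from chosen, yet dense
        score[centers] = -np.inf                   # never re-pick a center
        centers.append(int(score.argmax()))
    return X[centers]

# usage: feed the rows to k-means in place of random seeds, e.g.
#   KMeans(n_clusters=k, init=init_centers(X, k), n_init=1).fit(X)
```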
Benefits Analysis of Smart Grid Projects. White paper, 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Liu, Liping; Yu, JianCheng
Smart grids are rolling out internationally, with the United States (U.S.) nearing completion of a significant USD4-plus-billion federal program funded under the American Recovery and Reinvestment Act (ARRA-2009). The emergence of smart grids is widespread across developed countries. Multiple approaches to analyzing the benefits of smart grids have emerged. The goals of this white paper are to review these approaches and analyze examples of each to highlight their differences, advantages, and disadvantages. This work was conducted under the auspices of a joint U.S.-China research effort, the Climate Change Working Group (CCWG) Implementation Plan, Smart Grid. We present comparative benefits assessments (BAs) of smart grid demonstrations in the U.S. and China along with a BA of a pilot project in Europe. In the U.S., we assess projects at two sites: (1) the University of California, Irvine campus (UCI), which consists of two distinct demonstrations: Southern California Edison's (SCE) Irvine Smart Grid Demonstration Project (ISGD) and the UCI campus itself; and (2) the Navy Yard (TNY) area in Philadelphia, which has been repurposed as a mixed commercial-industrial, and possibly residential, development. In China, we cover several smart-grid aspects of the Sino-Singapore Tianjin Eco-city (TEC) and the Shenzhen Bay Technology and Ecology City (B-TEC). In Europe, we look at a BA of a pilot smart grid project in the Malagrotta area west of Rome, Italy, contributed by the Joint Research Centre (JRC) of the European Commission. The Irvine sub-project BAs use the U.S. Department of Energy (U.S. DOE) Smart Grid Computational Tool (SGCT), which is built on methods developed by the Electric Power Research Institute (EPRI). The TEC sub-project BAs apply Smart Grid Multi-Criteria Analysis (SG-MCA) developed by the State Grid Corporation of China (SGCC) based on the analytic hierarchy process (AHP) with fuzzy logic. The B-TEC and TNY sub-project BAs are evaluated using new approaches developed by those project teams. JRC has adopted an approach similar to EPRI's but tailored to the Malagrotta distribution grid.
Design of fuzzy classifier for diabetes disease using Modified Artificial Bee Colony algorithm.
Beloufa, Fayssal; Chikh, M A
2013-10-01
In this study, diagnosis of diabetes, one of the most important diseases, is conducted with artificial intelligence techniques. We have proposed a novel Artificial Bee Colony (ABC) algorithm in which a mutation operator is added to the Artificial Bee Colony to improve its performance. When the current best solution cannot be updated, a blended crossover operator (BLX-α) from genetic algorithms is applied in order to enhance the diversity of ABC without compromising solution quality. This modified version of ABC is used as a new tool to create and optimize the membership functions and rule base automatically, directly from data. We take the diabetes dataset used in our work from the UCI machine learning repository. The performance of the proposed method is evaluated through classification rate, sensitivity and specificity values using the 10-fold cross-validation method. The obtained classification rate of our method is 84.21%, which is very promising compared with previous research in the literature on the same problem. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
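In isolation, the BLX-α operator draws each child gene uniformly from the interval spanned by the two parent genes, extended by a fraction α on both sides; α = 0.5 below is a common but illustrative choice:

```python
import numpy as np

def blx_alpha(p1, p2, alpha=0.5, rng=np.random.default_rng()):
    """Blended crossover: child_i ~ U[min_i - a*span_i, max_i + a*span_i]."""
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

# in the modified ABC described above, such a child would replace a stalled
# solution when the colony's best has stopped improving
p1, p2 = np.array([0.2, 1.0]), np.array([0.6, 0.4])
print(blx_alpha(p1, p2))
```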
Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin
2015-08-01
Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on 1 microarray dataset and 5 different medical datasets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with the Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and Weighted K-Nearest Neighbor). Repeated five-fold cross-validation was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has the weakness that it is difficult to determine optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses kernel tricks to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is applied to search for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy improved over the baselines of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for bigger data sizes, this method is not practical because it takes a lot of time.
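A compact sketch of the coupling: chromosomes encode (log2 C, log2 gamma) for an RBF SVM and fitness is cross-validated accuracy. The GA settings are illustrative, and a bundled dataset stands in for the Australian Credit Approval data:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in for the credit data
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

def fitness(genes):                               # genes = (log2 C, log2 gamma)
    clf = SVC(C=2.0 ** genes[0], gamma=2.0 ** genes[1])
    return cross_val_score(clf, X, y, cv=5).mean()

pop = rng.uniform([-5.0, -15.0], [15.0, 3.0], size=(12, 2))
for _ in range(15):
    fit = np.array([fitness(g) for g in pop])
    parents = pop[fit.argsort()[::-1][:6]]        # keep the best half
    kids = (parents[rng.integers(0, 6, 6)]
            + parents[rng.integers(0, 6, 6)]) / 2  # arithmetic crossover
    kids += rng.normal(0.0, 0.5, kids.shape)       # Gaussian mutation
    pop = np.vstack([parents, kids])
fit = np.array([fitness(g) for g in pop])
print("best CV accuracy:", round(fit.max(), 4),
      "at (log2 C, log2 gamma) =", pop[fit.argmax()].round(2))
```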
NASA Astrophysics Data System (ADS)
Liu, Fang; Cao, San-xing; Lu, Rui
2012-04-01
This paper proposes a user credit assessment model based on a clustering ensemble, aimed at the problem of users illegally spreading pirated and pornographic media content on self-service broadband new media platforms. The idea is to assess new media users' credit by establishing an index system based on user credit behaviors; illegal users can then be identified from the assessment results, curbing the transmission of illicit video and audio on the network. The proposed model combines the advantages of swarm intelligence clustering, which suits user credit behavior analysis, with K-means clustering, which can eliminate the scattered users left in the swarm intelligence clustering result, thereby classifying all users' credit automatically. Verification experiments were conducted on a standard credit application dataset from the UCI machine learning repository. The statistical results of a comparative experiment with a single swarm intelligence clustering model indicate that this clustering ensemble model has a stronger ability to distinguish creditworthiness, especially in predicting the user clusters with the best and worst credit, which will help operators take incentive or punitive measures accurately. Moreover, compared with the experimental results of a logistic regression based model under the same conditions, this clustering ensemble model is more robust and has better prediction accuracy.
Recent Changes in Ice Mass Balance of the Amundsen Sea Sector
NASA Astrophysics Data System (ADS)
Sutterley, T. C.; Velicogna, I.; Rignot, E. J.; Mouginot, J.; Flament, T.; van den Broeke, M. R.; van Wessem, M.; Reijmer, C.
2014-12-01
The glaciers flowing into the Amundsen Sea Embayment (ASE) sector of West Antarctica were confirmed in the Ice Sheet Mass Balance Inter-comparison Exercise (IMBIE) to be the dominant contributors to the current Antarctic ice mass loss, and recently recognized to be undergoing marine ice sheet instability. Here, we investigate their regional ice mass balance using a time series of satellite and airborne data combined with model output products from the Regional Atmospheric and Climate Model (RACMO). Our dataset includes laser altimetry from NASA's ICESat-1 satellite mission and from Operation IceBridge (OIB) airborne surveys, satellite radar altimetry data from ESA's Envisat mission, time-variable gravity data from NASA/DLR's GRACE mission, surface mass balance products from RACMO, ice velocity from a combination of international synthetic aperture radar satellites, and ice thickness data from OIB. We find a record of ice mass balance for the ASE where all the analyzed techniques agree remarkably in magnitude and temporal variability. The mass loss of the region has been increasing continuously since 1992, with no indication of a slow down. The mass loss during the common period averaged 91 Gt/yr and accelerated by 20 Gt/yr^2. In 1992-2013, the ASE contributed 4.5 mm to global sea level rise. Overall, our results demonstrate the synergy of multiple analysis techniques for examining Antarctic Ice Sheet mass balance at the regional scale. This work was performed at UCI and JPL under a contract with NASA.
A Case-Based Reasoning Method with Rank Aggregation
NASA Astrophysics Data System (ADS)
Sun, Jinhua; Du, Jiao; Hu, Jian
2018-03-01
In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework based on the principle of rank aggregation. First, ranking methods are defined in each attribute subspace, giving an ordering relation between cases on each attribute; together these orderings form a ranking matrix. Second, retrieval of similar cases from the ranking matrix is transformed into a rank aggregation optimization problem, using the Kemeny optimal. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so we conclude that the RA-CBR method can increase the performance and efficiency of CBR.
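The retrieval step can be pictured as follows. Kemeny-optimal aggregation (used in the paper) is NP-hard in general, so this sketch substitutes the Borda count, a standard cheap approximation; the toy case base is invented:

```python
import numpy as np

def retrieve(cases, query, top=3):
    """cases: (n, m) matrix; query: (m,) vector. On each attribute, a smaller
    distance to the query means a better rank; per-attribute rankings are
    then aggregated by summing rank positions (Borda count)."""
    n, m = cases.shape
    borda = np.zeros(n)
    for j in range(m):
        order = np.abs(cases[:, j] - query[j]).argsort()   # ranking on attr j
        ranks = np.empty(n)
        ranks[order] = np.arange(n)
        borda += ranks                                     # aggregate rankings
    return borda.argsort()[:top]                           # most similar cases

cases = np.array([[1.0, 5.0], [2.0, 2.0], [9.0, 1.0]])
print(retrieve(cases, query=np.array([2.0, 2.5]), top=2))  # -> [1 0]
```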
Refinery suppliers face tough times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rotman, D.; Walsh, K.
1997-03-12
Despite a handful of bright spots in hydroprocessing and petrochemical sectors, economic woes plague much of the refinery and petrochemical catalysts business, as suppliers are feeling the impact of mature markets and refiners' ongoing cost cutting. Industry experts say the doldrums could spur further restructuring in the catalyst business, with suppliers scrambling for market share and jockeying for position in growing sectors. Expect further consolidation over the next several years, says Pierre Bonnifay, president of IFP Enterprises (New York). "There are still too many players for the mature [refinery catalyst] markets." Others agree. "Only about seven [or] eight major suppliers will survive," says Robert Allsmiller, v.p./refinery and petrochemical catalysts at United Catalysts Inc. (UCI; Louisville, KY). "Who they [will be] is still up in the air."
Inner and Outer Recursive Neural Networks for Chemoinformatics Applications.
Urban, Gregor; Subrahmanya, Niranjan; Baldi, Pierre
2018-02-26
Deep learning methods applied to problems in chemoinformatics often require the use of recursive neural networks to handle data with graphical structure and variable size. We present a useful classification of recursive neural network approaches into two classes, the inner and the outer approach. The inner approach uses recursion inside the underlying graph, to essentially "crawl" the edges of the graph, while the outer approach uses recursion outside the underlying graph, to aggregate information over progressively longer distances in an orthogonal direction. We illustrate the inner and outer approaches on several examples. More importantly, we provide open-source implementations (available at www.github.com/Chemoinformatics/InnerOuterRNN and cdb.ics.uci.edu) for both approaches in TensorFlow, which can be used in combination with training data to produce efficient models for predicting the physical, chemical, and biological properties of small molecules.
NASA Astrophysics Data System (ADS)
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na
2016-10-01
Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on the multi-fractal dimension and the harmony search algorithm is proposed. The multi-fractal dimension is adopted as the evaluation criterion of a feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. In addition, the proposed method is used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method obtains competitive results in terms of both prediction accuracy and the number of selected features.
NASA Technical Reports Server (NTRS)
Calle, Luz Marina; Hintze, Paul E.; Parlier, Christopher R.; Coffman, Brekke E.; Sampson, Jeffrey W.; Kolody, Mark R.; Curran, Jerome P.; Perusich, Stephen A.; Trejo, David; Whitten, Mary C.;
2009-01-01
A trade study and literature survey of refractory materials (firebrick, refractory concrete, and silicone and epoxy ablatives) were conducted to identify candidate replacement materials for Launch Complexes 39A and 39B at Kennedy Space Center (KSC). In addition, site visits and interviews with industry experts and vendors of refractory materials were conducted. As a result of the site visits and interviews, several products were identified for launch applications. Firebrick is costly to procure and install and was not used in the sites studied. Refractory concrete is gunnable, adheres well, and costs less to install. Martyte, a ceramic-filled epoxy, can protect structural steel but is costly, difficult to apply, and incompatible with silicone ablatives. Havanex, a phenolic ablative material, is easy to apply but is costly and requires frequent replacement. Silicone ablatives are inexpensive, easy to apply, and perform well outside of direct rocket impingement areas, but refractory concrete and epoxy ablatives provide better protection against direct rocket exhaust. None of the products in this trade study can be considered a panacea for these KSC launch complexes, but the refractory products, individually or in combination, may be considered for use provided the appropriate testing requirements and specifications are met.
Galleske, I; Castellanos, J
2002-05-01
This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of each training pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. A variation of the well-known two-spiral problem and real-world examples from the UCI Machine Learning Repository show a classification rate not only better than that of the original probabilistic neural network, but also that this model can outperform other well-known classification techniques.
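For orientation, the sketch below implements the baseline this work improves on: a plain probabilistic neural network with a single shared isotropic spread sigma. The article's contribution, estimating a full covariance matrix per training pattern from its local environment, is not reproduced here, and the dataset and sigma value are our assumptions.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def pnn_predict(Xq, sigma=0.5):
    # pattern layer: one isotropic Gaussian kernel per training example
    d2 = ((Xq[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # summation layer: average kernel response per class
    scores = np.stack([k[:, ytr == c].mean(axis=1) for c in np.unique(ytr)], axis=1)
    return scores.argmax(axis=1)

print("test accuracy:", (pnn_predict(Xte) == yte).mean())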
A novel single neuron perceptron with universal approximation and XOR computation properties.
Lotfi, Ehsan; Akbarzadeh-T, M-R
2014-01-01
We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single-layer perceptron are its universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Various comparisons with a multilayer perceptron (MLP) trained by gradient descent backpropagation (GDBP) indicate the superiority of the approach in terms of higher accuracy, lower time and space complexity, and faster training. Hence, we believe the proposed approach can be generally applied to various problems such as pattern recognition and classification.
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method that selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features, randomly selected subsets of features, or features created using principal components analysis, on a wide range of domains.
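A minimal sketch of the idea follows: each base classifier sees only the features most correlated with "its" class (one-vs-rest), which decorrelates the base models, and predictions are combined by averaging class probabilities. The dataset, base learner, correlation measure, and number of retained features are all our assumptions, not the article's experimental setup.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

n_keep = 10                                    # features retained per base model
probas = []
for c in np.unique(y):
    ind = (ytr == c).astype(float)             # one-vs-rest class indicator
    corr = np.abs([np.corrcoef(Xtr[:, j], ind)[0, 1] for j in range(X.shape[1])])
    feats = np.argsort(corr)[-n_keep:]         # most class-correlated features
    clf = LogisticRegression(max_iter=5000).fit(Xtr[:, feats], ytr)
    probas.append(clf.predict_proba(Xte[:, feats]))
pred = np.mean(probas, axis=0).argmax(axis=1)  # average the class probabilities
print("ensemble accuracy:", (pred == yte).mean())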
AmeriFlux US-SCw Southern California Climate Gradient - Pinyon/Juniper Woodland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goulden, Mike
This is the AmeriFlux version of the carbon flux data for the site US-SCw Southern California Climate Gradient - Pinyon/Juniper Woodland. Site Description - Half-hourly data are available at https://www.ess.uci.edu/~california/. This site is one of six Southern California Climate Gradient flux towers operated along an elevation gradient (sites are US-SCg, US-SCs, US-SCf, US-SCw, US-SCc, US-SCd). This site is a Pinyon/Juniper woodland with trees that are at least 150 years old, and ephemeral herbaceous cover following winter or spring rains. The site has experienced repeated drought during the record and roughly 50% Pinyon mortality over the last decade. A nearby tower site (US-SCc) burned in a 1994 wildfire; comparisons between US-SCw and US-SCc provide a measure of the effects of the 1994 fire on land-atmosphere exchange.
Fusion and Gaussian mixture based classifiers for SONAR data
NASA Astrophysics Data System (ADS)
Kotari, Vikas; Chang, KC
2011-06-01
Underwater mines are inexpensive and highly effective weapons that are difficult to detect and classify. Detection and classification of underwater mines is therefore essential for the safety of naval vessels, which necessitates highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from one source, whereas data fusion is known to increase the accuracy of detection and classification. In this paper, we formulate a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for the classification of underwater mines. The emphasis is on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California, Irvine (UCI) repository. The performance of both the GMM-based and the fusion-based classifier clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.
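The GMM side of such a classifier is simple to sketch: fit one mixture per class and assign a test sample to the class whose mixture gives the highest log-likelihood. The UCI sonar set is not bundled with scikit-learn, so the bundled iris data stands in, and the number of components and covariance type are our assumptions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# one Gaussian mixture per class, fit on that class's training samples only
models = {c: GaussianMixture(n_components=2, covariance_type='diag',
                             random_state=0).fit(Xtr[ytr == c])
          for c in np.unique(ytr)}
# classify by the class whose mixture assigns the highest log-likelihood
ll = np.stack([models[c].score_samples(Xte) for c in sorted(models)], axis=1)
print("test accuracy:", (ll.argmax(axis=1) == yte).mean())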
Final Technical Report for DOE Award SC0006616
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Andrew
2015-08-01
This report summarizes research carried out by the project "Collaborative Research, Type 1: Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoonal Asia." This collaborative project brought together climate dynamicists (UCLA, IRI), dendroclimatologists (LDEO Tree Ring Laboratory), computer scientists (UCI), and hydrologists (Columbia Water Center, CWC), together with applied scientists in climate risk management (IRI), to create new scientific approaches to quantify and exploit the role of climate variability and change in the growing water crisis across southern and eastern Asia. This project developed new tree-ring based streamflow reconstructions for rivers in monsoonal Asia; improved understanding of hydrologic spatio-temporal modes of variability over monsoonal Asia on interannual-to-centennial time scales; assessed decadal predictability of hydrologic spatio-temporal modes; developed stochastic simulation tools for creating downscaled future climate scenarios based on historical/proxy data and GCM climate change; and developed stochastic reservoir simulation and optimization for scheduling hydropower, irrigation, and navigation releases.
New fuzzy support vector machine for the class imbalance problem in medical datasets classification.
Gu, Xiaoqing; Ni, Tongguang; Wang, Hongyuan
2014-01-01
In medical dataset classification, the support vector machine (SVM) is considered one of the most successful methods. However, most real-world medical datasets contain outliers/noise, and the data often have class imbalance problems. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM that extends manifold regularization and assigns two misclassification costs for the two classes. The proposed FSVM-CIP can handle the class imbalance problem in the presence of outliers/noise and enhance the locality maximum margin. Five real-world medical datasets from the UCI medical database (breast, heart, hepatitis, BUPA liver, and Pima diabetes) are employed to illustrate the presented method. Experimental results on these datasets show the superior or comparable effectiveness of FSVM-CIP.
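The generic mechanism behind fuzzy SVMs is easy to sketch with off-the-shelf tools: per-sample fuzzy memberships act as misclassification weights so that likely outliers count less, while balanced class weights raise the cost of errors on the minority class. The sketch below uses scikit-learn's SVC with a simple distance-to-class-mean membership; it illustrates the idea only, not the paper's FSVM-CIP formulation, and the membership function and dataset are our assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# fuzzy membership: samples far from their class mean get smaller weights,
# so suspected outliers contribute less to the margin (our simple choice)
member = np.empty(len(y))
for c in np.unique(y):
    d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
    member[y == c] = 1.0 - d / (d.max() + 1e-9)

clf = SVC(kernel='rbf', gamma='scale', class_weight='balanced')
clf.fit(X, y, sample_weight=member)            # memberships as error weights
print("training accuracy:", clf.score(X, y))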
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. The algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict the target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average, on all performance measures, than all the comparison algorithms. PMID:25045325
The Structure and Dynamics of the Solar Corona
NASA Technical Reports Server (NTRS)
Mikic, Zoran
2000-01-01
This report covers technical progress during the third year of the NASA Space Physics Theory contract "The Structure and Dynamics of the Solar Corona," between NASA and Science Applications International Corporation, and covers the period June 16, 1998 to August 15, 1999. This is also the final report for this contract. Under this contract SAIC, the University of California, Irvine (UCI), and the Jet Propulsion Laboratory (JPL), have conducted research into theoretical modeling of active regions, the solar corona, and the inner heliosphere, using the MHD model. During the three-year duration of this contract we have published 49 articles in the scientific literature. These publications are listed in Section 3 of this report. In the Appendix we have attached reprints of selected articles. We summarize our progress during the third year of the contract. Full descriptions of our work can be found in the cited publications, a few of which are attached to this report.
Markov Chain Monte Carlo from Lagrangian Dynamics.
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2015-04-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
Instilling best educational practices into future physics professionals and faculty
NASA Astrophysics Data System (ADS)
Collins, Philip G.
2009-03-01
A primary aim of the New Faculty Workshop (NFW) has been to communicate best educational practices to faculty beginning their teaching careers. However, the NFW goals are further amplified by providing similar content and training to Ph.D. candidates working as Teaching Assistants (TAs). NFW experience led to the successful creation at UCI of a relatively extensive, 30-hour training course now required of every graduate student in the Dept. of Physics and Astronomy. Half of the training occurs before the first week of classes and focuses on peer instruction, active learning, and results from Physics Education Research. This orientation segues into peer evaluation as first-time TAs and soon-to-be TAs practice teaching styles for each other and evaluate videos of each other teaching their actual courses. This course directly trains 25-30 graduate students each year, indirectly affecting dozens of discussion sections and the experience of nearly 2000 students per quarter.
Chikh, Mohamed Amine; Saidi, Meryem; Settouti, Nesma
2012-10-01
The use of expert systems and artificial intelligence techniques in disease diagnosis has been increasing gradually. The Artificial Immune Recognition System (AIRS) is one of the methods used in medical classification problems, and AIRS2 is a more efficient version of the AIRS algorithm. In this paper, we use a modified AIRS2 called MAIRS2, in which the K-nearest neighbors algorithm is replaced with fuzzy K-nearest neighbors, to improve the diagnostic accuracy for diabetes disease. The diabetes disease dataset used in our work is retrieved from the UCI machine learning repository. The performances of AIRS2 and MAIRS2 are evaluated with respect to classification accuracy, sensitivity, and specificity. The highest classification accuracies obtained when applying AIRS2 and MAIRS2 using 10-fold cross-validation were, respectively, 82.69% and 89.10%.
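The substitution at the heart of MAIRS2 is the fuzzy k-nearest-neighbor rule, in which neighbors vote with inverse-distance weights instead of a hard count. The sketch below is a generic implementation of that rule (with the usual fuzzifier exponent m), applied to a bundled dataset for illustration; the coupling to the AIRS2 memory-cell pool is not reproduced.

import numpy as np
from sklearn.datasets import load_iris

def fuzzy_knn(Xtr, ytr, Xq, k=5, m=2.0):
    preds = []
    for q in Xq:
        d = np.linalg.norm(Xtr - q, axis=1) + 1e-9
        nn = np.argsort(d)[:k]                     # the k nearest neighbors
        w = 1.0 / d[nn] ** (2.0 / (m - 1.0))       # inverse-distance fuzzy weights
        scores = {c: w[ytr[nn] == c].sum() for c in np.unique(ytr)}
        preds.append(max(scores, key=scores.get))  # class with largest membership
    return np.array(preds)

X, y = load_iris(return_X_y=True)
print("resubstitution accuracy:", (fuzzy_knn(X, y, X) == y).mean())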
Quantitative analysis of breast cancer diagnosis using a probabilistic modelling approach.
Liu, Shuo; Zeng, Jinshu; Gong, Huizhou; Yang, Hongqin; Zhai, Jia; Cao, Yi; Liu, Junxiu; Luo, Yuling; Li, Yuhua; Maguire, Liam; Ding, Xuemei
2018-01-01
Breast cancer is the most prevalent cancer in women in most countries of the world. Many computer-aided diagnostic methods have been proposed, but there are few studies on quantitative discovery of probabilistic dependencies among breast cancer data features and identification of the contribution of each feature to breast cancer diagnosis. This study aims to fill this void by utilizing a Bayesian network (BN) modelling approach. A K2 learning algorithm and statistical computation methods are used to construct BN structure and assess the obtained BN model. The data used in this study were collected from a clinical ultrasound dataset derived from a Chinese local hospital and a fine-needle aspiration cytology (FNAC) dataset from UCI machine learning repository. Our study suggested that, in terms of ultrasound data, cell shape is the most significant feature for breast cancer diagnosis, and the resistance index presents a strong probabilistic dependency on blood signals. With respect to FNAC data, bare nuclei are the most important discriminating feature of malignant and benign breast tumours, and uniformity of both cell size and cell shape are tightly interdependent. The BN modelling approach can support clinicians in making diagnostic decisions based on the significant features identified by the model, especially when some other features are missing for specific patients. The approach is also applicable to other healthcare data analytics and data modelling for disease diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
3D full-Stokes modeling of the grounding line dynamics of Thwaites Glacier, West Antarctica
NASA Astrophysics Data System (ADS)
Yu, H.; Rignot, E. J.; Morlighem, M.; Seroussi, H. L.
2016-12-01
Thwaites Glacier (TG) is the broadest and second largest ice stream in West Antarctica. Satellite observations have revealed rapid grounding line retreat and mass loss of this glacier in the past few decades, which has been attributed to enhanced basal melting in the Amundsen Sea Embayment. With a retrograde bed configuration, TG is on the verge of collapse according to the marine ice sheet instability theory. Here, we use the UCI/JPL Ice Sheet System Model (ISSM) to simulate the grounding line position of TG to determine its stability, rate of retreat, and sensitivity to enhanced basal melting using a three-dimensional full-Stokes numerical model. Simulations with simplified models (Higher Order (HO) and Shelfy-Stream Approximation (SSA)) are also conducted for comparison. We first validate our full-Stokes model by conducting the MISMIP3D experiments. We then apply the model to TG using a new bed elevation dataset combining Operation IceBridge (OIB) gravity data, OIB ice thickness, ice flow vectors from interferometry, and a mass conservation method at 450 m spacing. The basal friction coefficient and the ice rheology of floating ice are inferred to match observed surface velocity. We find that the grounding line is capable of retreating at a rate of 1 km/yr under current forcing and that the glacier's sensitivity to melt is higher in the Stokes model than in HO or SSA, which means that projections using SSA or HO might underestimate the future rate of retreat of the glacier. This work has been performed at UC Irvine and Caltech's Jet Propulsion Laboratory under a contract with NASA's Cryospheric Science Program.
Koralek, Thrissia; Runnerstrom, Miryha G; Brown, Brandon J; Uchegbu, Chukwuemeka; Basta, Tania B
2016-08-25
Objectives. We examined the role of outbreak information sources through four domains: knowledge, attitudes, beliefs, and stigma related to the 2014 Ebola virus disease (EVD) outbreak. Methods. We conducted an online survey of 797 undergraduates at the University of California, Irvine (UCI) and Ohio University (OU) during the peak of the outbreak. We calculated individual scores for domains and analyzed associations to demographic variables and news sources. Results. Knowledge of EVD was low and misinformation was prevalent. News media (34%) and social media (19%) were the most used sources of EVD information while official government websites (OGW) were among the least used (11%). Students who acquired information through OGW had higher knowledge, more positive attitudes towards those infected, a higher belief in the government, and were less likely to stigmatize Ebola victims. Conclusions. Information sources are likely to influence students' knowledge, attitudes, beliefs, and stigma relating to EVD. This study contains crucial insight for those tasked with risk communication to college students. Emphasis should be given to developing effective strategies to achieve a comprehensive knowledge of EVD and future public health threats.
Mission Systems Open Architecture Science and Technology (MOAST) program
NASA Astrophysics Data System (ADS)
Littlejohn, Kenneth; Rajabian-Schwart, Vahid; Kovach, Nicholas; Satterthwaite, Charles P.
2017-04-01
The Mission Systems Open Architecture Science and Technology (MOAST) program is an AFRL effort that is developing and demonstrating Open System Architecture (OSA) component prototypes, along with methods and tools, to strategically evolve current OSA standards and technical approaches, promote affordable capability evolution, reduce integration risk, and address emerging challenges [1]. Within the context of open architectures, the program is conducting advanced research and concept development in the following areas: (1) Evolution of standards; (2) Cyber-Resiliency; (3) Emerging Concepts and Technologies; (4) Risk Reduction Studies and Experimentation; and (5) Advanced Technology Demonstrations. Current research includes the development of methods, tools, and techniques to characterize the performance of OMS data interconnection methods for representative mission system applications. Of particular interest are the OMS Critical Abstraction Layer (CAL), the Avionics Service Bus (ASB), and the Bulk Data Transfer interconnects, as well as to develop and demonstrate cybersecurity countermeasures techniques to detect and mitigate cyberattacks against open architecture based mission systems and ensure continued mission operations. Focus is on cybersecurity techniques that augment traditional cybersecurity controls and those currently defined within the Open Mission System and UCI standards. AFRL is also developing code generation tools and simulation tools to support evaluation and experimentation of OSA-compliant implementations.
An immune-inspired semi-supervised algorithm for breast cancer diagnosis.
Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong
2016-10-01
Breast cancer is the most frequently diagnosed life-threatening cancer in women worldwide and the leading cause of cancer death among women; early, accurate diagnosis can be a big plus in treating it. Researchers have approached this problem using various data mining and machine learning techniques, such as support vector machines and artificial neural networks. Computer immunology is another intelligent method, inspired by the biological immune system, that has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnosis methods are supervised, and it is very expensive to obtain labeled data in biology and medicine. In this paper, we seamlessly integrate state-of-the-art research on life science with artificial intelligence and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, which are acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on those two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, which proves that it is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Fuzzy method for pre-diagnosis of breast cancer from the Fine Needle Aspirate analysis
2012-01-01
Background: Across the globe, breast cancer is one of the leading causes of death among women and, currently, Fine Needle Aspirate (FNA) with visual interpretation is the easiest and fastest biopsy technique for the diagnosis of this deadly disease. Unfortunately, the ability of this method to diagnose cancer correctly when the disease is present varies greatly, from 65% to 98%. This article introduces a method to assist in the diagnosis and second opinion of breast cancer from the analysis of descriptors extracted from smears of breast mass obtained by FNA, with the use of computational intelligence resources - in this case, fuzzy logic. Methods: For FNA data acquisition, the Wisconsin Diagnostic Breast Cancer Data (WDBC), from the University of California at Irvine (UCI) Machine Learning Repository, available on the internet through the UCI domain, was used. The knowledge acquisition process was carried out by the extraction and analysis of numerical data of the WDBC and by interviews and discussions with medical experts. The PDM-FNA-Fuzzy was developed in four steps: 1) Fuzzification Stage; 2) Rules Base; 3) Inference Stage; and 4) Defuzzification Stage. Cross-validation was used in the tests, with three databases of gold-standard clinical cases randomly extracted from the WDBC. The final validation was performed by medical specialists in pathology, mastology, and general practice, with gold-standard clinical cases, i.e., with known and clinically confirmed diagnoses. Results: The fuzzy method developed provides breast cancer pre-diagnosis with 98.59% sensitivity (correct pre-diagnosis of malignancies) and 85.43% specificity (correct pre-diagnosis of benign cases). Due to the high sensitivity presented, these results are considered satisfactory, both in the opinion of medical specialists in the aforementioned areas and by comparison with other studies involving breast cancer diagnosis using FNA. Conclusions: This paper presents an intelligent method to assist in the diagnosis and second opinion of breast cancer, using a fuzzy method capable of processing and sorting data extracted from smears of breast mass obtained by FNA, with satisfactory levels of sensitivity and specificity. The main contribution of the proposed method is the reduction of the variation in the hit rate for malignant cases compared to the visual interpretation currently applied in diagnosis by FNA. While the PDM-FNA-Fuzzy features stable sensitivity at 98.59%, visual interpretation provides a sensitivity varying from 65% to 98%, a range that includes sensitivity levels below those considered satisfactory by medical specialists. Note that this method will be used in an Intelligent Virtual Environment to assist decision-making (IVEMI), which amplifies its contribution. PMID:23122391
Efficient Discovery of De-identification Policies Through a Risk-Utility Frontier
Xia, Weiyi; Heatherly, Raymond; Ding, Xiaofeng; Li, Jiuyong; Malin, Bradley
2014-01-01
Modern information technologies enable organizations to capture large quantities of person-specific data while providing routine services. Many organizations hope, or are legally required, to share such data for secondary purposes (e.g., validation of research findings) in a de-identified manner. In previous work, it was shown de-identification policy alternatives could be modeled on a lattice, which could be searched for policies that met a prespecified risk threshold (e.g., likelihood of re-identification). However, the search was limited in several ways. First, its definition of utility was syntactic - based on the level of the lattice - and not semantic - based on the actual changes induced in the resulting data. Second, the threshold may not be known in advance. The goal of this work is to build the optimal set of policies that trade-off between privacy risk (R) and utility (U), which we refer to as a R-U frontier. To model this problem, we introduce a semantic definition of utility, based on information theory, that is compatible with the lattice representation of policies. To solve the problem, we initially build a set of policies that define a frontier. We then use a probability-guided heuristic to search the lattice for policies likely to update the frontier. To demonstrate the effectiveness of our approach, we perform an empirical analysis with the Adult dataset of the UCI Machine Learning Repository. We show that our approach can construct a frontier closer to optimal than competitive approaches by searching a smaller number of policies. In addition, we show that a frequently followed de-identification policy (i.e., the Safe Harbor standard of the HIPAA Privacy Rule) is suboptimal in comparison to the frontier discovered by our approach. PMID:25520961
GDPC: Gravitation-based Density Peaks Clustering algorithm
NASA Astrophysics Data System (ADS)
Jiang, Jianhua; Hao, Dehao; Chen, Yujun; Parmar, Milan; Li, Keqin
2018-07-01
The Density Peaks Clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach published in Science in 2014. DPC has the advantages of discovering clusters with varying sizes and varying densities, but has limitations in detecting the number of clusters and identifying anomalies. We develop an enhanced algorithm with an alternative decision graph, based on gravitation theory and nearby distance, to identify centroids and anomalies accurately. We apply our method to several UCI and synthetic data sets and report comparative clustering performance using F-Measure and 2-dimensional visualization. We also compare our method to other clustering algorithms, such as K-Means, Affinity Propagation (AP), and DPC, presenting F-Measure scores and clustering accuracies of our GDPC algorithm on different data sets. We show that GDPC has superior performance in its capability of: (1) detecting the number of clusters clearly; (2) aggregating clusters with varying sizes and densities efficiently; and (3) identifying anomalies accurately.
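The decision graph that GDPC replaces is easy to sketch: for each point, compute its local density rho (here with the cutoff kernel) and its distance delta to the nearest point of higher density; cluster centers stand out with both values large. The sketch below uses synthetic data and a quantile heuristic for the cutoff distance, both our assumptions, and does not implement the gravitation-based criterion itself.

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

D = np.linalg.norm(X[:, None] - X[None], axis=-1)
dc = np.quantile(D[D > 0], 0.02)          # cutoff distance (quantile heuristic)
rho = (D < dc).sum(axis=1) - 1            # local density, cutoff kernel
delta = np.empty(len(X))
for i in range(len(X)):
    higher = np.where(rho > rho[i])[0]    # points denser than point i
    delta[i] = D[i, higher].min() if len(higher) else D[i].max()

# centers have large rho and large delta; here we expect the two blob centers
print("candidate centers:", np.argsort(rho * delta)[-2:])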
Various forms of indexing HDMR for modelling multivariate classification problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy of partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real-world data. Mostly, we do not know all possible class values in the domain of the given problem; that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
NASA Astrophysics Data System (ADS)
FitzGerald, Jack G. M.
2015-02-01
The Rotating Scatter Mask (RSM) system is an inexpensive retrofit that provides imaging capabilities to scintillating detectors. Unlike traditional collimator systems, which primarily absorb photons in order to form an image, this system primarily scatters the photons. Over a single rotation, there is a unique, smooth response curve for each defined source position. Testing was conducted using MCNPX simulations, and image reconstruction was performed using a chi-squared reconstruction technique. A simulated 100 µCi Cs-137 source at 10 meters was detected after a single, 50-second rotation with a uniform terrestrial background present. A Cs-137 extended source was also tested. The RSM field of view is 360 degrees azimuthally as well as 54 degrees above and 54 degrees below the horizontal plane. Since the RSM is built from polyethylene, the overall cost and weight of the system are low. The system was designed to search for lost or stolen radioactive material, also known as the orphan source problem.
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, a multiclass posterior probability solution for twin SVM is proposed in this paper using ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using ranking continuous output techniques and Platt's estimation method. Second, multiclass probabilistic outputs for twin SVM are obtained by combining every pair of class probabilities according to the pairwise coupling method. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method in classification accuracy and time complexity is demonstrated on both UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
Ultrafast multiphoton ionization dynamics and control of NaK molecules
NASA Astrophysics Data System (ADS)
Davidsson, Jan; Hansson, Tony; Mukhtar, Emad
1998-12-01
The multiphoton ionization dynamics of NaK molecules is investigated experimentally using one-color pump-probe femtosecond spectroscopy at 795 nm and intermediate laser field strengths (about 10 GW/cm²). Both NaK⁺ and Na⁺ ions are detected as a function of pulse separation time, pulse intensities, and strong pulse-weak pulse order. To aid in the analysis, the potential energy curves of the two lowest electronic states of NaK⁺ and the electronic transition dipole moment between them are calculated by the GAUSSIAN94 UCIS method. Different ionization pathways are identified by Franck-Condon analysis, and vibrational dynamics in the A¹Σ⁺ and 3¹Π states, as well as in the ground state, is observed. Further, the existence of a highly excited (above the adiabatic ionization limit) neutral state of NaK is proposed. By changing the strong pulse-weak pulse order, the ionization pathways for production of both ions can be varied and thus controlled.
Privacy-preserving clinical decision support system using Gaussian kernel-based classification.
Rahulamathavan, Yogachandran; Veluru, Suresh; Phan, Raphael C-W; Chambers, Jonathon A; Rajarajan, Muttukrishnan
2014-01-01
A clinical decision support system forms a critical capability to link health observations with health knowledge to influence choices by clinicians for improved healthcare. Recent trends toward remote outsourcing can be exploited to provide efficient and accurate clinical decision support in healthcare. In this scenario, clinicians can use the health knowledge located on remote servers via the Internet to diagnose their patients. However, the fact that these servers are third party and therefore potentially not fully trusted raises possible privacy concerns. In this paper, we propose a novel privacy-preserving protocol for a clinical decision support system where the patients' data always remain in encrypted form during the diagnosis process. Hence, the server involved in the diagnosis process is not able to learn any extra knowledge about the patient's data and results. Our experimental results on popular medical datasets from the UCI database demonstrate that the accuracy of the proposed protocol is up to 97.21% and the privacy of patient data is not compromised.
Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio
NASA Astrophysics Data System (ADS)
Nababan, A. A.; Sitompul, O. S.; Tulus
2018-04-01
K-Nearest Neighbor (KNN) is a good classifier, but several studies have found its accuracy to be lower than that of other methods. One cause of the low accuracy is that each attribute has the same effect on the classification process, so less relevant attributes lead to misclassification when assigning classes to new data. In this research, we propose attribute weighting based K-Nearest Neighbor using the Gain Ratio, where the Gain Ratio serves as a measure of the correlation between each attribute and the class and as the basis for weighting each attribute of the dataset. The resulting accuracy is compared to that of the original KNN method using 10-fold cross-validation with several datasets from the UCI Machine Learning Repository and the KEEL Dataset Repository: abalone, glass identification, haberman, hayes-roth, and water quality status. Based on the test results, the proposed method was able to increase the classification accuracy of KNN: the largest accuracy gain, 12.73%, was obtained on the hayes-roth dataset, and the smallest, 0.07%, on the abalone dataset. On average, accuracy increased by 5.33% across all datasets.
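A compact sketch of the scheme: bin each numeric attribute, compute its gain ratio against the class labels, and use the gain ratios as weights inside the Euclidean distance. The dataset, binning strategy, and k are our assumptions for illustration.

import numpy as np
from sklearn.datasets import load_iris

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(col, y, bins=5):
    # discretize the attribute, then information gain / split information
    binned = np.digitize(col, np.histogram_bin_edges(col, bins)[1:-1])
    cond = sum((binned == v).mean() * entropy(y[binned == v])
               for v in np.unique(binned))
    return (entropy(y) - cond) / (entropy(binned) + 1e-12)

X, y = load_iris(return_X_y=True)
w = np.array([gain_ratio(X[:, j], y) for j in range(X.shape[1])])

def predict(q, k=5):
    d = np.sqrt((w * (X - q) ** 2).sum(axis=1))    # gain-ratio-weighted distance
    return np.bincount(y[np.argsort(d)[:k]]).argmax()

print("resubstitution accuracy:",
      np.mean([predict(q) == t for q, t in zip(X, y)]))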
Measurements of Classical Transport of Fast Ions in the LAPD
NASA Astrophysics Data System (ADS)
Zhao, L.; Boehmer, H.; Edrich, D.; Heidbrink, W. W.; McWilliams, R.; Zimmerman, D.; Lenenman, D.; Vincena, S.
2004-11-01
To study fast ion transport in a well-controlled background plasma, a 3 cm diameter RF ion gun launches a pulsed, 400 eV, ribbon-shaped argon ion beam in the LArge Plasma Device (LAPD) at UCLA. The beam velocity distribution is calibrated by Laser Induced Fluorescence (LIF) on the Mirror at UCI, and the beam energy is also measured by a two-grid energy analyzer at different axial locations (z = 0.3-6.0 m) from the source on the LAPD. Slowing down of the ion beam is observed when the beam is launched parallel or at 15 degrees to the 0.85 kG magnetic field. Using Langmuir probe measurements of the plasma parameters, the observed energy deceleration rate is consistent with classical Coulomb scattering theory. The radial beam profile is also measured by the energy analyzer when the beam is launched at 15 degrees to the magnetic field. The beam follows the expected helical trajectory, and its contour has the shape predicted by Monte Carlo simulations. The diffusion measurements are performed at different axial locations where the ion beam has the same gyro-phase, to eliminate the peristaltic effect. The spatial spreading of the beam is compared with classical scattering and neutral scattering theory.
Crystal field parameters in UCl₄: Experiment versus theory
NASA Astrophysics Data System (ADS)
Zolnierek, Z.; Gajek, Z.; Malek, Ch. Khan
1984-08-01
The crystal field effect on the U⁴⁺ ion, with the ³H₄ ground term, in the tetragonal ligand field of UCl₄ has been studied in detail. Crystal field parameters (CFP) determined experimentally from optical spectroscopy and magnetic susceptibility are in good agreement with CFP sets derived from the modified point charge model and the ab initio method. Theoretical calculations overestimate the A₄⁴⟨r⁴⟩ value and lower the A₀²⟨r²⟩ value in comparison to those found in the experiments; the discrepancies are, however, within the accuracy of the calculations. A large reduction of the expectation values of the magnetic moment operator for the eigenvectors of the lowest CF levels (17.8%), determined from magnetic susceptibility, cannot be attributed to overlap and covalency effects only. Detailed calculations have shown that the latter effects provide about a 4.6% reduction of the respective matrix elements, and the applied J-J mixing procedure increases this factor up to 6.5%. Since a reduction factor similar to that in UCl₄ (≈15%) has already been observed in a number of different uranium compounds, it seems likely that this feature is involved in the intrinsic properties of the U⁴⁺ ion. We endeavor to explain this effect in terms of configuration interaction mechanisms.
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and combine them. Error-Correcting Output Codes (ECOC) represent a successful framework for dealing with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies analyzing the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies on a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, with the new decoding strategies, the performance of the ECOC design is significantly improved.
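To make the zero-symbol issue concrete, the sketch below builds a tiny hand-written ternary code matrix, trains one binary learner per column on the non-zero classes only, and decodes by average disagreement over each class's non-zero positions. The code matrix, base learner, and dataset are our assumptions; the paper's proposed decoding measures are not reproduced.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
M = np.array([[ 1,  1,  0],          # rows = classes, columns = dichotomies,
              [-1,  0,  1],          # 0 = "do not care" for that classifier
              [ 0, -1, -1]])

clfs = []
for j in range(M.shape[1]):
    use = np.isin(y, np.where(M[:, j] != 0)[0])   # drop the zero-coded classes
    clfs.append(LogisticRegression(max_iter=1000).fit(X[use], M[y[use], j]))

out = np.stack([c.predict(X) for c in clfs], axis=1)   # +/-1 outputs per column
# decode: mean disagreement over each class's non-zero code positions only
dist = np.stack(
    [(np.abs(out - M[c]) / 2 * (M[c] != 0)).sum(axis=1) / (M[c] != 0).sum()
     for c in range(len(M))], axis=1)
print("training accuracy:", (dist.argmin(axis=1) == y).mean())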
Near Shore Wave Modeling and applications to wave energy estimation
NASA Astrophysics Data System (ADS)
Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.
2012-04-01
The estimation of the wave energy potential at the European coastline has been receiving increased attention in recent years as a result of the adoption of novel policies in the energy market, concerns about global warming, and nuclear energy security problems. Within this framework, numerical wave modeling systems play a primary role in the accurate description of the wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work, two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near-shore wave simulations. The results obtained from the wave models near shore are studied from a wave energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible differences between the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing in this way to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA Platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
Alvarez-Lerma, F; Gracia-Arnillas, M P; Palomar, M; Olaechea, P; Insausti, J; López-Pueyo, M J; Otal, J J; Gimeno, R; Seijas, I
2013-03-01
To describe trends in national catheter-related urinary tract infection (CRUTI) rates, as well as etiologies and multiresistance markers, an observational, prospective, multicenter, voluntary-participation study was conducted from 1 April to 30 June of each year between 2005 and 2010 in the Intensive Care Units (ICUs) that participated in the ENVIN-ICU registry during the study period. We included all patients admitted to the participating ICUs with a urinary catheter in place for more than 24 hours (78,863 patients). Patient monitoring was continued until discharge from the ICU or up to 60 days. CRUTIs were defined according to the CDC system, and frequency is expressed as incidence density (ID) in relation to the number of urinary catheter-patient days. A total of 2329 patients (2.95%) developed one or more CRUTI. The ID decreased from 6.69 to 4.18 episodes per 1000 days of urinary catheter between 2005 and 2010 (p < 0.001). In relation to the underlying etiology, gram-negative bacilli predominated (55.6 to 61.6%), followed by fungi (18.7 to 25.2%) and gram-positive cocci (17.1 to 25.9%). In 2010, ciprofloxacin-resistant E. coli strains (37.1%) increased, as did imipenem-resistant (36.4%) and ciprofloxacin-resistant (37.1%) strains of P. aeruginosa. A decrease was observed in CRUTI rates, maintaining the same etiological distribution and showing increased resistance in gram-negative pathogens, especially E. coli and P. aeruginosa. Copyright © 2011 Elsevier España, S.L. and SEMICYUC. All rights reserved.
[Nurses' perception, experience and knowledge of palliative care in intensive care units].
Piedrafita-Susín, A B; Yoldi-Arzoz, E; Sánchez-Fernández, M; Zuazua-Ros, E; Vázquez-Calatayud, M
2015-01-01
Adequate provision of palliative care by nursing staff in intensive care units is essential to facilitate a "good death" for critically ill patients. To determine the perceptions, experiences, and knowledge of intensive care nurses in caring for terminal patients, a literature review was conducted in the PubMed, CINAHL, and PsycINFO databases using the search terms cuidados paliativos, UCI, percepciones, experiencias, conocimientos and enfermería and their English equivalents (palliative care, ICU, perceptions, experiences, knowledge, and nursing), combined with the Boolean operators AND and OR. In addition, 3 intensive care journals were reviewed. Twenty-seven articles were selected for review, most of them qualitative studies (n=16). The analysis of the literature identified that, even though nurses perceive the need to respect the dignity of the patient, to provide care aimed at comfort, and to encourage the inclusion of the family in patient care, there is a lack of knowledge of end-of-life care among intensive care nurses. This review reveals that to achieve quality care at the end of life, it is necessary to encourage the training of nurses in palliative care, to foster their emotional support, to conduct effective multidisciplinary work, and to include nurses in decision making. Copyright © 2014 Elsevier España, S.L.U. y SEEIUC. All rights reserved.
A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters
Wang, Zhihao; Yi, Jing
2016-01-01
To address the shortcoming of the fuzzy c-means algorithm (FCM) of needing to know the number of clusters in advance, this paper proposes a new self-adaptive method to determine the optimal number of clusters. First, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters instead of using the empirical rule √n, and obtains the optimal initial cluster centroids, alleviating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Second, by introducing a penalty function, this paper proposes a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that when the number of clusters approaches the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease toward zero, so that the choice of the optimal number of clusters does not lose robustness and decision power. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments were conducted on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determines the optimal number of clusters, but also reduces the number of FCM iterations while giving stable clustering results. PMID:28042291
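For reference, the alternating updates at the core of FCM, which the self-adaptive method wraps, are sketched below: membership and centroid updates with fuzzifier m. The density-based initialization and the new validity index are not reproduced; the data, c, and m here are illustrative assumptions.

import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))         # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(4, 0.3, (40, 2))])
centers, U = fcm(X, c=2)
print("centroids:\n", centers)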
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields, such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between the different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method that significantly reduces the class separation problem in comparison with the linear dimensionality reduction step in FLDA and several of its representative extensions.
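The first criterion can be stated compactly. In the hedged notation below (our symbols, not necessarily the paper's), p_i^W denotes the i-th class-conditional density after projection by W, and c is the number of classes:

\[
W^{\ast} \;=\; \arg\max_{W}\;
\Bigl(\prod_{1 \le i < j \le c} D_{\mathrm{KL}}\bigl(p_i^{W} \,\Vert\, p_j^{W}\bigr)\Bigr)^{1/\binom{c}{2}}
\]

Replacing each divergence with its normalized counterpart gives criterion 2, and criterion 3 combines the two objectives.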
The relation between umbilical cord characteristics and the outcome of external cephalic version.
Kuppens, Simone M I; Waerenburgh, Evelyne R; Kooistra, Libbe; van der Donk, Riet W P; Hasaart, Tom H M; Pop, Victor J M
2011-05-01
Umbilical cords of fetuses in breech presentation differ in length and coiling from their cephalic counterparts, and it might be hypothesised that these cord characteristics may in turn affect external cephalic version (ECV) outcome. To investigate the relation between umbilical cord characteristics and ECV outcome, a prospective cohort study was conducted in women (>35 weeks gestation) with a singleton fetus in breech presentation, suitable for ECV. Demographic, lifestyle and obstetrical parameters were assessed at intake. ECV success was based on cephalic presentation on ultrasound post-ECV. Umbilical cord length (UCL) and umbilical coiling index (UCI) were measured after birth. The main outcome measure was the relation between umbilical cord characteristics (cord length and coiling) and the success of ECV. The ECV success rate was 79/146 (54%) overall, 37/46 (80%) for multiparas and 42/100 (42%) for nulliparas. Multiple logistic regression showed that UCL (OR: 1.04, CI: 1.01-1.07), nulliparity (OR: 0.20, CI: 0.08-0.51), frank breech (OR: 0.37, 95% CI: 0.15-0.90), body mass index (OR: 0.85, CI: 0.76-0.95), placenta anterior (OR: 0.27, CI: 0.12-0.63) and birth weight (OR: 1.002, CI: 1.001-1.003) were all independently related to ECV success. Umbilical cord length is independently related to the outcome of ECV, whereas the umbilical coiling index is not. Copyright © 2011 Elsevier Ltd. All rights reserved.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of the neighborhood; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and is thus more robust. The positive definiteness of the matrix exponential deals with the SSS problem, and the behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI data sets, and the Georgia Tech face database show that the proposed new framework can effectively address the issues mentioned above.
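A small sketch of the exponential trick on one concrete instance, locality preserving projections: the usual generalized eigenproblem on LPP's scatter matrices is solved with the matrices replaced by their matrix exponentials, which are positive definite and so sidestep the SSS problem. The dataset, neighborhood size, and this exact substitution are our simplifying assumptions, not the paper's full formulation.

import numpy as np
from scipy.linalg import eigh, expm
from sklearn.datasets import load_iris
from sklearn.neighbors import kneighbors_graph

X, _ = load_iris(return_X_y=True)
W = kneighbors_graph(X, 10, mode='connectivity', include_self=False).toarray()
W = np.maximum(W, W.T)            # symmetrized kNN similarity graph
D = np.diag(W.sum(axis=1))
L = D - W                         # graph Laplacian

A = X.T @ L @ X                   # LPP-style scatter matrices
B = X.T @ D @ X
# exponential variant: both matrices become positive definite under expm
vals, vecs = eigh(expm(A), expm(B))
proj = X @ vecs[:, :2]            # two-dimensional embedding
print("embedding shape:", proj.shape)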
An effective fuzzy kernel clustering analysis approach for gene expression data.
Sun, Lin; Xu, Jiucheng; Yin, Jiaojiao
2015-01-01
Fuzzy clustering is an important tool for analyzing microarray data. A major problem in applying fuzzy clustering to microarray gene expression data is the choice of the cluster number and centers. This paper proposes a new approach to fuzzy kernel clustering analysis (FKCA) that identifies the desired cluster number and obtains steadier results for gene expression data. First, to optimize characteristic differences and estimate the optimal cluster number, a Gaussian kernel function is introduced to improve the spectrum analysis method (SAM). By combining subtractive clustering with the max-min distance mean, a maximum distance method (MDM) is proposed to determine cluster centers. The corresponding steps of the improved SAM (ISAM) and MDM are then given, and their superiority and stability are illustrated through experimental comparisons on gene expression data. Finally, by introducing ISAM and MDM into FKCA, an effective improved FKCA algorithm is proposed. Experimental results on public gene expression data and the UCI database show that the proposed algorithms are feasible for cluster analysis, and their clustering accuracy is higher than that of other related clustering algorithms.
Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree.
Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-Koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid
2016-10-01
Differential diagnosis of erythemato-squamous diseases (ESD) is a major challenge in the field of dermatology. ESD cases are placed into six different classes. Data mining is the process of detecting hidden patterns; in the case of ESD, data mining helps us predict the diseases, and different algorithms have been developed for this purpose. We aimed to use the Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. We used the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology. For this purpose, the dermatology data set was obtained from the UCI machine learning repository. The Clementine 12.0 software from IBM was used for modelling. To evaluate the model, we calculated its accuracy, sensitivity and specificity. The proposed model had an accuracy of 94.84% (standard deviation: 24.42) in correctly predicting ESD. Results indicated that this classifier could be useful, but it is strongly recommended that a combination of machine learning methods be explored, as it could be more useful for predicting ESD.
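The tree-building step is straightforward to reproduce with open tools; below is a hedged sketch using scikit-learn's CART implementation in place of Clementine (the file name and missing-value handling are assumptions about the UCI dermatology data):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# the UCI dermatology file has 34 attributes plus a 6-class label;
# missing ages are coded as '?' and are filled with -1 here
data = np.genfromtxt("dermatology.data", delimiter=",",
                     missing_values="?", filling_values=-1)
X, y = data[:, :-1], data[:, -1]

cart = DecisionTreeClassifier(criterion="gini", random_state=0)  # CART-style tree
print("10-fold accuracy:", cross_val_score(cart, X, y, cv=10).mean())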
Static versus dynamic sampling for data mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
John, G.H.; Langley, P.
1996-12-31
As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
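A minimal sketch of the dynamic idea, assuming a decision tree as the mining tool and a held-out set as the quality probe; the stopping rule watches the tool's own learning curve rather than a static test on the sample:

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def dynamic_sample_size(X, y, tol=0.005, start=100, growth=2):
    # Grow the training sample until held-out accuracy plateaus:
    # "probably close enough" in spirit, judged by the mining tool itself.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
    n, prev_acc = start, 0.0
    while n <= len(X_tr):
        model = DecisionTreeClassifier(random_state=0).fit(X_tr[:n], y_tr[:n])
        acc = model.score(X_val, y_val)
        if acc - prev_acc < tol:      # more data no longer helps enough
            return n
        prev_acc, n = acc, n * growth
    return len(X_tr)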
The construction of support vector machine classifier using the firefly algorithm.
Chao, Chih-Feng; Horng, Ming-Huwi
2015-01-01
The setting of parameters in support vector machines (SVMs) is very important for accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multipliers. The proposed method is called the firefly-based SVM (firefly-SVM). Feature selection is not considered, because SVM combined with feature selection is not well suited to multiclass classification, especially the one-against-all multiclass SVM. Both binary and multiclass classifications are explored in experiments. For binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with that of the original LIBSVM method combined with grid search and with the particle swarm optimization based SVM (PSO-SVM). The experimental results support the use of firefly-SVM for pattern classification with maximum accuracy.
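For orientation, a sketch of firefly-style search over two SVM hyperparameters (log10 C and log10 gamma); the published firefly-SVM also trains the Lagrangian multipliers directly, which is not reproduced here:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def firefly_svm(X, y, n_fireflies=10, n_iter=20, beta0=1.0, absorb=1.0,
                alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-2, -4], [3, 1], size=(n_fireflies, 2))  # [log10 C, log10 gamma]
    def brightness(p):
        return cross_val_score(SVC(C=10**p[0], gamma=10**p[1]), X, y, cv=3).mean()
    bright = np.array([brightness(p) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:               # i is attracted to brighter j
                    r2 = ((pos[i] - pos[j]) ** 2).sum()
                    beta = beta0 * np.exp(-absorb * r2) # attractiveness decays with distance
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, 2)
                    bright[i] = brightness(pos[i])
    best = pos[np.argmax(bright)]
    return 10 ** best[0], 10 ** best[1]                 # (C, gamma)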
A hybrid feature selection method using multiclass SVM for diagnosis of erythemato-squamous disease
NASA Astrophysics Data System (ADS)
Maryam, Setiawan, Noor Akhmad; Wahyunggoro, Oyas
2017-08-01
The diagnosis of erythemato-squamous disease is a complex problem, and the disease is difficult to detect in dermatology; moreover, it is a major cause of skin cancer. Data mining in the medical field helps experts diagnose precisely, accurately, and inexpensively. In this research, we use data mining to develop a diagnosis model based on a multiclass SVM with a novel hybrid feature selection method for diagnosing erythemato-squamous disease. Our hybrid feature selection method, named ChiGA (Chi Square and Genetic Algorithm), combines the advantages of filter and wrapper methods to select the optimal feature subset from the original features. Chi-square is used as the filter method to remove redundant features, and GA is used as the wrapper method to select the ideal feature subset, with SVM as the classifier. Experiments were performed with 10-fold cross-validation on the erythemato-squamous disease dataset taken from the University of California Irvine (UCI) machine learning database. The experimental results show that the proposed multiclass SVM model with chi-square and GA gives an optimal feature subset: 18 optimal features with 99.18% accuracy.
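A compact filter-then-wrapper sketch in the spirit of ChiGA: chi-square pre-filters features, then a tiny steady-state GA searches feature bitmasks with SVM cross-validation accuracy as fitness. Population size, mutation rate, and the SVM settings are illustrative assumptions, not the paper's values:

import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def chiga(X, y, k_filter=20, pop=20, gens=30, seed=0):
    # chi2 filter (requires non-negative features, as in the dermatology data)
    keep = SelectKBest(chi2, k=min(k_filter, X.shape[1])).fit(X, y) \
               .get_support(indices=True)
    Xf = X[:, keep]
    rng = np.random.default_rng(seed)
    masks = rng.random((pop, Xf.shape[1])) < 0.5          # initial population of bitmasks
    fit = lambda m: cross_val_score(SVC(), Xf[:, m], y, cv=3).mean() if m.any() else 0.0
    scores = np.array([fit(m) for m in masks])
    for _ in range(gens):
        i, j = np.argsort(scores)[-2:]                    # two fittest parents
        cut = rng.integers(1, Xf.shape[1])
        child = np.concatenate([masks[i][:cut], masks[j][cut:]])  # one-point crossover
        child = child ^ (rng.random(child.shape) < 0.05)  # bit-flip mutation
        worst, cs = np.argmin(scores), fit(child)
        if cs > scores[worst]:                            # steady-state replacement
            masks[worst], scores[worst] = child, cs
    return keep[masks[np.argmax(scores)]]                 # indices of selected features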
Gorzalczany, Marian B; Rudzinski, Filip
2017-06-07
This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain, during learning, to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network, working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters), to automatically generate collections of multiprototypes able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.
Olaechea, P M; Álvarez-Lerma, F; Palomar, M; Gimeno, R; Gracia, M P; Mas, N; Rivas, R; Seijas, I; Nuvials, X; Catalán, M
2016-05-01
To describe the case-mix of patients admitted to intensive care units (ICUs) in Spain during the period 2006-2011 and to assess changes in ICU mortality according to severity level. Secondary analysis of data obtained from the ENVIN-HELICS registry. Observational prospective study. Spanish ICUs. Patients admitted for over 24 h. None. Data for each of the participating hospitals and ICUs were recorded, as well as data characterizing the case-mix and the individual outcome of each patient. The study period was divided into two intervals, from 2006 to 2008 (period 1) and from 2009 to 2011 (period 2). Multilevel and multivariate models were used for the analysis of mortality and were performed in each stratum of severity. The study population included 142,859 patients admitted to 188 adult ICUs. There was an increase in the mean age of the patients and in the percentage of patients >79 years (11.2% vs. 12.7%, P<0.001). The mean APACHE II score also increased, from 14.35±8.29 to 14.72±8.43 (P<0.001). The crude overall intra-ICU mortality remained unchanged (11.4%), but the adjusted mortality rate in patients with APACHE II scores between 11 and 25 decreased modestly in recent years (12.3% vs. 11.6%, odds ratio=0.931, 95% CI 0.883-0.982; P=0.008). This study provides observational longitudinal data on the case-mix of patients admitted to Spanish ICUs. A slight reduction in ICU mortality rate was observed among patients with an intermediate severity level. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.
Observed and Simulated Urban Heat Island and Urban Cool Island in Las Vegas
NASA Astrophysics Data System (ADS)
Sauceda, Daniel O.
This research investigates the urban climate of Las Vegas and establishes long-term trends relative to the regional climate in an attempt to identify climate disturbances strictly related to urban growth. An experimental surface station network (DRI-UHI) of low-cost surface temperature (T2m) and relative humidity (RH) sensors was designed to cover under-sampled low-intensity residential urban areas, as well as to complement the in-city and surrounding rural areas. In addition to the analysis of the surface station data, high-resolution gridded data products (GDPs) from Daymet (1 km) and PRISM (800 m) and results from numerical simulations were used to further characterize Las Vegas climate trends. The Weather Research and Forecasting (WRF) model was coupled with three different models, the Noah Land Surface Model (LSM) and single- and multi-layer urban canopy models (UCMs), to assess urban-related climate disturbances as well as model sensitivity and the ability to characterize diurnal variability and rural/urban thermal contrasts. The simulations used a 1 km grid size for five one-month-long hindcast simulations during November 2012: (i) using the Noah LSM without UCM treatment, (ii) same as (i) with a single-layer UCM (UCM1), (iii) same as (i) with a multi-layer UCM (UCM2), (iv) removing the City of Las Vegas (NC) and replacing it with the predominant land cover (shrub), and (v) same as (ii) with the albedo of rooftops increased from 0.20 to 0.65, a potential adaptation scenario known as "white roofing". T2m long-term trends showed a regional warming of minimum temperatures (Tmin) and negligible trends in maximum temperatures (Tmax). By isolating the regional temperature trends, an observed urban heat island (UHI) of ~1.63°C was identified, as well as a daytime urban cool island (UCI) of ~0.15°C. GDPs agree with the temperature trends but tend to underpredict UHI intensity by ~1.05°C. The WRF-UCM showed strong correlations with observed T2m (0.85 < rho < 0.95) and vapor pressure (ea; 0.83 < rho < 0.88), and moderate-to-strong correlations for RH (0.64 < rho < 0.81) at the 95% confidence level. UCM1 shows the best skill and adequately simulates most of the observed UHI and UCI characteristics. Differences of LSM, UCM1, and UCM2 minus NC show simulated effects of warmer in-city Tmin for LSM and UCM2, and cooler in-city Tmax for UCM1 and UCM2. Finally, the white roofing scenario for Las Vegas was not found to significantly affect the UHI but has the potential to reduce daytime temperature by 1-2°C.
Chen, Zewei; Zhang, Xin; Zhang, Zhuoyong
2016-12-01
Timely risk assessment of chronic kidney disease (CKD) and proper community-based CKD monitoring are important to protect patients at potential risk from further kidney injury. Because many symptoms are associated with the progressive development of CKD, evaluating CKD risk from a set of clinical symptom data coupled with multivariate models is a feasible method for CKD prevention and would be useful for community-based CKD monitoring. Three commonly used multivariate models, i.e., K-nearest neighbor (KNN), support vector machine (SVM), and soft independent modeling of class analogy (SIMCA), were used to evaluate the risk of 386 patients based on clinical data taken from the UCI machine learning repository. Different types of composite data, in which proportional disturbances were added to simulate measurement deviations caused by environmental and instrument noise, were also used to evaluate the feasibility and robustness of these models in CKD risk assessment. For the original data set, the three multivariate models differentiated patients with CKD from those without with overall accuracies above 93%. KNN and SVM performed better than SIMCA in this study. For the composite data set, the SVM model had the best ability to tolerate noise disturbance and was thus more robust than the other two models. Using a clinical symptom data set coupled with multivariate models has been shown to be a feasible approach for assessing patients at potential CKD risk. The SVM model can be used as a useful and robust tool in this setting.
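A brief sketch of the comparison protocol, assuming numeric feature matrices X, y already extracted from the UCI CKD records; SIMCA has no standard scikit-learn implementation and is omitted here:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def robustness_check(X, y, noise_levels=(0.0, 0.05, 0.10), seed=0):
    rng = np.random.default_rng(seed)
    models = {"KNN": KNeighborsClassifier(5), "SVM": SVC(kernel="rbf")}
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)
        for eps in noise_levels:
            # proportional disturbance simulating measurement deviations
            Xn = X * (1 + eps * rng.standard_normal(X.shape))
            acc = cross_val_score(pipe, Xn, y, cv=5).mean()
            print(f"{name} at {eps:.0%} noise: {acc:.3f}")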
NASA’s Hubble Telescope Finds Potential Kuiper Belt Targets for New Horizons Pluto Mission
2017-12-08
This is an artist’s impression of a Kuiper Belt object (KBO), located on the outer rim of our solar system at a staggering distance of 4 billion miles from the Sun. A HST survey uncovered three KBOs that are potentially reachable by NASA’s New Horizons spacecraft after it passes by Pluto in mid-2015. Credit: NASA, ESA, and G. Bacon (STScI) --- Peering out to the dim, outer reaches of our solar system, NASA’s Hubble Space Telescope has uncovered three Kuiper Belt objects (KBOs) the agency’s New Horizons spacecraft could potentially visit after it flies by Pluto in July 2015. The KBOs were detected through a dedicated Hubble observing program by a New Horizons search team that was awarded telescope time for this purpose. “This has been a very challenging search and it’s great that in the end Hubble could accomplish a detection – one NASA mission helping another,” said Alan Stern of the Southwest Research Institute (SwRI) in Boulder, Colorado, principal investigator of the New Horizons mission. The Kuiper Belt is a vast rim of primordial debris encircling our solar system. KBOs belong to a unique class of solar system objects that has never been visited by spacecraft and which contain clues to the origin of our solar system. The KBOs Hubble found are each about 10 times larger than typical comets, but only about 1-2 percent of the size of Pluto. Unlike asteroids, KBOs have not been heated by the sun and are thought to represent a pristine, well preserved deep-freeze sample of what the outer solar system was like following its birth 4.6 billion years ago. The KBOs found in the Hubble data are thought to be the building blocks of dwarf planets such as Pluto.
Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Jamaluddin; Siringoringo, Rimbun
2017-12-01
Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts in this method successfully improves its performance on almost all classification problems. The main drawback of FkNN is that its parameters, the number of neighbors (k) and the fuzzy strength (m), are difficult to determine. Both parameters are very sensitive, and no theory or guideline can deduce what proper values of m and k should be, making FkNN difficult to control. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of k and m. MPSO is based on the constriction factor method, an improvement of PSO intended to avoid premature convergence to local optima. The proposed model was tested on the German Credit Dataset from the UCI Machine Learning Repository, which is widely applied to classification problems. Applying MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the proposed model achieves better classification performance than the FkNN model alone: the proposed model has an accuracy of 81%, versus 70% for FkNN. Finally, the proposed model is compared with two other classifiers, Naive Bayes and decision tree; it performs better, with Naive Bayes reaching 75% accuracy and the decision tree 70%.
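The constriction factor at the heart of MPSO is easy to state in code. Below is a hedged sketch that maximizes a user-supplied fitness, e.g. a hypothetical fknn_cv_accuracy(k, m) cross-validation scorer (an assumed helper, not from the paper), over bounded parameters:

import numpy as np

def constriction_pso(fitness, lo, hi, n_particles=20, n_iter=50, seed=0):
    # Clerc-Kennedy constriction-factor PSO: with c1 = c2 = 2.05 (phi = 4.1)
    # the factor chi ~ 0.7298 damps velocities and discourages premature
    # convergence to local optima.
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2 - phi - np.sqrt(phi ** 2 - 4 * phi))
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)]
    return g   # e.g. g = [k, m]; round k to the nearest integer before use

# usage sketch:
# k, m = constriction_pso(lambda p: fknn_cv_accuracy(int(p[0]), p[1]),
#                         lo=[1, 1.1], hi=[30, 3])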
Hernández-Tejedor, Alberto; Cabré-Pericas, Lluís; Martín-Delgado, María Cruz; Leal-Micharet, Ana María; Algora-Weber, Alejandro
2015-06-01
The prognosis of a patient who deteriorates during a prolonged intensive care unit (ICU) stay is difficult to predict. We analyze the prognostic value of the serialized Sequential Organ Failure Assessment (SOFA) score and other variables in the early days after a complication, and build a new predictive score. The EPIPUSE (Evolución y pronóstico de los pacientes con ingreso prolongado en UCI que sufren un empeoramiento; Evolution and prognosis of long intensive care unit stay patients suffering a deterioration) study is a prospective, observational study with a 3-month recruitment period in 75 Spanish ICUs. We focused on patients admitted to the ICU for 7 days or more who suffered complications or adverse events involving worsening organ dysfunction. Demographics, clinical variables, and serialized SOFA scores after a supervening clinical deterioration were recorded. Univariate and multivariate analyses were performed, and a predictive model was created from the most discriminating variables. We included 589 patients who experienced 777 episodes of severe complications or adverse events. The entire sample was randomly divided into two subsamples, one for development (528 cases) and the other for validation (249 cases). The predictive model maximizing specificity is calculated as minimum SOFA + 2 * cardiovascular risk factors + 2 * history of any oncologic disease or immunosuppressive treatment + 3 * dependence for basic activities of daily living. The area under the receiver operating characteristic curve is 0.82. A 14-point cutoff has a positive predictive value of 100% (92.7%-100%) and a negative predictive value of 51% (46.4%-55.5%) for death. The EPIPUSE model can predict mortality with a specificity and positive predictive value of 99% in some groups of patients. Copyright © 2015 Elsevier Inc. All rights reserved.
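For clarity, the published score can be written as a small function; the variable encodings (count vs. indicator for the risk-factor term) are assumptions, since the abstract does not spell them out:

def epipuse_score(min_sofa, n_cv_risk_factors, onco_or_immunosupp, adl_dependent):
    # EPIPUSE score as given in the abstract; a value >= 14 predicted death
    # with 100% positive predictive value in the development sample.
    return (min_sofa
            + 2 * n_cv_risk_factors          # cardiovascular risk factors (assumed a count)
            + 2 * int(onco_or_immunosupp)    # oncologic disease or immunosuppressive treatment
            + 3 * int(adl_dependent))        # dependence for basic activities of daily living

# example: epipuse_score(10, 2, True, False) == 16, above the 14-point cutoff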
Jover-Sancho, C; Romero-García, M; Delgado-Hito, P; de la Cueva-Ariza, L; Solà-Solé, N; Acosta-Mejuto, B; Ricart-Basagaña, M T; Solà-Ribó, M; Juandó-Prats, C L
2015-01-01
To explore convergences and divergences between the perceptions of nurses and of critically ill patients regarding satisfactory care given and received. This is part of a larger qualitative study based on Grounded Theory, carried out in 3 intensive care units with 34 boxes. Theoretical sampling yielded n=19 patients and n=7 nurses after data saturation. Recruitment of patients fitting the elderly and long-stay profiles was prolonged owing to the low incidence of cases. Data collection consisted of in-depth interviews with critically ill patients, a discussion group of nurses expert in critical patient care, and a field diary. Thematic analysis followed the Grounded Theory of Strauss and Corbin: open, axial and selective coding. The analysis met the rigor criteria of Guba and Lincoln, the quality criteria of Calderón, and the ethical reflexivity of Gastaldo and McKeever. A favorable report was obtained from the hospital's ethics committee, with informed consent from the participants. Four matching categories were found: professional skill, human care, technical care, and continuity of care. The combination of these elements creates feelings of security and calm and of being treated as a person, allowing the patient a close and trusting relationship with the nurse, who provides individualized care. No divergent categories were found. Nurses' perceptions of care match those of critically ill patients in both the definition of satisfactory care and its dimensions. Copyright © 2014 Elsevier España, S.L.U. y SEEIUC. All rights reserved.
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. By global methods we mean those that construct a model for the whole problem space using all available observations; local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells using an unsupervised vector quantization algorithm, LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, support vector machines (SVM) and k-nearest neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance compared with these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
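A minimal sketch of the two-stage idea, with scikit-learn's KMeans standing in for the LBG quantizer and a Gaussian naive Bayes model as the per-cell local rule (an illustrative simplification, not the authors' exact Bayes-inspired scheme):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

class LocalGlobalClassifier:
    # Quantize the space into cells, then fit one simple Bayes classifier per cell.
    def __init__(self, n_cells=8):
        self.quantizer = KMeans(n_clusters=n_cells, n_init=10, random_state=0)
        self.local = {}

    def fit(self, X, y):
        cells = self.quantizer.fit_predict(X)
        for c in np.unique(cells):
            idx = cells == c
            # a pure cell just remembers its single class label
            self.local[c] = y[idx][0] if len(np.unique(y[idx])) == 1 \
                            else GaussianNB().fit(X[idx], y[idx])
        return self

    def predict(self, X):
        cells = self.quantizer.predict(X)
        out = np.empty(len(X), dtype=object)
        for i, c in enumerate(cells):
            m = self.local[c]
            out[i] = m.predict(X[i:i+1])[0] if hasattr(m, "predict") else m
        return out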
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to visualize complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method for high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in a two-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed with a large high-power marine diesel engine to validate the proposed method for malfunction classification, with multi-sensor signals collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
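The pipeline is easy to prototype; in this sketch the ANOVA F-statistic is only a stand-in for the paper's feature subset score criterion:

import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

def fss_tsne(X, y, keep=10, perplexity=30, seed=0):
    # select a feature subset before t-SNE so that irrelevant features
    # do not distort the 2-D visualization
    scores, _ = f_classif(X, y)
    top = np.argsort(scores)[-keep:]
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=seed).fit_transform(X[:, top])
    return emb, top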
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. Data mining and knowledge discovery techniques that extract the information contained in multidimensional datasets play a significant role in exploiting their full benefit. However, the large number of features in high-dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection is commonly used when building robust machine learning models, to select a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) with the K-Helly property of hypergraph representations, is designed to identify the optimal feature subset, or reduct, for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository show the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, showing that RSKHT is computationally attractive and flexible over massive datasets.
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. However, these applications become worthless if the Boolean information system contains missing data due to error, security measures or mishandling. Few studies have focused on handling partially incomplete soft sets, and none of them achieves a high accuracy rate in predicting missing data. Among previous approaches, the data filling approach for incomplete soft sets (DFIS) has shown the best performance, but its accuracy remains the main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, named ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus on the reliability of associations among parameters in the soft set. Experimental results on small data, four UCI benchmark datasets, and the causality workbench lung cancer (LUCAP2) data show that ADFIS achieves better accuracy than DFIS.
An extension of the receiver operating characteristic curve and AUC-optimal classification.
Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto
2012-10-01
While most proposed methods for solving classification problems focus on minimizing the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, the partial area under the ROC curve. For example, in medical screening, a high true-positive rate at a fixed low false-positive rate is preferable, and thus the partial AUC corresponding to low false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions to partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets from the UCI repository.
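The partial AUC the authors target can be probed directly with scikit-learn, which restricts the integration range via max_fpr and returns the standardized (McClish-corrected) value; the toy scores below are illustrative only:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.35, 0.8, 0.4, 0.6, 0.7, 0.9])

full_auc = roc_auc_score(y_true, y_score)
# partial AUC restricted to false-positive rates <= 0.2, the region that
# matters in screening settings
partial_auc = roc_auc_score(y_true, y_score, max_fpr=0.2)
print(full_auc, partial_auc)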
Classification as clustering: a Pareto cooperative-competitive GP approach.
McIntyre, Andrew R; Heywood, Malcolm I
2011-01-01
Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prather, Michael J.; Hsu, Juno; Nicolau, Alex
Atmospheric chemistry controls the abundances and hence climate forcing of important greenhouse gases, including N2O, CH4, HFCs, CFCs, and O3. Attributing climate change to human activities requires, at a minimum, accurate models of the chemistry and circulation of the atmosphere that relate emissions to abundances. This DOE-funded research provided realistic, yet computationally optimized and affordable, photochemical modules for the Community Earth System Model (CESM) that augment the CESM capability to explore the uncertainty in future stratospheric-tropospheric ozone, stratospheric circulation, and thus the lifetimes of chemically controlled greenhouse gases from climate simulations. To this end, we successfully implemented the Fast-J (radiation algorithm determining key chemical photolysis rates) and Linoz v3.0 (linearized photochemistry for interactive O3, N2O, NOy and CH4) packages in LLNL-CESM, and for the first time demonstrated how changes in the O2 photolysis rate within its uncertainty range can significantly impact stratospheric climate and ozone abundances. On the UCI side, this proposal also helped LLNL develop a CAM-Superfast Chemistry model that was implemented for the IPCC AR5 and contributed chemical-climate simulations to CMIP5.
A new optimized GA-RBF neural network algorithm.
Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network, choosing a new way of hybrid encoding and simultaneous optimization. Binary encoding encodes the number of hidden-layer neurons, and real encoding encodes the connection weights; the number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete, so the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency on complex problems and also improves recognition precision, which proves that the new algorithm is valid.
Allington, James; Spencer, Steven J; Klein, Julius; Buell, Meghan; Reinkensmeyer, David J; Bobrow, James
2011-01-01
The robot described in this paper, SUE (Supinator Extender), adds forearm/wrist rehabilitation functionality to the UCI BONES exoskeleton robot and to the ArmeoSpring rehabilitation device. SUE is a 2-DOF serial chain that can measure and assist forearm supination-pronation and wrist flexion-extension. The large power to weight ratio of pneumatic actuators allows SUE to achieve the forces needed for rehabilitation therapy while remaining lightweight enough to be carried by BONES and ArmeoSpring. Each degree of freedom has a range of 90 degrees, and a nominal torque of 2 ft-lbs. The cylinders are mounted away from the patient's body on the lateral aspect of the arm. This is to prevent the danger of a collision and maximize the workspace of the arm robot. The rotation axis used for supination-pronation is a small bearing just below the subject's wrist. The flexion-extension motion is actuated by a cantilevered pneumatic cylinder, which allows the palm of the hand to remain open. Data are presented that demonstrate the ability of SUE to measure and cancel forearm/wrist passive tone, thereby extending the active range of motion for people with stroke.
Lee, Abraham; Wirtanen, Erik
2012-07-01
The growth of biomedical engineering at The Henry Samueli School of Engineering at the University of California, Irvine (UCI) has been rapid since the Center for Biomedical Engineering was first formed in 1998 [and was later renamed as the Department of Biomedical Engineering (BME) in 2002]. Our current mission statement, “Inspiring Engineering Minds to Advance Human Health,” serves as a reminder of why we exist, what we do, and the core principles that we value and by which we abide. BME exists to advance the state of human health via engineering innovation and practices. To attain our goal, we are empowering our faculty to inspire and mobilize our students to address health problems. We treasure the human being, particularly the human mind and health. We believe that BME is where minds are nurtured, challenged, and disciplined, and it is also where the health of the human is held as a core mission value that deserves our utmost priority (Figure 1). Advancing human health is not a theoretical practice; it requires bridging between disciplines (engineering and medicine) and between communities (academic and industry).
Ekerete, P P
1997-01-01
The Expanded Programme on Immunization (EPI) (renamed the National Programme on Immunization (NPI) in 1996) and Oral Rehydration Therapy (ORT) were launched in Nigeria in 1979. The goal of EPI was Universal Childhood Immunization (UCI) by 1990, that is, to vaccinate 80% of all children aged 0-2 years by 1990; 80% of all pregnant women were also expected to be vaccinated with tetanus toxoid vaccine. Oral rehydration therapy was designed to teach parents with children aged 0-5 years how to prepare and use a salt-sugar solution to rehydrate children dehydrated by diarrhoea. Nigeria set up Partners-in-Health to mobilize and motivate mothers to accept the programme. In 1990 a national coverage survey was conducted to assess the level of attainment; the results show that some states were able to reach the target and some were not. It therefore became necessary to evaluate the contribution of the promotional elements adopted by Partners-in-Health to motivate mothers to accept the programme. Respondents were asked to state the degree to which these elements motivated them to accept the programme. The data were collected and processed through a Likert rating scale and a t-test procedure for significance between two sample means. The study revealed that some elements motivated mothers very strongly, others strongly, and most moderately or weakly, with health workers as the major source of motivation. The study also revealed that health workers alone cannot sufficiently motivate mothers without the help of religious leaders, traditional leaders, the mass media, etc. It is therefore recommended that health workers be used intensively along with other promotional elements to promote the NPI/ORT programme in Nigeria.
NASA Astrophysics Data System (ADS)
Nguyen, P.; Sorooshian, S.; Hsu, K. L.; Gao, X.; AghaKouchak, A.; Braithwaite, D.; Thorstensen, A. R.; Ashouri, H.; Tran, H.; Huynh, P.; Palacios, T.
2016-12-01
The Center for Hydrometeorology and Remote Sensing (CHRS), University of California, Irvine has recently developed CHRS RainSphere (hosted at http://rainsphere.eng.uci.edu) for scientific studies and applications using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks - Climate Data Record (PERSIANN-CDR, Ashouri et al. 2015). PERSIANN-CDR is a long-term (33+ years), high-resolution (daily, 0.25 degree) global satellite precipitation dataset useful for climatological studies and water resources applications. CHRS RainSphere allows users to visualize and query spatiotemporal statistics of global daily satellite precipitation for the past three decades. With a couple of mouse clicks, users can easily obtain time series, spatial plots, and basic trend analyses of rainfall for various spatial domains of interest, such as a location, watershed, basin, political division or country, at yearly, monthly, monthly-by-year or daily aggregation. The Mann-Kendall test is implemented in CHRS RainSphere to investigate statistically whether there is a significant increasing or decreasing rainfall trend at a location or over a specific spatial domain. CHRS RainSphere has a range of capabilities and should appeal to a broad spectrum of users, including climate scientists, water resources managers and planners, and engineers; it can also be a useful educational tool for the general public to investigate climate change and variability. A video tutorial on CHRS RainSphere is available at https://www.youtube.com/watch?v=eI2-f88iGlY&feature=youtu.be. A demonstration of CHRS RainSphere will be included in the presentation.
SANA NetGO: a combinatorial approach to using Gene Ontology (GO) terms to score network alignments.
Hayes, Wayne B; Mamano, Nil
2018-04-15
Gene Ontology (GO) terms are frequently used to score alignments between protein-protein interaction (PPI) networks. Methods exist to measure GO similarity between proteins in isolation, but proteins in a network alignment are not isolated: each pairing is dependent on every other via the alignment itself. Existing measures fail to take into account the frequency of GO terms across networks, instead imposing arbitrary rules on when to allow GO terms. Here we develop NetGO, a new measure that naturally weighs infrequent, informative GO terms more heavily than frequent, less informative GO terms, without arbitrary cutoffs, instead downweighting GO terms according to their frequency in the networks being aligned. This is a global measure applicable only to alignments, independent of pairwise GO measures, in the same sense that the edge-based EC or S3 scores are global measures of topological similarity independent of pairwise topological similarities. We demonstrate the superiority of NetGO in alignments of predetermined quality and show that NetGO correlates with alignment quality better than any existing GO-based alignment measure. We also demonstrate that NetGO provides a measure of taxonomic similarity between species consistent with existing taxonomic measures, a feature not shared with existing GO-based network alignment measures. Finally, we re-score alignments produced by almost a dozen aligners from a previous study and show that NetGO does a better job of separating good alignments from bad ones. Available as part of SANA. whayes@uci.edu. Supplementary data are available at Bioinformatics online.
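To illustrate the frequency-weighting idea (a sketch only; NetGO's exact weighting is defined in the paper and in SANA's code), one can down-weight each GO term by its count across both networks and score an alignment over its aligned pairs:

from collections import Counter

def netgo_style_score(alignment, go1, go2):
    # alignment: list of (protein_in_G1, protein_in_G2) pairs
    # go1, go2:  dicts mapping protein -> set of GO term ids
    # weight each term by 1/frequency across both networks, so rare,
    # informative terms contribute more than ubiquitous ones
    freq = Counter(t for ann in list(go1.values()) + list(go2.values())
                   for t in ann)
    score = 0.0
    for u, v in alignment:
        shared = go1.get(u, set()) & go2.get(v, set())
        score += sum(1.0 / freq[t] for t in shared)
    return score / max(len(alignment), 1)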
Wetting Transitions in ^4He/^3He Mixtures on Cesium
NASA Astrophysics Data System (ADS)
Ross, David
1997-03-01
Over the last several years, helium on cesium has proven to be an ideal model system for the study of wetting and wetting transitions [E. Cheng, M.W. Cole, W.F. Saam, and J. Treiner, Phys. Rev. Lett. 67, 1007 (1991); J.E. Rutledge and P. Taborek, Phys. Rev. Lett. 69, 937 (1992); D. Ross, J.E. Rutledge, and P. Taborek, Phys. Rev. Lett. 76, 2350 (1996)]. This presentation will focus on the adsorption of binary liquid mixtures of the helium isotopes, ^3He and ^4He, on cesium substrates over a range of temperatures extending from 0.2 K to 1.0 K. The results, spanning ^3He concentrations from 0 to 1, constitute the first experimentally constructed complete wetting phase diagram for a two-component liquid at a weakly binding substrate. The wetting behavior is particularly interesting in the vicinity of bulk liquid phase separation. A wetting transition of the ^4He-rich liquid between the ^3He-rich liquid and the cesium substrate has been found, with Tw = 0.53 K. The surface phase transition line associated with this wetting transition is found to extend to both sides of the bulk phase separation line: on the ^3He-rich side it is a prewetting line, and on the ^4He-rich side it becomes a line of triple-point induced dewetting transitions. General arguments indicate that this behavior should be typical of a large class of binary liquid mixtures at weakly binding substrates.
Three-dimensional diffuse optical mammography with ultrasound localization in a human subject
NASA Astrophysics Data System (ADS)
Holboke, Monica J.; Tromberg, Bruce J.; Li, Xingde; Shah, Natasha; Fishkin, Joshua B.; Kidney, D.; Butler, J.; Chance, Britton; Yodh, Arjun G.
2000-04-01
We describe an approach that combines clinical ultrasound and photon migration techniques to enhance the sensitivity and information content of diffuse optical tomography. Measurements were performed on a postmenopausal woman with a single 1.8 × 0.9 cm malignant ductal carcinoma in situ approximately 7.4 mm beneath the skin surface (UCI IRB protocol 95-563). The ultrasound-derived information about tumor geometry enabled us to segment the breast tissue into tumor and background regions. Optical data were obtained with a multifrequency, multiwavelength hand-held frequency-domain photon migration backscattering probe. The optical properties of the tumor and background were then computed using the ultrasound-derived geometrical constraints. An iterative perturbative approach, using parallel processing, provided quantitative information about scattering and absorption simultaneously, with the ability to incorporate and resolve complex boundary conditions and geometries. A three- to fourfold increase in the tumor absorption coefficient and a nearly 50% reduction in the scattering coefficient relative to background were observed (λ = 674, 782, 803, and 849 nm). Calculations of the mean physiological parameters reveal fourfold greater tumor total hemoglobin concentration [Hbtot] than normal breast (67 µM vs 16 µM) and tumor hemoglobin oxygen saturation (SOx) values of 63% (vs 73% and 68% in the region surrounding the tumor and the opposite normal tissue, respectively). Comparison of semi-infinite to heterogeneous models shows superior tumor/background contrast for the latter in both absorption and scattering. Sensitivity studies assessing the impact of tumor size and refractive index assumptions, as well as scan direction, demonstrate modest effects on recovered properties.
Sudha, M
2017-09-27
As a recent trend, various computational intelligence and machine learning approaches have been used to mine inferences hidden in large clinical databases to assist clinicians in strategic decision making. In any target data, irrelevant information may be detrimental, confusing the mining algorithm and degrading the prediction outcome. To address this issue, this study identifies an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In the proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough set based genetic algorithm is employed in the pre-processing phase, and a backpropagation neural network is applied in the training and testing phases. ASIC has two phases: the first handles outliers, noisy data, and missing values to obtain qualitative target data, and generates appropriate attribute reduct sets from the input data using a rough-computing-based genetic algorithm centred on a relative fitness function measure; the succeeding phase involves training and testing a backpropagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with the breast cancer, fertility diagnosis and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility and heart disease diagnosis, respectively.
NASA Astrophysics Data System (ADS)
Hosford, Kyle S.
Clean distributed generation power plants can provide a much needed balance to our energy infrastructure in the future. A high-temperature fuel cell and an absorption chiller can be integrated to create an ideal combined cooling, heat, and power system that is efficient, quiet, fuel flexible, scalable, and environmentally friendly. With few real-world installations of this type, research remains to identify the best integration and operating strategy and to evaluate the economic viability and market potential of this system. This thesis informs and documents the design of a high-temperature fuel cell and absorption chiller demonstration system at a generic office building on the University of California, Irvine (UCI) campus. This work details the extension of prior theoretical work to a financially-viable power purchase agreement (PPA) with regard to system design, equipment sizing, and operating strategy. This work also addresses the metering and monitoring for the system showcase and research and details the development of a MATLAB code to evaluate the economics associated with different equipment selections, building loads, and economic parameters. The series configuration of a high-temperature fuel cell, heat recovery unit, and absorption chiller with chiller exhaust recirculation was identified as the optimal system design for the installation in terms of efficiency, controls, ducting, and cost. The initial economic results show that high-temperature fuel cell and absorption chiller systems are already economically competitive with utility-purchased generation, and a brief case study of a southern California hospital shows that the systems are scalable and viable for larger stationary power applications.
Nasr, Ramzi; Vernica, Rares; Li, Chen; Baldi, Pierre
2012-01-01
In ligand-based screening, retrosynthesis, and other chemoinformatics applications, one often seeks to search large databases of molecules in order to retrieve molecules that are similar to a given query. With the expanding size of molecular databases, the efficiency and scalability of data structures and algorithms for chemical searches are becoming increasingly important. Remarkably, both the chemoinformatics and information retrieval communities have converged on similar solutions whereby molecules or documents are represented by binary vectors, or fingerprints, indexing their substructures, such as labeled paths for molecules and n-grams for text, with the same Jaccard-Tanimoto similarity measure. As a result, similarity search methods from one field can be adapted to the other. Here we adapt recent, state-of-the-art, inverted index methods from information retrieval to speed up similarity searches in chemoinformatics. Our results show a several-fold speed-up improvement over previous methods for both threshold searches and top-K searches. We also provide a mathematical analysis that allows one to predict the level of pruning achieved by the inverted index approach, and validate the quality of these predictions through simulation experiments. All results can be replicated using data freely downloadable from http://cdb.ics.uci.edu/. PMID:22462644
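A toy version of the inverted-index search with cardinality pruning illustrates the approach; real fingerprints are fixed-length bit vectors, represented here as Python sets of "on" bit positions:

from collections import defaultdict

def build_index(fps):
    # fps: list of sets of 'on' fingerprint bits; returns bit -> molecule ids
    index = defaultdict(list)
    for mol_id, bits in enumerate(fps):
        for b in bits:
            index[b].append(mol_id)
    return index

def tanimoto_threshold_search(query, fps, index, t=0.7):
    # return molecules with Jaccard-Tanimoto similarity >= t to the query;
    # candidates come only from the query's own bits, and the bound
    # |A|*t <= |B| <= |A|/t prunes by fingerprint cardinality
    a = len(query)
    counts = defaultdict(int)
    for b in query:
        for mol_id in index[b]:
            counts[mol_id] += 1            # counts[mol_id] = |A intersect B|
    hits = []
    for mol_id, inter in counts.items():
        nb = len(fps[mol_id])
        if a * t <= nb <= a / t:           # cardinality pruning bound
            sim = inter / (a + nb - inter)
            if sim >= t:
                hits.append((mol_id, sim))
    return sorted(hits, key=lambda x: -x[1])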
ClusterCAD: a computational platform for type I modular polyketide synthase design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.
Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.
Knowledge mining from clinical datasets using rough sets and backpropagation neural network.
Nahato, Kindie Biredagn; Harichandran, Khanna Nehemiah; Arputharaj, Kannan
2015-01-01
The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist clinicians in decision making. The objective of this research is to build a classifier that predicts the presence or absence of a disease by learning from a minimal set of attributes extracted from the clinical dataset. In this work, a rough set indiscernibility relation method with a backpropagation neural network (RS-BPNN) is used. The work has two stages: the first handles missing values to obtain a smooth data set and selects appropriate attributes from the clinical dataset by the indiscernibility relation method; the second is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with the hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracies obtained with the proposed method are 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
PCA based feature reduction to improve the accuracy of decision tree c4.5 classification
NASA Astrophysics Data System (ADS)
Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.
2018-03-01
Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not remove irrelevant features when establishing the decision tree, which leads to a major problem in decision tree classification: over-fitting resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction, to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. In experiments on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework in terms of accuracy, specificity and precision. Experimental results show that the proposed framework enhances classification accuracy, reaching 90.70%.
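A hedged sketch of the framework with scikit-learn (whose trees are CART-based; the entropy criterion approximates C4.5's information-gain splitting, so this is an approximation of the paper's pipeline):

from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# PCA yields uncorrelated components, removing the redundant directions
# that encourage over-fitting in the tree
pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=0.95),   # keep 95% of the variance
                     DecisionTreeClassifier(criterion="entropy", random_state=0))
# acc = cross_val_score(pipe, X, y, cv=10).mean()   # X, y: cervical cancer data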
Wang, Xingmei; Liu, Shu; Liu, Zhipeng
2017-01-01
This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs in non-local spatial information and seriously degrades denoising performance in sonar images, is solved by introducing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect objects in sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve the searching ability and detection accuracy while further reducing the time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and the benchmark functions demonstrate the effectiveness and adaptability of the proposed method.
Liu, Zhipeng
2017-01-01
This paper proposes a combination of non-local spatial information and a quantum-inspired shuffled frog leaping algorithm to detect underwater objects in sonar images. Specifically, for the first time, the problem of an inappropriate filtering degree parameter, which commonly occurs in non-local spatial information and seriously degrades denoising performance in sonar images, is solved by introducing a novel filtering degree parameter. Then, a quantum-inspired shuffled frog leaping algorithm based on a new search mechanism (QSFLA-NSM) is proposed to detect objects in sonar images precisely and quickly. Each frog individual is directly encoded by real numbers, which greatly simplifies the evolution process of the quantum-inspired shuffled frog leaping algorithm (QSFLA). Meanwhile, a fitness function combining intra-class difference with inter-class difference is adopted to evaluate frog positions more accurately. On this basis, drawing on an analysis of quantum-behaved particle swarm optimization (QPSO) and the shuffled frog leaping algorithm (SFLA), a new search mechanism is developed to improve the searching ability and detection accuracy while further reducing the time complexity. Finally, the results of comparative experiments using the original sonar images, the UCI data sets and the benchmark functions demonstrate the effectiveness and adaptability of the proposed method. PMID:28542266
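For orientation, a minimal sketch of the classic shuffled frog leaping update that QSFLA-NSM builds on; the quantum-inspired encoding and the new search mechanism are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sfla(fitness, dim, frogs=20, memeplexes=4, sweeps=50, step=2.0):
    P = rng.uniform(-5, 5, (frogs, dim))                 # frog population
    for _ in range(sweeps):
        P = P[np.argsort([-fitness(p) for p in P])]      # sort, best first
        gbest = P[0].copy()
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)        # round-robin memeplex
            best, worst = idx[0], idx[-1]
            for guide in (P[best], gbest):               # local, then global
                cand = P[worst] + rng.random() * step * (guide - P[worst])
                if fitness(cand) > fitness(P[worst]):
                    P[worst] = cand
                    break
            else:
                P[worst] = rng.uniform(-5, 5, dim)       # censor: random frog
    return max(P, key=fitness)

sphere = lambda x: -np.sum(x ** 2)                       # maximize -||x||^2
print(sfla(sphere, dim=3))
```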
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
NASA Astrophysics Data System (ADS)
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the class of a new datum is normally determined by a simple majority vote, which may ignore the similarities among data and allow a double majority class, which can lead to misclassification. In this research, we propose an approach that resolves the majority vote issues by calculating the distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared to that of the original kNN method using several datasets from the UCI Machine Learning repository, Kaggle and Keel, such as ionosphere, iris, voice gender, lower back pain, and thyroid. In addition, the proposed method is also tested on real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN increases the classification accuracy of kNN, with an average accuracy gain of 2.45% on the test data and the highest gain, 3.71%, occurring on the lower back pain symptoms dataset. For the real data, the accuracy increase is as high as 5.16%.
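A minimal sketch of the LMKNN/DWKNN combination on which the proposed method is based: each class contributes the local mean of its k nearest neighbours, and classes are weighted by inverse distance to that mean. The exact weighting scheme of the paper is an assumption here, not reproduced.

```python
import numpy as np

def lm_dw_knn_predict(X_train, y_train, x, k=3):
    """Classify x by the inverse distance to each class's local mean."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)  # LMKNN prototype
        scores[c] = 1.0 / (np.linalg.norm(local_mean - x) + 1e-12)  # DW weight
    return max(scores, key=scores.get)

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 3])
y = np.repeat([0, 1], 50)
print(lm_dw_knn_predict(X, y, np.array([2.5, 2.5])))
```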
Disposition, metabolism and mass balance of [14C]apremilast following oral administration
Hoffmann, Matthew; Kumar, Gondi; Schafer, Peter; Cedzik, Dorota; Capone, Lori; Kei-Fong, Lai; Gu, Zheming; Heller, Dennis; Feng, Hao; Surapaneni, Sekhar; Laskin, Oscar; Wu, Anfan
2011-01-01
Apremilast is a novel, orally available small molecule that specifically inhibits PDE4 and thus modulates multiple pro- and anti-inflammatory mediators, and is currently under clinical development for the treatment of psoriasis and psoriatic arthritis. The pharmacokinetics and disposition of [14C]apremilast were investigated following a single oral dose (20 mg, 100 µCi) to healthy male subjects. Approximately 58% of the radioactive dose was excreted in urine, while faeces contained 39%. Mean Cmax, AUC0 and tmax values for apremilast in plasma were 333 ng/mL, 1970 ng·h/mL and 1.5 h, respectively. Apremilast was extensively metabolized via multiple pathways, with unchanged drug representing 45% of the circulating radioactivity and <7% of the excreted radioactivity. The predominant metabolite was O-desmethyl apremilast glucuronide, representing 39% of plasma radioactivity and 34% of excreted radioactivity. The only other radioactive components that represented >4% of the excreted radioactivity were O-demethylated apremilast and its hydrolysis product. Additional minor circulating and excreted compounds were formed via O-demethylation, O-deethylation, N-deacetylation, hydroxylation, glucuronidation and/or hydrolysis. The major metabolites were at least 50-fold less pharmacologically active than apremilast. Metabolic clearance of apremilast was the major route of elimination, while non-enzymatic hydrolysis and excretion of unchanged drug were involved to a lesser extent. PMID:21859393
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is some empirical evidence showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, it is not clear how sparse 1-norm SVMs can be, nor whether they yield a sparser representation than standard SVMs. In this paper we examine the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most the number of exact support vectors lying on the +1 and -1 discriminating surfaces, whereas in standard SVMs it equals the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most the rank of the sample matrix. A brief review of the geometry of linear programming and of the primal steepest-edge pricing simplex method is given, which allows us to prove the two upper bounds and to evaluate their tightness by experiments. Experimental results on toy data sets and UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.
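Since the 1-norm SVM analysed above is exactly a linear program, a minimal sketch with a linear kernel, solved via scipy.optimize.linprog, makes the sparseness directly observable. The regularization constant C and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(X, y, C=1.0):
    """min ||alpha||_1 + C*sum(xi)  s.t.  y_i(K_i @ alpha + b) >= 1 - xi_i."""
    n = len(y)
    K = X @ X.T                                    # linear kernel
    # variables: alpha+ (n), alpha- (n), b+, b-, xi (n); all nonnegative
    c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(n)])
    YK = y[:, None] * K
    A_ub = -np.hstack([YK, -YK, y[:, None], -y[:, None], np.eye(n)])
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(n), bounds=(0, None),
                  method="highs")
    z = res.x
    return z[:n] - z[n:2 * n], z[2 * n] - z[2 * n + 1]   # alpha, b

X = np.random.randn(40, 2)
y = np.sign(X[:, 0] + 0.1 * np.random.randn(40))
alpha, b = one_norm_svm(X, y)
print("nonzero coefficients:", int((np.abs(alpha) > 1e-8).sum()), "of", len(y))
```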
On the use of harmony search algorithm in the training of wavelet neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2015-10-01
Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
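A minimal sketch of plain harmony search maximizing a classification-accuracy fitness; the WNN is abstracted into a generic fitness function, and the disjoint-partition initialization of the proposed variant is omitted. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(fitness, dim, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000):
    memory = rng.uniform(-1, 1, (hms, dim))            # harmony memory
    scores = np.array([fitness(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                      # random selection
                new[j] = rng.uniform(-1, 1)
        s = fitness(new)
        worst = scores.argmin()
        if s > scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    return memory[scores.argmax()]

# Toy stand-in for a WNN: accuracy of a linear threshold unit.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
accuracy = lambda w: ((X @ w[:-1] + w[-1] > 0).astype(int) == y).mean()
print("train accuracy:", accuracy(harmony_search(accuracy, dim=5)))
```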
A flexible data-driven comorbidity feature extraction framework.
Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid
2016-06-01
Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with the commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains of 10.7% to 22.1% in predictive accuracy for CHF severity of condition prediction and 4.65% to 5.75% in diabetes readmission prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.
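A toy sketch of the clustering-based feature extraction idea: codes are clustered by co-occurrence and each patient is re-encoded as per-cluster diagnosis counts. The codes, the distance transform, and the cluster count are illustrative assumptions, not the paper's tuned pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy "patients", each a set of diagnostic codes (illustrative, not ICD-9-CM).
patients = [{"I50", "E11"}, {"I50", "N18"}, {"E11", "N18"}, {"I50", "E11", "N18"}]
codes = sorted(set().union(*patients))
M = np.array([[c in p for c in codes] for p in patients], dtype=float)

co = M.T @ M                                 # code-by-code co-occurrence counts
dist = co.max() - co                         # frequent pairs -> small distance
condensed = dist[np.triu_indices(len(codes), k=1)]
labels = fcluster(linkage(condensed), t=2, criterion="maxclust")

# Patient features: number of diagnoses falling in each code cluster.
features = np.array([[M[i, labels == k].sum() for k in (1, 2)]
                     for i in range(len(patients))])
print(dict(zip(codes, labels)), features, sep="\n")
```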
Booma, P M; Prabhakaran, S; Dhanalakshmi, R
2014-01-01
Microarray gene expression datasets have attracted great attention among molecular biologists, statisticians, and computer scientists. Data mining that extracts hidden and routine information from datasets fails to identify the most significant biological associations between genes. A heuristic search over standard biological processes measures only the gene expression level, threshold, and response time. Heuristic search identifies and mines the best biological solution, but the association process is not efficiently addressed. To monitor higher rates of expression between genes, a hierarchical clustering model is proposed in which the biological association between genes is measured simultaneously using a proximity measure of improved Pearson's correlation (PCPHC). Additionally, the Seed Augment algorithm adopts average linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared to existing gene expression analysis, the PCPHC model achieves better performance. Experimental evaluations are conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and the GenBank database in terms of execution time, size of pattern, significance level, biological association efficiency, and pattern quality.
Booma, P. M.; Prabhakaran, S.; Dhanalakshmi, R.
2014-01-01
Microarray gene expression datasets have attracted great attention among molecular biologists, statisticians, and computer scientists. Data mining that extracts hidden and routine information from datasets fails to identify the most significant biological associations between genes. A heuristic search over standard biological processes measures only the gene expression level, threshold, and response time. Heuristic search identifies and mines the best biological solution, but the association process is not efficiently addressed. To monitor higher rates of expression between genes, a hierarchical clustering model is proposed in which the biological association between genes is measured simultaneously using a proximity measure of improved Pearson's correlation (PCPHC). Additionally, the Seed Augment algorithm adopts average linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared to existing gene expression analysis, the PCPHC model achieves better performance. Experimental evaluations are conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and the GenBank database in terms of execution time, size of pattern, significance level, biological association efficiency, and pattern quality. PMID:25136661
NASA Astrophysics Data System (ADS)
Wild, Oliver; Sundet, Jostein K.; Prather, Michael J.; Isaksen, Ivar S. A.; Akimoto, Hajime; Browell, Edward V.; Oltmans, Samuel J.
2003-11-01
Two closely related chemical transport models (CTMs) employing the same high-resolution meteorological data (˜180 km × ˜180 km × ˜600 m) from the European Centre for Medium-Range Weather Forecasts are used to simulate the ozone total column and tropospheric distribution over the western Pacific region that was explored by the NASA Transport and Chemical Evolution over the Pacific (TRACE-P) measurement campaign in February-April 2001. We make extensive comparisons with ozone measurements from the lidar instrument on the NASA DC-8, with ozonesondes taken during the period around the Pacific Rim, and with TOMS total column ozone. These demonstrate that within the uncertainties of the meteorological data and the constraints of model resolution, the two CTMs (FRSGC/UCI and Oslo CTM2) can simulate the observed tropospheric ozone and do particularly well when realistic stratospheric ozone photochemistry is included. The greatest differences between the models and observations occur in the polluted boundary layer, where problems related to the simplified chemical mechanism and inadequate horizontal resolution are likely to have caused the net overestimation of about 10 ppb mole fraction. In the upper troposphere, the large variability driven by stratospheric intrusions makes agreement very sensitive to the timing of meteorological features.
ClusterCAD: a computational platform for type I modular polyketide synthase design
Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.; ...
2017-10-11
Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, which is central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate that this is not true for the Gaussian kernel and, consequently, that RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains by optimizing only two degrees of freedom (the hyperplane's intercept and its squared 2-norm) with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
ReactionMap: an efficient atom-mapping algorithm for chemical reactions.
Fooshee, David; Andronico, Alessio; Baldi, Pierre
2013-11-25
Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu .
Song, Weiran; Wang, Hui; Maguire, Paul; Nibouche, Omar
2018-06-07
Partial Least Squares Discriminant Analysis (PLS-DA) is one of the most effective multivariate analysis methods for spectral data analysis, which extracts latent variables and uses them to predict responses. In particular, it is an effective method for handling high-dimensional and collinear spectral data. However, PLS-DA does not explicitly address data multimodality, i.e., within-class multimodal distribution of data. In this paper, we present a novel method termed nearest clusters based PLS-DA (NCPLS-DA) for addressing the multimodality and nonlinearity issues explicitly and improving the performance of PLS-DA on spectral data classification. The new method applies hierarchical clustering to divide samples into clusters and calculates the corresponding centre of every cluster. For a given query point, only clusters whose centres are nearest to such a query point are used for PLS-DA. Such a method can provide a simple and effective tool for separating multimodal and nonlinear classes into clusters which are locally linear and unimodal. Experimental results on 17 datasets, including 12 UCI and 5 spectral datasets, show that NCPLS-DA can outperform 4 baseline methods, namely, PLS-DA, kernel PLS-DA, local PLS-DA and k-NN, achieving the highest classification accuracy most of the time. Copyright © 2018 Elsevier B.V. All rights reserved.
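A minimal sketch of the nearest-clusters idea described above, assuming scikit-learn: hierarchical clustering splits the training set, one PLS-DA model (PLS regression on one-hot labels) is fit per cluster, and a query is scored by the model of its nearest cluster centre. Labels are assumed to be 0..K-1; the data are mocked.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.cross_decomposition import PLSRegression

def fit_ncplsda(X, y, n_clusters=3, n_components=2):
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
    models, centres = [], []
    for c in range(n_clusters):
        Xc, yc = X[labels == c], y[labels == c]
        Y = np.eye(y.max() + 1)[yc]          # one-hot responses for PLS-DA
        models.append(PLSRegression(n_components=n_components).fit(Xc, Y))
        centres.append(Xc.mean(axis=0))
    return models, np.array(centres)

def predict_ncplsda(models, centres, x):
    c = np.argmin(np.linalg.norm(centres - x, axis=1))   # nearest cluster
    return int(np.argmax(models[c].predict(x[None, :])))

X = np.vstack([np.random.randn(30, 5) + m for m in (0, 5, 10)])
y = np.random.randint(0, 2, size=90)         # mock class labels
models, centres = fit_ncplsda(X, y)
print(predict_ncplsda(models, centres, X[0]))
```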
Devasenapathy, Deepa; Kannan, Kathiravan
2015-01-01
Traffic in road networks is progressively increasing. Good knowledge of network traffic can minimize congestion by using information about the road network obtained with the aid of communal callers, pavement detectors, and so on. These methods, however, generate only low-feature information with respect to users of the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection method for road networks using intention numeration (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the proposed method outperforms existing work on energy consumption, clustering efficiency, and node drain rate. PMID:25793221
Devasenapathy, Deepa; Kannan, Kathiravan
2015-01-01
Traffic in road networks is progressively increasing. Good knowledge of network traffic can minimize congestion by using information about the road network obtained with the aid of communal callers, pavement detectors, and so on. These methods, however, generate only low-feature information with respect to users of the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection method for road networks using intention numeration (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the proposed method outperforms existing work on energy consumption, clustering efficiency, and node drain rate.
Increased ocean-induced melting triggers glacier retreat in northwest and southeast Greenland
NASA Astrophysics Data System (ADS)
Wood, M.; Rignot, E. J.; Fenty, I. G.; Menemenlis, D.; Millan, R.; Morlighem, M.; Mouginot, J.
2017-12-01
Over the past 30 years, the tidewater glaciers of northwest, central west, and southeast Greenland have exhibited widespread retreat, yet we observe different behaviors from one glacier to the next, sometimes within the same fjord. This retreat has been synchronous with oceanic warming in Baffin Bay and the Irminger Sea. Here, we estimate the ocean-induced melt rate of marine-terminating glaciers in these sectors of the Greenland Ice Sheet using simulations from the MITgcm ocean model for various water depths, ocean thermal forcing (TF) and subglacial water fluxes (SG). We use water depth from Ocean Melting Greenland (OMG) bathymetry and inverted airborne gravity, ocean thermal forcing from the Estimating the Circulation and Climate of the Ocean model (Phase II, ECCO2) combined with CTD data from 2012 and 2015, and time series of subglacial water flux combining runoff production from the 1-km Regional Atmospheric Climate Model (RACMO2.3) with basal melt beneath land ice from the JPL/UCI ISSM model. Time series of melt rates are formed as a function of grounding line depth, SG flux and TF. We compare the results with the history of ice velocity and ice front retreat to quantify the impact of ocean-induced ice melt over the past three decades. We find that the timing of ice front retreat coincides with enhanced ocean-induced melt and that abrupt retreat is induced when additional ablation exceeds the magnitude of natural seasonal variations of the glacier front. Sverdrup Gletscher, Umiamako Isbrae, and the northern branch of Puisortoq Gletscher in northwest, central west, and southeast Greenland, respectively, began multi-kilometer retreats coincident with ocean warming and enhanced melt. Limited retreat is observed where the bathymetry is shallow, on a prograde slope, or where the glacier is pinned on a sill, e.g., Ussing Braeer in the northwest, Sermeq Avannarleq in the central west, and Skinfaxe Gletscher in the southeast. These results illustrate the sensitivity of glaciers to changes in oceanic forcing and the modulating effect of bathymetry on the rate and magnitude of retreat. This work was carried out under a grant with the NASA Cryosphere Program and for the EVS-2 Ocean Melting Greenland (OMG) mission.
MicroPET/CT Colonoscopy in long-lived Min mouse using NM404
NASA Astrophysics Data System (ADS)
Christensen, Matthew B.; Halberg, Richard B.; Schutten, Melissa M.; Weichert, Jamey P.
2009-02-01
Colon cancer is a leading cause of death in the US, even though many cases are preventable if tumors are detected early. One technique to promote screening is Computed Tomography Colonography (CTC). NM404 is a second-generation phospholipid ether analogue which has demonstrated selective uptake and prolonged retention in 43/43 types of malignant tumors but not in inflammatory sites or premalignant lesions. The purpose of this experiment was to evaluate (SWR x B6)F1.Min mice as a preclinical model to test microPET/CT dual-modality virtual colonoscopy. Each animal was given an IV injection of 124I-NM404 (100 µCi) 24, 48 and 96 hours prior to scanning on a dedicated microPET/CT system. Forty million counts were histogrammed in 3D and reconstructed using an OSEM 2D algorithm. Immediately after PET acquisition, a 93 µm volumetric CT was acquired at 80 kVp, 800 µA and 350 ms exposures. Following CT, the mouse was sacrificed. The entire intestinal tract was excised, washed, insufflated, and scanned ex vivo. A total of eight tissue samples from the small intestine were harvested: 5 were benign adenomas, 2 were malignant adenocarcinomas, and 1 was a Peyer's patch (lymph tissue). The sites of these samples were positioned on CT and PET images based on morphological cues and the distance from the anus. Only 1/8 samples showed tracer uptake; several hot spots in the microPET image were not chosen for histology. (SWR x B6)F1.Min mice develop benign and malignant tumors, making this animal model a strong candidate for future dual-modality microPET/CT virtual colonography studies.
[Intensive care unit professionals' knowledge about noninvasive ventilation: a comparative analysis].
Raurell-Torredà, M; Argilaga-Molero, E; Colomer-Plana, M; Ruiz-García, T; Galvany-Ferrer, A; González-Pujol, A
2015-01-01
The literature highlights the lack of noninvasive ventilation (NIV) protocols and the variability of NIV knowledge between intensive care units (ICUs) and hospitals, so we compared nurses' NIV knowledge across four multipurpose ICUs and one surgical ICU. Multicenter, cross-sectional, descriptive study in three university hospitals. The survey instrument was validated in a pilot test, with a calculated Kappa index of 0.9. Return of a completed survey indicated informed consent. Analysis was by Chi-square test. A total of 117 nurses responded (65%), with 11±9.7 years of experience in the ICU and 9.2±7.2 years of experience with NIV. One of the multipurpose ICUs had introduced NIV an average of 6 years later than the others (95% CI [3.3 to 8.6], P<.001). Only 23.1% of nurses would place a non-vented mask (with no exhalation port) with a conventional ventilator; the rest would use any kind of face mask. 12.7% believed that the mask must be adjusted to a "2-finger" fit, while 29% would seal the mask to the patient's face and cover the mask opening where air escapes to facilitate patient/ventilator synchronization. In the surgical ICU, agitation was identified as a complication of NIV far more often than in the multipurpose ICUs (31.6% vs 1.8%, P<.001). 56.4% of nurses do not consider respiratory physiotherapy to be nursing care, with no difference between units. Knowledge about types of interface depends strongly on the equipment available in the unit. More training on NIV complications such as agitation, and on handling secretions, is necessary. Copyright © 2014 Elsevier España, S.L.U. y SEEIUC. All rights reserved.
Development and Simulation of Increased Generation on a Secondary Circuit of a Microgrid
NASA Astrophysics Data System (ADS)
Reyes, Karina
As fossil fuels are depleted and their environmental impacts remain, other sources of energy must be considered to generate power. Renewable sources, for example, are emerging to play a major role in this regard. In parallel, electric vehicle (EV) charging is evolving as a major load demand. To meet the reliability and resiliency goals demanded by the electricity market, interest in microgrids is growing as a distributed energy resource (DER). In this thesis, the effects of intermittent renewable power generation and random EV charging on secondary microgrid circuits are analyzed in the presence of a controllable battery in order to characterize and better understand the dynamics associated with intermittent power production and random load demands in the context of the microgrid paradigm. For two reasons, a secondary circuit on the University of California, Irvine (UCI) Microgrid serves as the case study. First, the secondary circuit (UC-9) is heavily loaded and an integral component of a highly characterized and metered microgrid. Second, a unique "next-generation" distributed energy resource has been deployed at the end of the circuit that integrates photovoltaic power generation, battery storage, and EV charging. In order to analyze this system and evaluate the impact of the DER on the secondary circuit, a model was developed to provide a real-time load flow analysis. The research develops a power management system applicable to similarly integrated systems. The model is verified by metered data obtained from a network of high-resolution electric meters and estimated load data for the buildings that have unknown demand. An increase in voltage is observed when the amount of photovoltaic power generation is increased. To mitigate this effect, a constant power factor is set. Should the real power change dramatically, the reactive power is changed to mitigate voltage fluctuations.
Membrane protein structure determination — The next generation
Moraes, Isabel; Evans, Gwyndaf; Sanchez-Weatherby, Juan; Newstead, Simon; Stewart, Patrick D. Shaw
2014-01-01
The field of Membrane Protein Structural Biology has grown significantly since its first landmark in 1985 with the first three-dimensional atomic resolution structure of a membrane protein. Nearly twenty-six years later, the crystal structure of the beta2 adrenergic receptor in complex with G protein has contributed to another landmark in the field leading to the 2012 Nobel Prize in Chemistry. At present, more than 350 unique membrane protein structures solved by X-ray crystallography (http://blanco.biomol.uci.edu/mpstruc/exp/list, Stephen White Lab at UC Irvine) are available in the Protein Data Bank. The advent of genomics and proteomics initiatives combined with high-throughput technologies, such as automation, miniaturization, integration and third-generation synchrotrons, has enhanced membrane protein structure determination rate. X-ray crystallography is still the only method capable of providing detailed information on how ligands, cofactors, and ions interact with proteins, and is therefore a powerful tool in biochemistry and drug discovery. Yet the growth of membrane protein crystals suitable for X-ray diffraction studies amazingly remains a fine art and a major bottleneck in the field. It is often necessary to apply as many innovative approaches as possible. In this review we draw attention to the latest methods and strategies for the production of suitable crystals for membrane protein structure determination. In addition we also highlight the impact that third-generation synchrotron radiation has made in the field, summarizing the latest strategies used at synchrotron beamlines for screening and data collection from such demanding crystals. This article is part of a Special Issue entitled: Structural and biophysical characterisation of membrane protein-ligand binding. PMID:23860256
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai
We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.
Geometry-based ensembles: toward a structural characterization of the classification boundary.
Pujol, Oriol; Masip, David
2009-06-01
This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points: points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Mike S.; Detwiler, Russell L.; Lao, Kang
2012-12-13
There is increased recognition that geothermal energy resources are more widespread than previously thought, with potential for providing a significant amount of sustainable clean energy worldwide. Recent advances in drilling, completion, and production technology from the oil and gas industry can now be applied to unlock vast new geothermal resources, with some estimates for potential electricity generation from geothermal energy now on the order of 2 million megawatts. The primary objectives of this DOE research effort are to develop and document optimum design configurations and operating practices to produce geothermal power from hot permeable sedimentary and crystalline formations using advanced horizontal well recirculation systems. During Phase I of this research project Terralog Technologies USA and The University of California, Irvine (UCI), have completed preliminary investigations and documentation of advanced design concepts for paired horizontal well recirculation systems, optimally configured for geothermal energy recovery in permeable sedimentary and crystalline formations of varying structure and material properties. We have also identified significant geologic resources appropriate for application of such technology. The main challenge for such recirculation systems is to optimize both the design configuration and the operating practices for cost-effective geothermal energy recovery. These will be strongly influenced by sedimentary formation properties, including thickness and dip, temperature, thermal conductivity, heat capacity, permeability, and porosity; and by working fluid properties.
Predicting Coupled Ocean-Atmosphere Modes with a Climate Modeling Hierarchy -- Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Ghil, UCLA; Andrew W. Robertson, IRI, Columbia Univ.; Sergey Kravtsov, U. of Wisconsin, Milwaukee
The goal of the project was to determine midlatitude climate predictability associated with tropical-extratropical interactions on interannual-to-interdecadal time scales. Our strategy was to develop and test a hierarchy of climate models, bringing together large GCM-based climate models with simple fluid-dynamical coupled ocean-ice-atmosphere models, through the use of advanced probabilistic network (PN) models. PN models were used to develop a new diagnostic methodology for analyzing coupled ocean-atmosphere interactions in large climate simulations made with the NCAR Parallel Climate Model (PCM), and to make these tools user-friendly and available to other researchers. We focused on interactions between the tropics and extratropics through atmospheric teleconnections (the Hadley cell, Rossby waves and nonlinear circulation regimes) over both the North Atlantic and North Pacific, and the ocean's thermohaline circulation (THC) in the Atlantic. We tested the hypothesis that variations in the strength of the THC alter sea surface temperatures in the tropical Atlantic, and that the latter influence the atmosphere in high latitudes through an atmospheric teleconnection, feeding back onto the THC. The PN model framework was used to mediate between the understanding gained with simplified primitive equations models and multi-century simulations made with the PCM. The project team is interdisciplinary and built on an existing synergy between atmospheric and ocean scientists at UCLA, computer scientists at UCI, and climate researchers at the IRI.
NASA Astrophysics Data System (ADS)
Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan
2017-09-01
Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems depends mainly on the selection of the most relevant features. This becomes harder when the dataset contains missing values for some features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using PPCA, which extracts the projection vectors contributing the highest covariance; these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with reduced dimension is provided to a radial basis function (RBF) kernel based Support Vector Machine (SVM). The RBF-based SVM classifies subjects into two categories, i.e., heart patient (HP) and normal subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison with existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
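A hedged sketch of the pipeline shape only: scikit-learn has no PPCA estimator, so mean imputation plus PCA stands in for the PPCA step, and the retained dimension (normally chosen by Parallel Analysis) is fixed by assumption; the Cleveland table is mocked.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(303, 13)                   # mock 303 x 13 Cleveland table
y = np.random.randint(0, 2, size=303)         # mock HP/NS labels
X[np.random.rand(*X.shape) < 0.05] = np.nan   # inject missing values

model = make_pipeline(SimpleImputer(strategy="mean"),  # PPCA stand-in, step 1
                      PCA(n_components=8),             # assumed dimension
                      SVC(kernel="rbf"))               # RBF-kernel classifier
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```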
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multivariate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
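In the spirit of the two metrics, a simplified sketch: Jensen-Shannon distances among source PDFs give a per-source outlyingness (distance to a stand-in central distribution) and a global deviation (mean pairwise distance). The simplex projection used in the paper is deliberately simplified away here.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def stability_metrics(pdfs):
    pdfs = np.asarray(pdfs, dtype=float)
    pdfs /= pdfs.sum(axis=1, keepdims=True)        # normalize each source PDF
    central = pdfs.mean(axis=0)                    # stand-in central distribution
    outlyingness = np.array([jensenshannon(p, central, base=2) for p in pdfs])
    m = len(pdfs)
    pairwise = [jensenshannon(pdfs[i], pdfs[j], base=2)
                for i in range(m) for j in range(i + 1, m)]
    return np.mean(pairwise), outlyingness         # both bounded in [0, 1]

sources = [[5, 3, 2], [4, 4, 2], [1, 2, 7]]        # toy histograms per source
gpd, spo = stability_metrics(sources)
print("global deviation:", gpd, "source outlyingness:", spo)
```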
RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING
Liu, Meizhu; Vemuri, Baba C.
2011-01-01
Boosting is a versatile machine learning technique that has numerous applications, including but not limited to image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost regularizes the error distribution in an iterative update formula using the closed-form Riemannian distance between two square-root densities, which represent the distribution over the training data and the classification error, respectively. Since this distance is in closed form, RBoost requires much less computational cost than other regularized boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LPBoost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home-grown epilepsy database and the well-known UCI repository. The results show that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
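The closed-form distance RBoost relies on is the geodesic (arc-length) distance between square-root densities on the unit sphere, which reduces to the arccosine of the Bhattacharyya coefficient; a minimal sketch:

```python
import numpy as np

def sqrt_density_distance(p, q):
    """Geodesic distance between sqrt-densities on the unit sphere."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    bc = np.sqrt(p * q).sum()                  # Bhattacharyya coefficient
    return np.arccos(np.clip(bc, -1.0, 1.0))  # closed form, no optimization

print(sqrt_density_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
```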
Jena, Jyotsnarani; Kumar, Ravindra; Dixit, Anshuman; Pandey, Sony; Das, Trupti
2015-01-01
Simultaneous nitrate-N, phosphate and COD removal from synthetic wastewater was evaluated using mixed microbial consortia in an anoxic environment under various initial carbon loads (ICL) in a batch-scale reactor system. Within 6 hours of incubation, enriched DNPAOs (Denitrifying Polyphosphate Accumulating Microorganisms) were able to remove maximum COD (87%) at 2 g/L of ICL, whereas maximum nitrate-N (97%) and phosphate (87%) removal, along with PHB accumulation (49 mg/L), was achieved at 8 g/L of ICL. Exhaustion of nitrate-N beyond 6 hours of incubation had a detrimental effect on the COD and phosphate removal rates. A fresh supply of nitrate-N to the reaction medium beyond 6 hours helped revive the removal rates of both COD and phosphate. Therefore, it was apparent that, in spite of a high carbon load, maximum COD and nutrient removal can be maintained with adequate nitrate-N availability. Denitrifying conditions in the medium were evident from an increasing pH trend. PHB accumulation by the mixed culture was directly proportional to ICL; however, the time taken for accumulation at higher ICL was longer. Unlike conventional EBPR, PHB depletion did not support phosphate accumulation in this case. The unique aspect of all the batch studies was that PHB accumulation was observed along with phosphate uptake and nitrate reduction under anoxic conditions. Bioinformatics analysis following pyrosequencing of the mixed-culture DNA from the seed sludge revealed the dominance of denitrifying populations such as Corynebacterium, Rhodocyclus and Paracoccus (Alphaproteobacteria and Betaproteobacteria). The rarefaction curve indicated the complete bacterial population and the corresponding number of OTUs through sequence analysis. The Chao1 and Shannon (H') indices were used to study the diversity of sampling. "UCI95" and "LCI95" denote the upper and lower 95% confidence limits of Chao1 for each distance. Values of the Chao1 index supported the results of the rarefaction curve. PMID:25689047
Jena, Jyotsnarani; Kumar, Ravindra; Dixit, Anshuman; Pandey, Sony; Das, Trupti
2015-01-01
Simultaneous nitrate-N, phosphate and COD removal from synthetic wastewater was evaluated using mixed microbial consortia in an anoxic environment under various initial carbon loads (ICL) in a batch-scale reactor system. Within 6 hours of incubation, enriched DNPAOs (Denitrifying Polyphosphate Accumulating Microorganisms) were able to remove maximum COD (87%) at 2 g/L of ICL, whereas maximum nitrate-N (97%) and phosphate (87%) removal, along with PHB accumulation (49 mg/L), was achieved at 8 g/L of ICL. Exhaustion of nitrate-N beyond 6 hours of incubation had a detrimental effect on the COD and phosphate removal rates. A fresh supply of nitrate-N to the reaction medium beyond 6 hours helped revive the removal rates of both COD and phosphate. Therefore, it was apparent that, in spite of a high carbon load, maximum COD and nutrient removal can be maintained with adequate nitrate-N availability. Denitrifying conditions in the medium were evident from an increasing pH trend. PHB accumulation by the mixed culture was directly proportional to ICL; however, the time taken for accumulation at higher ICL was longer. Unlike conventional EBPR, PHB depletion did not support phosphate accumulation in this case. The unique aspect of all the batch studies was that PHB accumulation was observed along with phosphate uptake and nitrate reduction under anoxic conditions. Bioinformatics analysis following pyrosequencing of the mixed-culture DNA from the seed sludge revealed the dominance of denitrifying populations such as Corynebacterium, Rhodocyclus and Paracoccus (Alphaproteobacteria and Betaproteobacteria). The rarefaction curve indicated the complete bacterial population and the corresponding number of OTUs through sequence analysis. The Chao1 and Shannon (H') indices were used to study the diversity of sampling. "UCI95" and "LCI95" denote the upper and lower 95% confidence limits of Chao1 for each distance. Values of the Chao1 index supported the results of the rarefaction curve.
Financial impact of nursing professionals staff required in an Intensive Care Unit.
Araújo, Thamiris Ricci de; Menegueti, Mayra Gonçalves; Auxiliadora-Martins, Maria; Castilho, Valéria; Chaves, Lucieli Dias Pedreschi; Laus, Ana Maria
2016-11-21
to calculate the cost of the average nursing care time spent and required by patients in the Intensive Care Unit (ICU), and the financial expense of properly dimensioning the nursing staff. A descriptive, quantitative study using the case-study method, conducted in an adult ICU. We used the Nursing Activities Score workload index; the average care time spent and required and the number of professionals required were calculated with equations, and from these data, together with the professionals' salary composition and contractual monthly hours, the cost of direct nursing labor was calculated. The monthly cost of the average number of available professionals was US$ 35,763.12, corresponding to 29.6 professionals, while the staff required for 24 hours of care is 42.2 nursing professionals, with a monthly cost of US$ 50,995.44. The numerical gap in nursing professionals was 30%, and the monthly financial expense to adapt the staffing structure is US$ 15,232.32, which corresponds to an increase of 42.59% over the amounts currently paid by the institution.
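The two percentages follow directly from the staffing and cost figures quoted above:

(42.2 - 29.6) / 42.2 ≈ 0.30, the reported 30% staffing gap;
15,232.32 / 35,763.12 ≈ 0.4259, the reported 42.59% increase over current monthly spending.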
NASA Astrophysics Data System (ADS)
McLarty, Dustin Fogle
Distributed energy systems are a promising means by which to reduce both emissions and costs. Continuous generators must be responsive and highly efficient to support building dynamics and intermittent on-site renewable power. Fuel cell-gas turbine hybrids (FC/GT) are fuel-flexible generators capable of ultra-high efficiency, ultra-low emissions, and rapid power response. This work undertakes a detailed study of the electrochemistry, chemistry and mechanical dynamics governing the complex interaction between the individual systems in such a highly coupled hybrid arrangement. The mechanisms leading to the compressor stall/surge phenomenon are studied for the increased risk posed to particular hybrid configurations. A novel fuel cell modeling method is introduced that captures various spatial resolutions, flow geometries, stack configurations and novel heat transfer pathways. Several promising hybrid configurations are analyzed throughout the work and a sensitivity analysis of seven design parameters is conducted. A simple estimating method is introduced for the combined system efficiency of a fuel cell and a turbine using component performance specifications. Existing solid oxide fuel cell technology is capable of hybrid efficiencies greater than 75% (LHV) operating on natural gas, and existing molten carbonate systems greater than 70% (LHV). A dynamic model is calibrated to accurately capture the physical coupling of a FC/GT demonstrator tested at UC Irvine. The 2900 hour experiment highlighted the sensitivity to small perturbations and a need for additional control development. Further sensitivity studies outlined the responsiveness and limits of different control approaches. The capability for substantial turn-down and load following through speed control and flow bypass, with minimal impact on internal fuel cell thermal distribution, is particularly promising to meet local demands or provide dispatchable support for renewable power. Advanced control and dispatch heuristics are discussed using a case study of the UCI central plant. Thermal energy storage introduces a time horizon into the dispatch optimization which requires novel solution strategies. Highly efficient and responsive generators are required to meet the increasingly dynamic loads of today's efficient buildings and intermittent local renewable wind and solar power. Fuel cell gas turbine hybrids will play an integral role in the complex and ever-changing solution to local electricity production.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding of the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluating the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. Fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, in both synthetic and real datasets from the UCI repository.
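A minimal sketch of one PPA deflation step as summarised above: take the leading PCA direction, then replace the straight-line fit with univariate polynomial regressions of the orthogonal residual on the projection score. The polynomial degree and the data are assumptions.

```python
import numpy as np

def ppa_step(X, degree=3):
    """One PPA deflation: leading direction plus a polynomial residual fit."""
    Xc = X - X.mean(axis=0)
    u = np.linalg.svd(Xc, full_matrices=False)[2][0]   # leading PCA direction
    t = Xc @ u                                         # 1-D projection scores
    R = Xc - np.outer(t, u)                            # orthogonal residual
    coefs = [np.polyfit(t, R[:, j], degree) for j in range(X.shape[1])]
    curve = np.column_stack([np.polyval(c, t) for c in coefs])
    return t, R - curve                                # scores, deflated residual

X = np.random.randn(200, 3)
X[:, 1] += 0.5 * X[:, 0] ** 2                          # curved (nonlinear) manifold
t1, residual = ppa_step(X)
print("residual variance per axis:", residual.var(axis=0))
```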
Miao, Qiguang; Cao, Ying; Xia, Ge; Gong, Maoguo; Liu, Jiachen; Song, Jianfeng
2016-11-01
AdaBoost has attracted much attention in the machine learning community because of its excellent performance in combining weak classifiers into strong classifiers. However, AdaBoost tends to overfit noisy data in many applications. Accordingly, improving the noise robustness of AdaBoost plays an important role in many applications. The sensitivity of AdaBoost to noisy data stems from the exponential loss function, which assigns unrestricted penalties to misclassified samples with very large margins. In this paper, we propose two boosting algorithms, referred to as RBoost1 and RBoost2, which are more robust to noisy data than AdaBoost. RBoost1 and RBoost2 optimize a nonconvex loss function of the classification margin. Because the penalties to misclassified samples are restricted to an amount less than one, RBoost1 and RBoost2 do not over-focus on samples that are always misclassified by the previous base learners. In addition to the loss function, at each boosting iteration, RBoost1 and RBoost2 use numerically stable methods to compute the base learners. These two improvements contribute to the robustness of the proposed algorithms to noisy training and testing samples. Experimental results on a synthetic Gaussian data set, the UCI data sets, and a real malware behavior data set illustrate that the proposed RBoost1 and RBoost2 algorithms perform better when the training data sets contain noisy data.
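The abstract does not give the exact nonconvex loss, so the comparison below uses 1 - tanh(m) purely as a stand-in bounded margin loss, to show why capping the penalty keeps a few badly misclassified (possibly noisy) samples from dominating the sample weights, in contrast to AdaBoost's exponential loss.

```python
import numpy as np

# AdaBoost's exponential loss grows without bound as the margin becomes more
# negative, so noisy samples can dominate the reweighting. A bounded,
# nonconvex surrogate (1 - tanh(m), chosen here only for illustration; the
# RBoost losses are not given in this abstract) saturates instead.
margins = np.linspace(-3, 3, 7)
exp_loss = np.exp(-margins)            # unbounded as margin -> -inf
bounded_loss = 1.0 - np.tanh(margins)  # saturates near 2, gradient stays small

for m, e, b in zip(margins, exp_loss, bounded_loss):
    print(f"margin {m:+.1f}: exp {e:8.3f}   bounded {b:.3f}")
```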
Empirical evaluation of interest-level criteria
NASA Astrophysics Data System (ADS)
Sahar, Sigal; Mansour, Yishay
1999-02-01
Efficient association rule mining algorithms already exist; however, as the size of databases increases, the number of patterns mined by the algorithms increases to such an extent that their manual evaluation becomes impractical. Automatic evaluation methods are therefore required in order to sift through the initial list of rules that the data-mining algorithm outputs. These evaluation methods, or criteria, rank the association rules mined from the dataset. We empirically examined several such statistical criteria: new criteria as well as previously known ones. The empirical evaluation was conducted using several databases, including a large real-life dataset acquired from an order-by-phone grocery store, a dataset composed from WWW proxy logs, and several datasets from the UCI repository. We were interested in discovering whether the rankings performed by the various criteria are similar or easily distinguishable. Where significant differences exist, our evaluation detected three patterns of behavior among the eight criteria we examined. There is an obvious dilemma in determining how many association rules to choose (in accordance with support and confidence parameters). The tradeoff is between having stringent parameters and, therefore, few rules, or lenient parameters and, thus, a multitude of rules. In many cases, our empirical evaluation revealed that most of the rules found with comparably strict parameters also ranked highly according to the interestingness criteria when lax parameters (producing significantly more association rules) were used. Finally, we discuss the association rules that ranked highest, explain why these results are sound, and how they direct future research.
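For readers unfamiliar with the quantities being ranked, the following minimal sketch computes three classic interest measures (support, confidence, lift) for one rule over toy transactions; the paper's full set of eight criteria is not reproduced here.

```python
# Three classic interest criteria for a rule A -> B, computed from counts.
def rule_metrics(n, n_a, n_b, n_ab):
    support = n_ab / n
    confidence = n_ab / n_a
    lift = confidence / (n_b / n)   # >1 means A and B co-occur more than chance
    return support, confidence, lift

# Toy transactions: does {bread} -> {butter} look interesting?
transactions = [{"bread", "butter"}, {"bread"}, {"butter", "milk"},
                {"bread", "butter", "milk"}, {"milk"}]
n = len(transactions)
n_a = sum("bread" in t for t in transactions)
n_b = sum("butter" in t for t in transactions)
n_ab = sum({"bread", "butter"} <= t for t in transactions)
print(rule_metrics(n, n_a, n_b, n_ab))   # ~ (0.4, 0.667, 1.111)
```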
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
Medical diagnosis is the process of determining which disease or medical condition explains a person's determinable signs and symptoms. Diagnosis of most diseases is very expensive, as many tests are required for prediction. This paper introduces an improved hybrid approach for training the adaptive network-based fuzzy inference system with a Modified Levenberg-Marquardt algorithm, using an analytical derivation scheme for computation of the Jacobian matrix. The goal is to investigate how certain diseases are affected by patient characteristics and measurements, such as abnormalities, or to reach a decision about the presence or absence of a disease. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system to classify and predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. The proposed hybridised intelligent system was tested with the Pima Indian Diabetes dataset obtained from the University of California at Irvine's (UCI) machine learning repository. The proposed method's performance was evaluated on training and test datasets. In addition, an attempt was made to quantify the effectiveness of the performance in terms of total accuracy, sensitivity and specificity. In comparison, the proposed method achieves superior performance relative to the conventional ANFIS based on the gradient descent algorithm and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor with 2.80 GHz speed and 2.0 GB of RAM.
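The paper's specific modification and its analytic ANFIS Jacobian are not detailed in the abstract; the sketch below shows a generic Levenberg-Marquardt least-squares step of the kind being modified, with hypothetical function names, applied to a toy linear fit.

```python
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, mu=1e-2):
    """One generic Levenberg-Marquardt update for least-squares training.

    residual_fn(params) -> r of shape (m,); jacobian_fn(params) -> J of
    shape (m, n) with J[i, j] = d r_i / d params_j, which the paper derives
    analytically for ANFIS (not reproduced here).
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    A = J.T @ J + mu * np.eye(J.shape[1])   # damped normal equations
    delta = np.linalg.solve(A, J.T @ r)
    return params - delta

# Toy example: fit y = a*x + b with an analytic Jacobian of the residuals.
x = np.linspace(0, 1, 20); y = 2.0 * x + 1.0
res = lambda p: p[0] * x + p[1] - y
jac = lambda p: np.c_[x, np.ones_like(x)]
p = np.zeros(2)
for _ in range(50):
    p = lm_step(p, res, jac)
print(p)   # approaches [2.0, 1.0]
```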
Improvements on ν-Twin Support Vector Machine.
Khemchandani, Reshma; Saigal, Pooja; Chandra, Suresh
2016-07-01
In this paper, we propose two novel binary classifiers, termed "Improvements on ν-Twin Support Vector Machine: Iν-TWSVM and Iν-TWSVM (Fast)", that are motivated by ν-Twin Support Vector Machine (ν-TWSVM). Similar to ν-TWSVM, Iν-TWSVM determines two nonparallel hyperplanes such that they are closer to their respective classes and are at least ρ distance away from the other class. The significant advantage of Iν-TWSVM over ν-TWSVM is that Iν-TWSVM solves one smaller-sized Quadratic Programming Problem (QPP) and one Unconstrained Minimization Problem (UMP), as compared to solving two related QPPs in ν-TWSVM. Further, Iν-TWSVM (Fast) avoids solving the smaller-sized QPP by transforming it into a unimodal function, which can be solved using line search methods; as in Iν-TWSVM, the other problem is solved as a UMP. Due to their novel formulation, the proposed classifiers are faster than ν-TWSVM and have comparable generalization ability. Iν-TWSVM also implements the structural risk minimization (SRM) principle by introducing a regularization term, along with minimizing the empirical risk. The other properties of Iν-TWSVM, related to support vectors (SVs), are similar to those of ν-TWSVM. To test the efficacy of the proposed method, experiments have been conducted on a wide range of UCI datasets and a skewed variation of NDC datasets. We also present an application of Iν-TWSVM as a binary classifier for pixel classification of color images. Copyright © 2016 Elsevier Ltd. All rights reserved.
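A unimodal scalar problem of the kind Iν-TWSVM (Fast) produces can be minimized by any bracketing line search; the golden-section routine below is a generic illustration, not the authors' solver.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda t: (t - 1.3) ** 2 + 0.5, -10, 10))  # ~1.3
```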
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome this routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. Object detections of similar events are aggregated based on the Bayes probabilities and sent to the sink node, minimizing energy consumption. Next, the Polynomial Regression Function is applied to combine the similar-event observations from different sensors, based on the minimum and maximum values of the object events, before transfer to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701
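As a rough illustration of the polynomial-regression aggregation idea (the function names and packet format here are hypothetical, not the paper's protocol), a region can summarize many raw readings with a few polynomial coefficients plus the min/max values before transmitting to the sink:

```python
import numpy as np

def summarize_region(times, readings, degree=2):
    """Ship a low-order polynomial fit (plus extrema) instead of raw samples."""
    coeffs = np.polyfit(times, readings, degree)   # a few numbers replace many
    return {"coeffs": coeffs.tolist(),
            "min": float(readings.min()),
            "max": float(readings.max())}

t = np.arange(0, 60, 1.0)                          # one reading per second
temp = 20 + 0.05 * t + np.random.default_rng(1).normal(0, 0.2, t.size)
packet = summarize_region(t, temp)
print(len(t), "raw samples ->", len(packet["coeffs"]) + 2, "numbers sent")
```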
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001;17:509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing and data normalization options, including logarithmic and Variance Stabilizing Normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
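The heart of the method is the regularized variance estimate; a minimal sketch following the published Baldi and Long (2001) formulation is shown below, where the prior weight nu0 and the toy numbers are illustrative defaults rather than the server's settings.

```python
def regularized_variance(s2_probe, n, s2_background, nu0=10):
    """Bayesian regularized variance in the Baldi & Long (2001) style used by
    Cyber-T: blend the probe's empirical variance (n replicates) with a
    background variance pooled from probes in the same intensity
    neighborhood. nu0 acts as a pseudo-count weight on the prior."""
    return (nu0 * s2_background + (n - 1) * s2_probe) / (nu0 + n - 2)

# With only 3 replicates, a probe with a deceptively tiny variance is pulled
# toward the neighborhood background, stabilizing the t-statistic denominator.
print(regularized_variance(s2_probe=0.001, n=3, s2_background=0.05))  # ~0.0456
```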
Canadell, L; Martín-Loeches, I; Díaz, E; Trefler, S; Grau, S; Yebenes, J C; Almirall, J; Olona, M; Sureda, F; Blanquer, J; Rodriguez, A
2015-05-01
To determine the degree of adherence to antiviral treatment recommendations and its impact on mortality in critically ill patients affected by influenza A(H1N1)pdm09. Secondary analysis of a prospective study conducted in intensive care units (ICUs). Patients with influenza A(H1N1)pdm09 in the 2009 pandemic and 2010-11 post-pandemic periods. Adherence to recommendations was classified as: total (AT), partial in dose (PD), partial in time (PT), or non-adherence (NA). Viral pneumonia, obesity and mechanical ventilation were considered severity criteria for the administration of a high antiviral dose. The analysis was performed using the t-test or chi-square test. Survival analysis was performed and adjusted by Cox regression analysis. A total of 1,058 patients were included: 661 (62.5%) in the pandemic period and 397 (37.5%) in the post-pandemic period. Global adherence was achieved in 41.6% (43.9% and 38.0%, respectively; P=.07). Severity criteria were similar in both periods (68.5% vs. 62.8%; P=.06). Total adherence was 54.7% in the pandemic and 36.4% in the post-pandemic period (P<.01). Non-adherence (19.7% vs. 11.3%; P<.05) and partial adherence in time (20.8% vs. 9.9%; P<.01) were more frequent in the post-pandemic period. The mortality rate was higher in the post-pandemic period (30% vs. 21.8%; P<.001). APACHE II (HR=1.09) and hematologic disease (HR=2.2) were associated with higher mortality, and adherence (HR=0.47) was a protective factor. A low degree of adherence to antiviral treatment was observed in both periods. Adherence to antiviral treatment recommendations was associated with lower mortality rates and should be recommended in critically ill patients with suspected influenza A(H1N1)pdm09. Copyright © 2014 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.
Kianmehr, Keivan; Alhajj, Reda
2008-09-01
In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and the support vector machine (SVM). The goal is to combine the discriminative knowledge represented by class association rules with the classification power of the SVM algorithm, constructing an efficient and accurate classifier model that improves the interpretability of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated from the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors are a highly qualified source of discrimination knowledge that can substantially improve the predictive power of SVM and associative classification techniques. They also provide users with greater understandability and interpretability. We used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of classification models, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. We then describe how this combination provides biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM algorithm. In the context of applicability, according to the results obtained from gene expression analysis, we can conclude that the CARSVM system can be utilized in a variety of real-world applications with some adjustments.
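A minimal sketch of the rule-to-feature mapping might look as follows; the rules are hand-written stand-ins for mined class association rules, and scikit-learn is assumed for the SVM.

```python
import numpy as np
from sklearn.svm import SVC

# CARSVM-style idea: each class association rule becomes a binary feature
# ("does this sample satisfy the rule's antecedent?"), and the SVM is trained
# on those rule-based vectors instead of the raw attributes.
rules = [
    lambda x: x[0] > 0.5,                    # stand-in for "attr0 high -> class 1"
    lambda x: x[1] < 0.2,                    # stand-in for "attr1 low  -> class 0"
    lambda x: x[0] > 0.3 and x[2] > 0.7,
]

def rule_features(X):
    return np.array([[float(r(x)) for r in rules] for x in X])

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)              # toy labels aligned with rule 0
clf = SVC(kernel="linear").fit(rule_features(X), y)
print(clf.score(rule_features(X), y))
```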
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alptekin, Gokhan
The overall objective of the proposed research is to develop a low-cost, high-capacity CO2 sorbent and demonstrate its technical and economic viability for pre-combustion CO2 capture. The specific objectives supporting our research plan were to optimize the chemical structure and physical properties of the sorbent, scale up its production using high-throughput manufacturing equipment and bulk raw materials, and then evaluate its performance, first in bench-scale experiments and then in slipstream tests using actual coal-derived synthesis gas. One of the objectives of the laboratory-scale evaluations was to demonstrate the life and durability of the sorbent for over 10,000 cycles and to assess the impact of contaminants (such as sulfur) on its performance. In the field tests, our objective was to demonstrate the operation of the sorbent using actual coal-derived synthesis gas streams generated by air-blown and oxygen-blown commercial and pilot-scale coal gasifiers (the CO2 partial pressure in these gas streams is significantly different, which directly impacts the operating conditions and hence the performance of the sorbent). To support the field demonstration work, TDA collaborated with Phillips 66 and Southern Company to carry out two separate field tests using actual coal-derived synthesis gas at the Wabash River IGCC Power Plant in Terre Haute, IN and the National Carbon Capture Center (NCCC) in Wilsonville, AL. In collaboration with the University of California, Irvine (UCI), a detailed engineering and economic analysis for the new CO2 capture system was also proposed, to be carried out using Aspen Plus™ simulation software, estimating its effect on plant efficiency.
Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam
2018-03-01
We present a neuromorphic current-mode implementation of a spiking neural classifier with a lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve comparable classification accuracy with fewer synaptic resources than conventional algorithms. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks, respectively), this network is able to produce comparable results with fewer synaptic resources. The chip, fabricated in [Formula: see text]m complementary metal oxide semiconductor, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate down to a [Formula: see text] V supply and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners such as support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.
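Numerically, the lumped square-law dendrite amounts to squaring each branch's synaptic sum and summing over branches. The toy sketch below uses hypothetical sizes and random connectivity, not the chip's trained weights, to show the two-opposing-cells decision.

```python
import numpy as np

def cell_response(x, branch_connectivity):
    """branch_connectivity: (n_branches, n_inputs) binary synapse matrix."""
    branch_sums = branch_connectivity @ x    # linear sum on each dendrite
    return np.sum(branch_sums ** 2)          # lumped square-law nonlinearity

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 64)                   # rate-encoded binary input pattern
plus = rng.integers(0, 2, (8, 64))           # 8 dendrites per cell, as on the chip
minus = rng.integers(0, 2, (8, 64))
# Two opposing cells per class cancel common-mode input; their difference decides.
decision = cell_response(x, plus) - cell_response(x, minus)
print("class 1" if decision > 0 else "class 0")
```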
Is a project needed to prevent urinary tract infection in patients admitted to Spanish ICUs?
Álvarez Lerma, F; Olaechea Astigarraga, P; Nuvials, X; Gimeno, R; Catalán, M; Gracia Arnillas, M P; Seijas Betolaza, I; Palomar Martínez, M
2018-02-06
To analyze epidemiological data of catheter-associated urinary tract infection (CAUTI) in critically ill patients admitted to Spanish ICUs, in order to assess the need for a nationwide intervention program to reduce these infections. Non-intervention retrospective annual period-prevalence analysis. Participating ICUs in the ENVIN-UCI multicenter registry between the years 2007-2016. Critically ill patients admitted to the ICU with catheter-associated urinary tract infection (CAUTI). Incidence rates per 1,000 catheter-days; urinary catheter utilization ratio; proportion of CAUTIs relative to total health care-associated infections (HAIs). A total of 187,100 patients were included, 137,654 (73.6%) of whom had a urinary catheter in place during 1,215,673 days (84% of days of ICU stay). In 4,539 (3.3%) patients with a urinary catheter, 4,977 CAUTIs were diagnosed (3.6 episodes per 100 patients with a urinary catheter). The CAUTI incidence rate showed a 19% decrease between 2007 and 2016 (4.69 to 3.8 episodes per 1,000 catheter-days), although a sustained urinary catheter utilization ratio was observed (0.84 [0.82-0.86]). The proportion of CAUTI increased from 23.3% to 31.9% of all HAIs monitored in the ICU. Although CAUTI rates have declined in recent years, these infections have proportionally become the leading HAIs in the ICU. The urinary catheter utilization ratio remains high in Spanish ICUs. There is room for improvement, and a CAUTI-ZERO project in our country could therefore be useful. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Montague, Brian T.; Rosen, David L.; Sammartino, Cara; Costa, Michael; Gutman, Roee; Solomon, Liza; Rich, Josiah
2016-01-01
Abstract Populations in corrections continue to have a high prevalence of HIV. Expanded testing and treatment programs allow persons to be identified and stabilized on treatment while incarcerated. However, these gains are frequently lost on reentry. Systematic frameworks are needed to monitor linkage to care and to guide programs supporting it. To assess the adequacy of linkage to care on reentry, incarceration data from the National Corrections Reporting Program and data from the Ryan White Services Report from 2010 to 2012 were linked using an encrypted client identifier (eUCI). Time from release to the first visit and the presence of detectable HIV RNA at linkage were assessed. Multivariate survival analyses were performed to identify associations between patient characteristics and time to linkage. Among those linking, only 43% in Rhode Island and 49% in North Carolina linked within 90 days, and 33% in both states had detectable viremia at the first visit. Those not previously in care and those with shorter incarcerations experienced longer linkage times. Persons identified as Black had median linkage times greater than 1 year. Using existing datasets, significant gaps in linkage to care for persons with HIV on release from corrections were demonstrated in Rhode Island and North Carolina. Systematically implementing this monitoring to evaluate changes over time would provide important information to support interventions to improve linkage in high-risk populations. Using national datasets for both corrections and clinical data, this framework could equally be used to evaluate the experiences of persons with HIV linking to care on release from corrections facilities nationwide. PMID:26836237
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm is proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (the hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests algorithms) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
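For reference, the SMOTE interpolation that CURE-SMOTE builds on generates synthetic minority samples along segments between nearest minority neighbors; a minimal sketch (omitting the CURE clustering step that restricts oversampling to representative points) follows.

```python
import numpy as np

def smote_like_oversample(X_min, k=5, n_new=100, seed=0):
    """Core SMOTE interpolation: a synthetic sample lies on the segment
    between a minority point and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest, excluding itself
        j = rng.choice(nbrs)
        lam = rng.random()                     # random position on the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(1).normal(size=(30, 2))   # minority class
print(smote_like_oversample(X_min).shape)                # (100, 2)
```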
Blue Guardian: open architecture intelligence, surveillance, and reconnaissance (ISR) demonstrations
NASA Astrophysics Data System (ADS)
Shirey, Russell G.; Borntrager, Luke A.; Soine, Andrew T.; Green, David M.
2017-04-01
The Air Force Research Laboratory (AFRL) Sensors Directorate has developed the Blue Guardian program to demonstrate advanced sensing technology utilizing open architectures in operationally relevant environments. Blue Guardian has adopted the core concepts and principles of the Air Force Rapid Capabilities Office (AFRCO) Open Mission Systems (OMS) initiative to implement an open Intelligence, Surveillance and Reconnaissance (ISR) platform architecture. Using this new OMS standard provides a business case to reduce cost and program schedules for industry and the Department of Defense (DoD). Blue Guardian is an early adopter of OMS and provides much-needed science and technology development, testing, and implementation of OMS for ISR purposes. This paper presents results and lessons learned under the Blue Guardian Project Shepherd program, which conducted Multi-INT operational demonstrations in the Joint Interagency Task Force - South (JIATF-S) and USSOUTHCOM area of operations in early 2016. Further, ongoing research is discussed to enhance Blue Guardian Multi-INT ISR capabilities to support additional mission sets and platforms, including unmanned operations over line-of-sight (LOS) and beyond-line-of-sight (BLOS) datalinks. An implementation of additional OMS message sets and services to support off-platform sensor command and control using OMS/UCI data structures and dissemination of sensor product data/metadata is explored. Lastly, the Blue Guardian team is working with the AgilePod program to use OMS in a full Government Data Rights pod, allowing sensors to be rapidly swapped among different aircraft. The union of the AgilePod (which uses SOSA-compliant standards) and OMS technologies under the Blue Guardian program is discussed.
Intelligence system based classification approach for medical disease diagnosis
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends on the physician's intuition, experience and skill in comparing current indicators with previous ones, rather than on the knowledge-rich data hidden in a database. This is a crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework is described for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, combining least-squares estimation with either a Modified Levenberg-Marquardt or a gradient descent algorithm, which can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets from the mammographic mass and Haberman's survival datasets obtained from the benchmark collection of the University of California at Irvine's (UCI) machine learning repository. The robustness of the performance in terms of total accuracy, sensitivity and specificity is examined. In comparison, the proposed method achieves superior performance relative to the conventional ANFIS based on the gradient descent algorithm and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor with 2.80 GHz speed and 2.0 GB of RAM.
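The three reported measures derive directly from the binary confusion matrix; a small sketch with invented counts:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)     # recall on the diseased class
    specificity = tn / (tn + fp)     # recall on the healthy class
    return accuracy, sensitivity, specificity

# Counts below are invented for illustration only.
print(diagnostic_metrics(tp=80, tn=90, fp=10, fn=20))  # (0.85, 0.8, 0.9)
```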
Correia, Luis Cláudio Lemos; Esteves, Fábio P; Carvalhal, Manuela; Souza, Thiago Menezes Barbosa de; Sá, Nicole de; Correia, Vitor Calixto de Almeida; Alexandre, Felipe Kalil Beirão; Lopes, Fernanda; Ferreira, Felipe; Noya-Rabelo, Márcia
2017-06-12
The accuracy of a zero coronary calcium score as a filter in patients with chest pain has been demonstrated in the emergency room and in outpatient clinics, populations with a low prevalence of coronary artery disease (CAD). To test the gatekeeping role of a zero calcium score in patients with chest pain admitted to the coronary care unit (CCU), where the pretest probability of CAD is higher than in other populations. Patients underwent computed tomography for calcium scoring, and obstructive CAD was defined by a minimum 70% stenosis on invasive angiography. A clinical score for estimating the pretest probability of obstructive CAD was derived in a sample of 370 patients and used to define subgroups for the negative predictive value of a zero score. In the 146 patients studied, the prevalence of CAD was 41%, and a zero calcium score was present in 35% of the patients. The sensitivity and specificity of a zero calcium score yielded a negative likelihood ratio of 0.16. After logistic regression adjustment for pretest probability, a zero calcium score was independently associated with lower odds of CAD (OR = 0.12, 95% CI = 0.04-0.36), increasing the area under the ROC curve of the clinical model from 0.76 to 0.82 (p = 0.006). Using a pretest probability of CAD < 10% as the threshold for discharge without further testing, the zero calcium score provided a net reclassification improvement of 0.20 (p = 0.0018) over the clinical model, increasing the proportion of patients eligible for early discharge from 8.2% to 25%. Overall, the zero calcium score had a negative predictive value of 90%; in patients with a pretest probability < 50%, the negative predictive value was 95% (95% CI = 83%-99%), with a number needed to test of 2.1 for obtaining one additional discharge. A zero calcium score substantially reduces the pretest probability of obstructive CAD in patients admitted to the CCU with acute chest pain. (Arq Bras Cardiol. 2017; [online] ahead of print.)
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically, to a single lead molecule. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large, publicly available data sets and background sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter-free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
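A minimal sketch of the best parameter-free method, MAX-SIM, using Tanimoto similarity on set-based fingerprints (a simplification of the binary fingerprints used in practice):

```python
def tanimoto(a, b):
    """Tanimoto similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def max_sim(candidate, family):
    """MAX-SIM: score a candidate by its best match to any family member."""
    return max(tanimoto(candidate, member) for member in family)

family = [{1, 2, 3, 4}, {2, 3, 4, 5}, {1, 3, 5, 7}]
print(max_sim({2, 3, 4, 9}, family))   # 0.6: closest members share 3 of 5 features
```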
Timing, Magnitude and Sources of Ecosystem Respiration in High Arctic Tundra of NW Greenland
NASA Astrophysics Data System (ADS)
Lupascu, M.; Xu, X.; Lett, C.; Maseyk, K. S.; Lindsey, D. S.; Thomas, J. S.; Welker, J. M.; Czimczik, C. I.
2011-12-01
High arctic ecosystems with low vegetation density contain significant stocks of organic carbon (C) in the form of soil organic matter that ranges in age from modern to ancient. How rapidly these C pools can be mineralized and lost to the atmosphere as CO2 (ecosystem respiration, ER) as a consequence of warming and/or changes in precipitation is a major uncertainty in our understanding of current and future arctic biogeochemistry and for predicting future levels of atmospheric CO2. In a 2-year study (2010-2011), we monitored seasonal changes in the magnitude, timing and sources of ER and soil pore-space CO2 in the High Arctic of NW Greenland under current and simulated future climate conditions. Measurements were taken from May to August at a multi-factorial, long-term climate change experiment in prostrate dwarf-shrub tundra on patterned ground with 5 treatments: (T1) +2 °C warming, (T2) +4 °C warming, (W) +50% summer precipitation, (T2W) +4 °C warming plus 50% additional summer precipitation, and (C) control. ER (using opaque chambers) and soil CO2 concentrations (wells) were monitored daily via infrared spectroscopy (LI-COR 800 & 840). The source of CO2 was inferred from its radiocarbon (14C) content, analyzed at the AMS facility at UCI. CO2 was sampled monthly using molecular sieve traps (chambers) or evacuated canisters (wells). The highest rates of ER are observed on vegetated ground, with a maximum in mid-summer reflecting a peak in plant productivity and soil temperature. Respiration rates from bare ground remain similar throughout the summer. Additional soil moisture, administered or due to precipitation events, strongly enhances ER from both vegetated and bare ground. The daily ER budget for the sampling period was 53.1 mmol C m^-2 day^-1 for the (C) vegetated areas, compared to 60.0 for the (T2), 68.1 for the (T2W) and 79.9 for the (W) treatment. ER was highly correlated with temperature (e.g., C = 0.8; T2W = 0.8) until mid-July, when heavy precipitation started to occur. In vegetated areas, ER is dominated by recently fixed C, but older C sources contribute during snow melt. Bare areas can be sources of old C throughout the summer. Under ambient climate conditions, pore-space CO2 is produced from recently fixed C near the surface and older C sources at depth. When summer rainfall is increased, recently fixed C is the dominant source of CO2 at all soil depths, as recently fixed C is relocated deeper into the soil. Future conditions in NW Greenland will likely result in greater rates of ER, especially dramatic if summer rainfall increases coincidently with warming. Our findings show that the sources of C efflux will still be dominated mostly by recently fixed C, due to the strong response of plants to water addition.
NASA Astrophysics Data System (ADS)
Sanders, B. F.
2017-12-01
Flooding of coastal and fluvial systems is among the most significant natural hazards facing society, and damages have been escalating for decades, globally and in the U.S. Almost all metropolitan areas are exposed to flood risk. The threat from river flooding is especially high in India and China, and coastal cities around the world are threatened by storm surge and rising sea levels. Several trends, including rising sea levels, urbanization, deforestation, and rural-to-urban population shifts, will increase flood exposure in the future. Flood impacts are escalating despite advances in hazards science and extensive effort to manage risks. The fundamental issue is not that flooding is becoming more severe, even though it is in some places, but rather that societies are becoming more vulnerable to flood impacts. A critical factor contributing to the escalation of flood impacts is that the most vulnerable sectors of communities are left out of processes to prepare for and respond to flooding. Furthermore, the translation of knowledge about flood hazards and vulnerabilities into actionable information for communities has not been effective. In Southern and Baja California, an interdisciplinary team of researchers has partnered with stakeholders in flood-vulnerable communities to co-develop flood hazard information systems designed to meet end-user needs for decision-making. The initiative leveraged the power of advanced, fine-scale hydraulic models of flooding to craft intuitive visualizations of context-sensitive scenarios. This presentation will cover the ways in which the process of flood inundation modeling served as a focal point for knowledge development, as well as the unique visualizations that populate the online information systems accessible here: http://floodrise.uci.edu/online-flood-hazard-viewers/
Ice Sheet System Model as Educational Entertainment
NASA Astrophysics Data System (ADS)
Perez, G.
2013-12-01
Understanding the importance of polar ice sheets and their role in the evolution of Sea Level Rise (SLR), as well as Climate Change, is of paramount importance for policy makers as well as the public and schools at large. For example, polar ice sheets and glaciers currently account for 1/3 of the SLR signal, a ratio that will increase in the near to long-term future, which has tremendous societal ramifications. Consequently, it is important to increase awareness about our changing planet. In our increasingly digital society, mobile and web applications are burgeoning venues for such outreach. The Ice Sheet System Model (ISSM) is a software framework developed at the Jet Propulsion Laboratory/Caltech/NASA, in collaboration with the University of California, Irvine (UCI), with the goal of better understanding the evolution of polar ice sheets. It is a state-of-the-art framework that relies on higher-end cluster computing to address some of the aforementioned challenges. In addition, it is flexible enough to be deployed on any hardware; in particular, on mobile platforms such as Android or iOS smartphones. Here, we look at how the ISSM development team ported the model to these platforms, what the implications are for improving how scientists disseminate their results, and how a broader audience may familiarize themselves with running complex climate models in simplified scenarios that are highly educational and entertaining in content. We also look at future plans for a web portal fully integrated with mobile technologies to deliver the best content to the public, and to provide educational plans/lessons that can be used in grades K-12 as well as collegiate undergraduate and graduate programs.
On-line applications of numerical models in the Black Sea GIS
NASA Astrophysics Data System (ADS)
Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros
2017-09-01
The Black Sea Geographical Information System (GIS) is developed based on cutting-edge information technologies and provides automated data processing and visualization on-line. MapServer is used as the mapping service; the data are stored in the MySQL DBMS; PHP and Python modules are utilized for data access, processing, and exchange. New numerical models can be incorporated into the GIS environment as individual software modules, compiled for the server's operating system and interacting with the GIS. A common interface allows setting the input parameters; the model then writes the output data to predefined files in a fixed format. The calculation results are then passed to the GIS for visualization. Initially, a test scenario for integrating a numerical model into the GIS was performed, using software developed to describe two-dimensional tsunami propagation over variable basin depth, based on a linear long surface wave model that is valid for depths greater than 5 m. Subsequently, the well-established oil spill and trajectory 3-D model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects, using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e., date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for predefined time steps. The results of the forecast or hindcast are then visualized upon a map.
Single-Molecule Interfacial Electron Transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Wilson
Interfacial electron transfer (ET) plays an important role in many chemical and biological processes. Specifically, interfacial ET in TiO2-based systems is important to solar energy technology, catalysis, and environmental remediation technology. However, the microscopic mechanism of interfacial ET is not well understood with regard to atomic surface structure, molecular structure, bonding, orientation, and motion. In this project, we used two complementary methodologies, single-molecule fluorescence spectroscopy and scanning-tunneling microscopy and spectroscopy (STM and STS), to address this scientific need. The goal of this project was to integrate these techniques and measure the molecular dependence of ET between adsorbed molecules and TiO2 semiconductor surfaces and ET-induced reactions such as the splitting of water. The scanning probe techniques, STM and STS, are capable of providing the highest spatial resolution but not easily time-resolved data. Single-molecule fluorescence spectroscopy is capable of good time resolution but requires further development to match the spatial resolution of the STM. The integrated approach involving Peter Lu at Bowling Green State University (BGSU) and Wilson Ho at the University of California, Irvine (UC Irvine) produced methods for time- and spatially resolved chemical imaging of interfacial electron transfer dynamics and photocatalytic reactions. An integral aspect of the joint research was a significant exchange of graduate students to work at the two institutions. This project bridged complementary approaches to investigate a set of common problems by working with the same molecules on a variety of solid surfaces, but using appropriate techniques to probe under ambient (BGSU) and ultrahigh vacuum (UCI) conditions. The molecular-level understanding of the fundamental interfacial electron transfer processes obtained in this joint project will be important for developing efficient light harvesting and solar energy conversion, and is broadly applicable to problems in interface chemistry and surface physics.
Biehl, Michael; Sadowski, Peter; Bhanot, Gyan; Bilal, Erhan; Dayarian, Adel; Meyer, Pablo; Norel, Raquel; Rhrissorrakrai, Kahn; Zeller, Michael D.; Hormoz, Sahand
2015-01-01
Motivation: Animal models are widely used in biomedical research for reasons ranging from practical to ethical. An important issue is whether rodent models are predictive of human biology. This has been addressed recently in the framework of a series of challenges designed by the systems biology verification for Industrial Methodology for Process Verification in Research (sbv IMPROVER) initiative. In particular, one of the sub-challenges was devoted to the prediction of protein phosphorylation responses in human bronchial epithelial cells, exposed to a number of different chemical stimuli, given the responses in rat bronchial epithelial cells. Participating teams were asked to make inter-species predictions on the basis of available training examples, comprising transcriptomics and phosphoproteomics data. Results: Here, the two best performing teams present their data-driven approaches and computational methods. In addition, post hoc analyses of the datasets and challenge results were performed by the participants and challenge organizers. The challenge outcome indicates that successful prediction of protein phosphorylation status in human based on rat phosphorylation levels is feasible. However, within the limitations of the computational tools used, the inclusion of gene expression data does not improve the prediction quality. The post hoc analysis of time-specific measurements sheds light on the signaling pathways in both species. Availability and implementation: A detailed description of the dataset, challenge design and outcome is available at www.sbvimprover.com. The code used by team IGB is provided under http://github.com/uci-igb/improver2013. Implementations of the algorithms applied by team AMG are available at http://bhanot.biomaps.rutgers.edu/wiki/AMG-sc2-code.zip. Contact: meikelbiehl@gmail.com PMID:24994890
NASA Astrophysics Data System (ADS)
Murray, L. T.; Strode, S. A.; Fiore, A. M.; Lamarque, J. F.; Prather, M. J.; Thompson, C. R.; Peischl, J.; Ryerson, T. B.; Allen, H.; Blake, D. R.; Crounse, J. D.; Brune, W. H.; Elkins, J. W.; Hall, S. R.; Hintsa, E. J.; Huey, L. G.; Kim, M. J.; Moore, F. L.; Ullmann, K.; Wennberg, P. O.; Wofsy, S. C.
2017-12-01
Nitrogen oxides (NOx ≡ NO + NO2) in the background atmosphere are critical precursors for the formation of tropospheric ozone and OH, thereby exerting strong influence on surface air quality, reactive greenhouse gases, and ecosystem health. The impact of NOx on atmospheric composition and climate is sensitive to the relative partitioning of reactive nitrogen between NOx and longer-lived reservoir species of the total reactive nitrogen family (NOy) such as HNO3, HNO4, PAN and organic nitrates (RONO2). Unfortunately, global chemistry-climate models (CCMs) and chemistry-transport models (CTMs) have historically disagreed in their reactive nitrogen budgets outside of polluted continental regions, and we have lacked in situ observations with which to evaluate them. Here, we compare and evaluate the NOy budget of six global models (GEOS-Chem CTM, GFDL AM3 CCM, GISS E2.1 CCM, GMI CTM, NCAR CAM CCM, and UCI CTM) using new observations of total reactive nitrogen and its member species from the NASA Atmospheric Tomography (ATom) mission. ATom has now completed two of its four planned deployments sampling the remote Pacific and Atlantic basins of both hemispheres with a comprehensive suite of measurements for constraining reactive photochemistry. All six models have simulated conditions climatologically similar to the deployments. The GMI and GEOS-Chem CTMs have in addition performed hindcast simulations using the MERRA-2 reanalysis, and have been sampled along the flight tracks. We evaluate the performance of the models relative to the observations, and identify factors contributing to their disparate behavior using known differences in model oxidation mechanisms, heterogeneous loss pathways, lightning and surface emissions, and physical loss processes.
Performance Analysis of Entropy Methods on K Means in Clustering Process
NASA Astrophysics Data System (ADS)
Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib
2017-12-01
K Means is a non-hierarchical data clustering method that attempts to partition the data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may cause two nearby points to be selected as two centroids. Therefore, the entropy method is used here to determine the starting points for K Means; this method can be used to determine weights and make a decision from a set of alternatives. Entropy is able to quantify the discriminating power among a multitude of data values: criteria with the highest variation in values receive the highest weight. The entropy method can thus assist the K Means process by determining the starting points, which are usually chosen at random, so that clustering converges more quickly, with fewer iterations than the standard K Means process. Using the postoperative patient dataset from the UCI Machine Learning Repository, and taking only 12 records as a worked example of the calculations, the entropy-assisted method reached the desired end result in only 2 iterations.
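One common formulation of the entropy-weight idea, paired with a deterministic weighted initialization, is sketched below; the paper's exact variant is not fully specified in the abstract, so the farthest-point selection step is an assumption made for illustration.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method (one common formulation): features whose values
    vary most informatively receive the largest weights. Assumes X >= 0."""
    P = X / X.sum(axis=0)                              # column-normalize
    P = np.where(P > 0, P, 1e-12)                      # guard log(0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # per-feature entropy
    return (1 - E) / (1 - E).sum()

def weighted_init(X, k, w):
    """Pick k starting centroids by farthest-point traversal under the
    entropy-weighted distance, replacing random initialization."""
    centroids = [X[np.argmax((w * X).sum(axis=1))]]    # highest weighted score
    for _ in range(k - 1):
        d = np.min([np.sum(w * (X - c) ** 2, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids)

X = np.abs(np.random.default_rng(0).normal(size=(12, 4)))  # 12 records, as in the example
w = entropy_weights(X)
print(weighted_init(X, k=3, w=w))
```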
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the numbers of support vectors and the CPU times of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended for a new data set by the recommendation model, according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
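The pipeline is a standard multi-label meta-learning setup; the skeleton below uses synthetic placeholders for the meta-features and kernel labels (the real study uses 132 UCI sets, eleven kernels, and five multi-label methods) and assumes scikit-learn.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# Rows are data sets described by meta-features (data characteristics); the
# multi-label target marks every kernel that performed acceptably on that
# data set. All numbers below are synthetic placeholders.
rng = np.random.default_rng(0)
meta_features = rng.random((40, 5))                  # 40 "data sets", 5 characteristics
applicable_kernels = rng.integers(0, 2, (40, 4))     # 4 kernels, multi-label target

model = MultiOutputClassifier(RandomForestClassifier(random_state=0))
model.fit(meta_features, applicable_kernels)

new_dataset_profile = rng.random((1, 5))             # characteristics of a new data set
print(model.predict(new_dataset_profile))            # recommended kernel set, e.g. [[1 0 1 0]]
```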
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not directly used as inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to expanded weights of small norm, and as a result the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets, named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD), are selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) is built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS is developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that the IFLNN-PLS can significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
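A minimal sketch of the two ideas named above, under assumed thresholds and a trigonometric expansion (the paper's exact expansion and weighting scheme are not reproduced here): expand the inputs, keep only expansions with high input-output correlation, and compute the output weights with PLS instead of backpropagation.

```python
# Illustrative analogue of SNEWHIOC-FLNN / IFLNN-PLS: functional-link
# expansion, correlation-based selection of expanded variables, and
# closed-form weights via partial least squares. Thresholds are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Functional-link expansion (a trigonometric basis is one common choice).
expanded = np.hstack([X, np.sin(np.pi * X), np.cos(np.pi * X), X ** 2])

# Keep expanded variables with high absolute input-output correlation.
corr = np.array([abs(np.corrcoef(expanded[:, j], y)[0, 1])
                 for j in range(expanded.shape[1])])
selected = expanded[:, corr > 0.2]          # assumed correlation threshold

# PLS computes the connection weights in closed form (no backpropagation).
model = PLSRegression(n_components=min(5, selected.shape[1])).fit(selected, y)
print("train R^2:", model.score(selected, y))
```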
NASA Astrophysics Data System (ADS)
Larour, Eric; Utke, Jean; Bovin, Anton; Morlighem, Mathieu; Perez, Gilberto
2016-11-01
Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present both the adaptations required in the way the software is designed and written and generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
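The core mechanism, adjoint computation via operator overloading, can be illustrated in a few lines: a taped scalar type records the forward pass through overloaded operators so that sensitivities can be propagated backwards. This toy Python analogue is purely didactic; ISSM does this in C++ over MPI-distributed state.

```python
# Didactic sketch of reverse-mode automatic differentiation by operator
# overloading (the same principle ISSM uses, in miniature; not ISSM code).
class Adjoint:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Adjoint) else Adjoint(other)
        return Adjoint(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Adjoint) else Adjoint(other)
        return Adjoint(self.value * other.value,
                       [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Propagate the sensitivity seed to each parent via the chain rule.
        self.grad += seed
        for parent, local_derivative in self.parents:
            parent.backward(seed * local_derivative)

# Misfit-like scalar J(x) = x*x + 3*x; dJ/dx = 2x + 3 = 7 at x = 2.
x = Adjoint(2.0)
J = x * x + x * 3.0
J.backward()
print("dJ/dx =", x.grad)  # 7.0
```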
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Morlighem, M.
2016-12-01
Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model, written in C++ and designed for parallel execution with MPI. We present both the adaptations required in the way the software is designed and written and generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of ISSM. We present a comprehensive approach to 1) carry out type changing through ISSM, hence facilitating operator overloading, 2) bind to external solvers such as MUMPS and GSL-LU, and 3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the North-East Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, Central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica, such as surface altimetry, surface velocities, and/or gravity measurements.
Simulating Ice Dynamics in the Amundsen Sea Sector
NASA Astrophysics Data System (ADS)
Schwans, E.; Parizek, B. R.; Morlighem, M.; Alley, R. B.; Pollard, D.; Walker, R. T.; Lin, P.; St-Laurent, P.; LaBirt, T.; Seroussi, H. L.
2017-12-01
Thwaites and Pine Island Glaciers (TG; PIG) exhibit patterns of dynamic retreat forced from their floating margins, and could act as gateways for destabilization of deep marine basins in the West Antarctic Ice Sheet (WAIS). Poorly constrained basal conditions can cause model predictions to diverge. Thus, there is a need for efficient simulations that account for shearing within the ice column, and include adequate basal sliding and ice-shelf melting parameterizations. To this end, UCI/NASA JPL's Ice Sheet System Model (ISSM) with coupled SSA/higher-order physics is used in the Amundsen Sea Embayment (ASE) to examine threshold behavior of TG and PIG, highlighting areas particularly vulnerable to retreat from oceanic warming and ice-shelf removal. These moving-front experiments will aid in targeting critical areas for additional data collection in ASE as well as for weighting accuracy in further melt parameterization development. Furthermore, a sub-shelf melt parameterization, resulting from Regional Ocean Modeling System (ROMS; St-Laurent et al., 2015) and coupled ISSM-Massachusetts Institute of Technology general circulation model (MITgcm; Seroussi et al., 2017) output, is incorporated and initially tested in ISSM. Data-guided experiments include variable basal conditions and ice hardness, and are also forced with constant modern climate in ISSM, providing valuable insight into i) effects of different basal friction parameterizations on ice dynamics, illustrating the importance of constraining the variable bed character beneath TG and PIG; ii) the impact of including vertical shear in ice flow models of outlet glaciers, confirming its role in capturing complex feedbacks proximal to the grounding zone; and iii) ASE's sensitivity to sub-shelf melt and ice-front retreat, possible thresholds, and how these affect ice-flow evolution.
Data-Driven High-Throughput Prediction of the 3D Structure of Small Molecules: Review and Progress
Andronico, Alessio; Randall, Arlo; Benz, Ryan W.; Baldi, Pierre
2011-01-01
Accurate prediction of the 3D structure of small molecules is essential in order to understand their physical, chemical, and biological properties including how they interact with other molecules. Here we survey the field of high-throughput methods for 3D structure prediction and set up new target specifications for the next generation of methods. We then introduce COSMOS, a novel data-driven prediction method that utilizes libraries of fragment and torsion angle parameters. We illustrate COSMOS using parameters extracted from the Cambridge Structural Database (CSD) by analyzing their distribution and then evaluating the system’s performance in terms of speed, coverage, and accuracy. Results show that COSMOS represents a significant improvement when compared to the state-of-the-art, particularly in terms of coverage of complex molecular structures, including metal-organics. COSMOS can predict structures for 96.4% of the molecules in the CSD [99.6% organic, 94.6% metal-organic] whereas the widely used commercial method CORINA predicts structures for 68.5% [98.5% organic, 51.6% metal-organic]. On the common subset of molecules predicted by both methods COSMOS makes predictions with an average speed per molecule of 0.15s [0.10s organic, 0.21s metal-organic], and an average RMSD of 1.57Å [1.26Å organic, 1.90Å metal-organic], and CORINA makes predictions with an average speed per molecule of 0.13s [0.18s organic, 0.08s metal-organic], and an average RMSD of 1.60Å [1.13Å organic, 2.11Å metal-organic]. COSMOS is available through the ChemDB chemoinformatics web portal at: http://cdb.ics.uci.edu/. PMID:21417267
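The RMSD figures above are computed after optimal superposition of predicted and reference conformers; the Kabsch algorithm is the standard way to do this, sketched here on toy coordinates (not COSMOS code).

```python
# RMSD between two conformers after optimal rotation/translation, via the
# Kabsch (SVD) algorithm. Coordinates below are synthetic test data.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between Nx3 conformers P and Q after optimal superposition."""
    P = P - P.mean(axis=0)             # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)  # optimal rotation from the SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against improper rotations
    R = Vt.T @ D @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

predicted = np.random.default_rng(2).normal(size=(12, 3))
rotation = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
reference = predicted @ rotation       # a rotated copy of the same molecule
print("RMSD (Angstrom):", kabsch_rmsd(predicted, reference))  # ~0
```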
Trifluoroacetic Acid from Degradation of HCFCs and HFCs: A Three-dimensional Modeling Study
NASA Technical Reports Server (NTRS)
Kotamarthi, V. R.; Rodriguez, J. M.; Ko, M. K. W.; Tromp, T. K.; Sze, N. D.
1998-01-01
Trifluoroacetic acid (TFA; CF3COOH) is produced by the degradation of the halocarbon replacements HFC-134a, HCFC-124, and HCFC-123. The formation of TFA occurs by HFC/HCFC reacting with OH to yield CF3COX (X = F or Cl), followed by in-cloud hydrolysis of CF3COX to form TFA. The TFA formed in the clouds may be reevaporated but is finally deposited onto the surface by washout or dry deposition. Concern has been expressed about the possible long-term accumulation of TFA in certain aquatic environments, pointing to the need to obtain information on the concentrations of TFA in rainwater over scales ranging from local to continental. Based on projected concentrations for HFC-134a, HCFC-124, and HCFC-123 of 80, 10, and 1 pptv in the year 2010, mass conservation arguments imply an annually averaged global concentration of 0.16 microg/L if washout were the only removal mechanism for TFA. We present 3-D simulations of the HFC/HCFC precursors of TFA that include the rates of formation and deposition of TFA based on assumed future emissions. An established (GISS/Harvard/UCI) but coarse-resolution (8 deg latitude by 10 deg longitude) chemical transport model was used. An annually averaged rainwater concentration of 0.12 microg/L (global) was calculated for the year 2010, when both washout and dry deposition are included as the loss mechanisms for TFA from the atmosphere. For some large regions in midnorthern latitudes, values are larger, 0.15-0.20 microg/L. The highest monthly averaged rainwater concentrations of TFA for northern midlatitudes were calculated for the month of July, corresponding to 0.3-0.45 microg/L in parts of North America and Europe. Recent laboratory experiments have suggested that a substantial amount of vibrationally excited CF3CHFO is produced in the degradation of HFC-134a, decreasing the yield of TFA from this compound by 60%. This decrease would reduce the calculated amounts of TFA in rainwater in the year 2010 by 26%, for the same projected concentrations of precursors.
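The 0.16 microg/L mass-conservation figure can be reproduced to order of magnitude by dividing an annual TFA deposition mass by the annual global precipitation volume; both numbers below are assumed round values chosen for illustration, not the paper's inputs.

```python
# Steady-state mass balance: mean rainwater concentration = annual TFA
# deposition mass / annual global precipitation volume. Assumed values only.
ANNUAL_PRECIP_L = 5.0e17          # ~5 x 10^17 L of global rainfall per year
ANNUAL_TFA_DEPOSITION_G = 8.0e10  # assumed ~80 kt/yr of TFA reaching rainwater

concentration_ug_per_L = ANNUAL_TFA_DEPOSITION_G * 1e6 / ANNUAL_PRECIP_L
print(f"mean rainwater TFA: {concentration_ug_per_L:.2f} microg/L")  # 0.16
```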
NASA Technical Reports Server (NTRS)
Kotamarthi, V. R.; Rodriquez, J. M.; Ko, M. K. W.; Tromp, T. K.; Sze, N. D.; Prather, Michael J.
1998-01-01
Trifluoroacetic acid (TFA; CF3COOH) is produced by the degradation of the halocarbon replacements HFC-134a, HCFC-124, and HCFC-123. The formation of TFA occurs by HFC/HCFC reacting with OH to yield CF3COX (X = F or Cl), followed by in-cloud hydrolysis of CF3COX to form TFA. The TFA formed in the clouds may be reevaporated but is finally deposited onto the surface by washout or dry deposition. Concern has been expressed about the possible long-term accumulation of TFA in certain aquatic environments, pointing to the need to obtain information on the concentrations of TFA in rainwater over scales ranging from local to continental. Based on projected concentrations for HFC-134a, HCFC-124, and HCFC-123 of 80, 10, and 1 pptv in the year 2010, mass conservation arguments imply an annually averaged global concentration of 0.16 microg/L if washout were the only removal mechanism for TFA. We present 3-D simulations of the HFC/HCFC precursors of TFA that include the rates of formation and deposition of TFA based on assumed future emissions. An established (GISS/Harvard/UCI) but coarse-resolution (8 deg latitude by 10 deg longitude) chemical transport model was used. The annually averaged rainwater concentration of 0.12 microg/L (global) was calculated for the year 2010, when both washout and dry deposition are included as the loss mechanisms for TFA from the atmosphere. For some large regions in midnorthern latitudes, values are larger, 0.15-0.20 microg/L. The highest monthly averaged rainwater concentrations of TFA for northern midlatitudes were calculated for the month of July, corresponding to 0.3-0.45 microg/L in parts of North America and Europe. Recent laboratory experiments have suggested that a substantial amount of vibrationally excited CF3CHFO is produced in the degradation of HFC-134a, decreasing the yield of TFA from this compound by 60%. This decrease would reduce the calculated amounts of TFA in rainwater in the year 2010 by 26%, for the same projected concentrations of precursors.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window range of 200-500 ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
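A back-of-envelope version of the timing-window trade-off: true coincidences saturate as the window approaches the 85 ns intermediate-state lifetime, while randoms grow linearly with the window, so NECR = T^2/(T+R) (scatter neglected, matching the in-air simulation) peaks at an intermediate window. All rates below are assumed.

```python
# Assumed-rate illustration of NECR optimization for a cascade source.
import numpy as np

singles_171, singles_245 = 2.0e4, 2.5e4      # assumed singles rates (cps)
cascade_rate = 1.0e3                          # assumed detected cascade rate
tau_ns = 85.0 / np.log(2)                     # 85 ns half-life -> mean life

for window_ns in (100, 200, 300, 500, 1000):
    # Fraction of cascades whose second photon arrives inside the window.
    trues = cascade_rate * (1 - np.exp(-window_ns / tau_ns))
    # Standard randoms estimate: R = 2 * tau_window * r1 * r2.
    randoms = 2 * window_ns * 1e-9 * singles_171 * singles_245
    necr = trues**2 / (trues + randoms)
    print(f"window {window_ns:4d} ns: T={trues:7.1f} "
          f"R={randoms:6.1f} NECR={necr:7.1f}")
```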
Kayala, Matthew A; Baldi, Pierre
2012-10-22
Proposing reasonable mechanisms and predicting the course of chemical reactions is important to the practice of organic chemistry. Approaches to reaction prediction have historically used obfuscating representations and manually encoded patterns or rules. Here we present ReactionPredictor, a machine learning approach to reaction prediction that models elementary, mechanistic reactions as interactions between approximate molecular orbitals (MOs). A training data set of productive reactions known to occur at reasonable rates and yields and verified by inclusion in the literature or textbooks is derived from an existing rule-based system and expanded upon with manual curation from graduate level textbooks. Using this training data set of complex polar, hypervalent, radical, and pericyclic reactions, a two-stage machine learning prediction framework is trained and validated. In the first stage, filtering models trained at the level of individual MOs are used to reduce the space of possible reactions to consider. In the second stage, ranking models over the filtered space of possible reactions are used to order the reactions such that the productive reactions are the top ranked. The resulting model, ReactionPredictor, perfectly ranks polar reactions 78.1% of the time and recovers all productive reactions 95.7% of the time when allowing for small numbers of errors. Pericyclic and radical reactions are perfectly ranked 85.8% and 77.0% of the time, respectively, rising to >93% recovery for both reaction types with a small number of allowed errors. Decisions about which of the polar, pericyclic, or radical reaction type ranking models to use can be made with >99% accuracy. Finally, for multistep reaction pathways, we implement the first mechanistic pathway predictor using constrained tree-search to discover a set of reasonable mechanistic steps from given reactants to given products. Webserver implementations of both the single step and pathway versions of ReactionPredictor are available via the chemoinformatics portal http://cdb.ics.uci.edu/.
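The first-stage filtering idea, pruning implausible electron sources and sinks with per-MO classifiers before the expensive ranking stage, can be sketched as follows; the descriptors and labels are synthetic stand-ins for the topological and physicochemical features used by the system.

```python
# Sketch of a stage-1 reactivity filter: discard an MO only when the model
# is very confident it is unreactive, mimicking the high-recall pruning goal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
mo_features = rng.normal(size=(5000, 8))             # descriptors per MO
is_reactive = (mo_features[:, 0] + 0.5 * mo_features[:, 1]
               + 0.2 * rng.normal(size=5000)) > 1.0  # synthetic ground truth

flt = LogisticRegression(max_iter=1000).fit(mo_features, is_reactive)
proba = flt.predict_proba(mo_features)[:, 1]

threshold = 0.01                                      # assumed cutoff
kept = proba >= threshold
print(f"pruned {100 * (1 - kept.mean()):.1f}% of MOs; "
      f"missed reactive: {np.mean(is_reactive & ~kept) * 100:.3f}%")
```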
Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs
Pérez, Ana G.; León, Lorenzo; Sanz, Carlos; de la Rosa, Raúl
2018-01-01
Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have mainly focused on improving the adaptation of the olive to different growing systems, the productivity, and the oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the latter stages of breeding programs. Due to their health-promoting and organoleptic properties, phenolic compounds are among the most important quality markers for virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulty of evaluating oil phenolic composition in a large number of samples and the limited knowledge of the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high-throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties (“Picual” and “Arbequina”) used as controls. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both the fruit and the oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes across the different environments. In summary, the high genotypic variance and the simplified procedure of the proposed methodology for fruit phenol evaluation seem convenient for breeding programs aiming to obtain new cultivars with improved phenolic profiles. PMID:29535752
Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.
Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis
2014-04-01
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image intensity normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
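For orientation, here is a generic local-statistics (Lee-type) despeckle filter of the same family as the toolbox's linear scaling filters; this is a textbook formulation in Python, not the toolbox's MATLAB implementation.

```python
# Minimal Lee-type despeckle filter: smooth multiplicative speckle in flat
# regions while preserving edges via a local-variance gain term.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, noise_var=0.05):
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = sq_mean - mean * mean
    gain = var / (var + noise_var)   # ~1 at edges, ~0 in homogeneous regions
    return mean + gain * (img - mean)

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                              # simple two-region phantom
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
print("noise std before/after:",
      speckled[:, 32:].std().round(3),
      lee_filter(speckled)[:, 32:].std().round(3))
```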
Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre
2011-01-01
Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally fitted parameters to those of existing models. Finally, we present a complete perturbation study of these parameters to reveal how they can significantly and differentially influence global flux and operational thresholds, suggesting that this modeling approach could help enable the comparative analysis of mitochondria from different systems and pathological states. The procedures and results are available in Mathematica notebooks at http://www.igb.uci.edu/tools/sb/mitochondria-modeling.html. PMID:21931590
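An illustrative composable rate law in the spirit described above multiplies donor/acceptor saturation terms by a thermodynamic force-to-flux factor that vanishes at equilibrium. This is a generic reversible form with invented parameters, not the authors' exact chemiosmotic rate law.

```python
# Generic reversible rate-law sketch: saturation kinetics times a
# force-to-flux factor (1 - Gamma/Keq). All parameter values are invented.
def transduction_flux(donor, acceptor, product, vmax=1.0,
                      km_d=0.1, km_a=0.2, keq=50.0):
    saturation = (donor / (km_d + donor)) * (acceptor / (km_a + acceptor))
    gamma = product / max(donor * acceptor, 1e-12)  # mass-action ratio
    return vmax * saturation * (1.0 - gamma / keq)  # zero flux at equilibrium

for product in (0.0, 1.0, 5.0):
    print(product, transduction_flux(donor=1.0, acceptor=1.0, product=product))
```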
Perioperative fasting time among cancer patients submitted to gastrointestinal surgeries.
Pereira, Nayara de Castro; Turrini, Ruth Natalia Teresa; Poveda, Vanessa de Brito
2017-05-25
To identify the length of perioperative fasting among patients submitted to gastrointestinal cancer surgeries. Retrospective cohort study, developed by consulting the medical records of 128 patients submitted to gastrointestinal cancer surgeries. The mean total length of fasting was 107.6 hours. The total length of fasting was significantly associated with the number of symptoms presented before (p=0.000) and after the surgery (p=0.007), the length of hospital stay (p=0.000), blood transfusion (p=0.013), nasogastric tube (p=0.001) and nasojejunal tube (p=0.003), postoperative admission at the ICU (p=0.002), postoperative death (p=0.000) and length of preoperative fasting (p=0.000). The length of fasting is associated with complications that affect the quality of the patients' postoperative recovery and nurses' work. The nursing team should be alert to this aspect and, being responsible for overseeing the patients' interest, should not permit the unnecessary extension of fasting.
Wright, Rachel L.
2016-01-01
In para-cycling, competitors are classed based on functional impairment resulting in cyclists with neurological and locomotor impairments competing against each other. In Paralympic competition, classes are combined by using a factoring adjustment to race times to produce the overall medallists. Pacing in short-duration track cycling events is proposed to utilize an “all-out” strategy in able-bodied competition. However, pacing in para-cycling may vary depending on the level of impairment. Analysis of the pacing strategies employed by different classification groups may offer scope for optimal performance; therefore, this study investigated the pacing strategy adopted during the 1-km time trial (TT) and 500-m TT in elite C1 to C3 para-cyclists and able-bodied cyclists. Total times and intermediate split times (125-m intervals; measured to 0.001 s) were obtained from the C1-C3 men's 1-km TT (n = 28) and women's 500-m TT (n = 9) from the 2012 Paralympic Games and the men's 1-km TT (n = 19) and women's 500-m TT (n = 12) from the 2013 UCI World Track Championships from publicly available video. Split times were expressed as actual time, factored time (for the para-cyclists) and as a percentage of total time. A two-way analysis of variance was used to investigate differences in split times between the different classifications and the able-bodied cyclists in the men's 1-km TT and between the para-cyclists and able-bodied cyclists in the women's 500-m TT. The importance of position at the first split was investigated with Kendall's Tau-b correlation. The first 125-m split time was the slowest for all cyclists, representing the acceleration phase from a standing start. C2 cyclists were slowest at this 125-m split, probably due to a combination of remaining seated in this acceleration phase and a high proportion of cyclists in this group being trans-femoral amputees. Not all cyclists used aero-bars, preferring to use drop, flat or bullhorn handlebars. Split times increased in the later stages of the race, demonstrating a positive pacing strategy. In the shorter women's 500-m TT, rank at the first split was more strongly correlated with final position than in the longer men's 1-km TT. In conclusion, a positive pacing strategy was adopted by the different para-cycling classes. PMID:26834643
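The split-position analysis rests on Kendall's Tau-b between rank at the first 125-m split and final rank; a minimal version with made-up ranks:

```python
# Kendall's Tau-b between first-split rank and final rank (toy ranks only).
from scipy.stats import kendalltau

first_split_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9]
final_rank       = [1, 3, 2, 4, 6, 5, 7, 9, 8]
tau, p_value = kendalltau(first_split_rank, final_rank)
print(f"tau-b = {tau:.2f}, p = {p_value:.3f}")
```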
L2-norm multiple kernel learning and its application to biomedical data fusion
2010-01-01
Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results: We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large scale data set processing. Conclusions: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has comparable performance to the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability: The MATLAB code of the algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363
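A simplified stand-in for non-sparse kernel combination: weight base kernels by their alignment with the labels, normalize the weights in L2 norm so none is forced to zero, and train an SVM on the combined kernel. The paper's semi-infinite-programming optimizers are not reproduced here; this heuristic only illustrates the non-sparse weighting idea.

```python
# Heuristic L2-normalized kernel combination (illustrative, not the paper's
# MKL solver): alignment-based weights followed by a precomputed-kernel SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
yy = np.outer(2 * y - 1, 2 * y - 1)          # ideal target kernel from labels

kernels = [linear_kernel(X), rbf_kernel(X), polynomial_kernel(X, degree=2)]
weights = np.array([(K * yy).sum() / np.linalg.norm(K) for K in kernels])
weights = np.clip(weights, 0, None)
weights /= np.linalg.norm(weights)           # unit L2 norm: non-sparse weights

K_combined = sum(w * K for w, K in zip(weights, kernels))
clf = SVC(kernel="precomputed").fit(K_combined, y)
print("weights:", weights.round(3), "train acc:", clf.score(K_combined, y))
```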
NASA Astrophysics Data System (ADS)
An, L.; Rignot, E.; Rivera, A.; Bunetta, M.
2012-12-01
The North and South Patagonia Ice Fields are the largest ice masses outside Antarctica in the Southern Hemisphere. During the period 1995-2000, these glaciers lost ice at a rate equivalent to a sea level rise of 0.105 ± 0.001 mm/yr. In more recent years, the glaciers have been thinning more quickly than can be explained by warmer air temperatures and decreased precipitation. A possible cause is an increase in flow speed due to enhanced ablation of the submerged glacier fronts. To understand the dynamics of these glaciers and how they change with time, it is critical to have a detailed view of their ice thickness, the depth of the glacier bed below sea or lake level, how far inland these glaciers remain below sea or lake level, and whether bumps or hollows in the bed may slow down or accelerate their retreat. A grid of free-air gravity data over the Patagonia glaciers was collected in May 2012 and October 2012, funded by the Gordon and Betty Moore Foundation (GBMF), to measure ice thickness and sea floor bathymetry. This survey combines the Sander Geophysics Limited (SGL) AIRGrav system, SGL laser altimetry, and the Chilean CECS/UCI ANDREA-2 radar. To obtain high-resolution and high-precision gravity data, the helicopter operates at 50 knots (25.7 m/s) with a grid spacing of 400 m and collects gravity data at the sub-mGal level (1 Gal = 1 galileo = 1 cm/s²) near glacier fronts. We use data from the May 2012 survey to derive preliminary high-resolution, high-precision thickness estimates and bathymetry maps of Jorge Montt Glacier and San Rafael Glacier. Boat bathymetry data are used to optimize the inversion of gravity over water and radar-derived thickness over glacier ice. The bathymetry maps will provide a breakthrough in our knowledge of the ice fields and enable a new era of glacier modeling and understanding that is not possible at present because ice thickness is not known.
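For first-order intuition on how gravity anomalies map to ice thickness, the infinite-slab (Bouguer) approximation gives dg = 2πGΔρh; the real inversion here is fully 3D and constrained by radar and bathymetry. The densities and anomaly below are assumed values.

```python
# Order-of-magnitude slab inversion: gravity anomaly -> equivalent thickness.
import numpy as np

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
drho = 917.0 - 2670.0            # ice minus bedrock density, kg/m^3 (assumed)
dg_mgal = -20.0                  # observed anomaly, mGal (hypothetical)

h = dg_mgal * 1e-5 / (2 * np.pi * G * drho)   # 1 mGal = 1e-5 m/s^2
print(f"slab-equivalent ice thickness: {h:.0f} m")  # ~270 m
```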
Hybrid fuzzy cluster ensemble framework for tumor clustering from biomolecular data.
Yu, Zhiwen; Chen, Hantao; You, Jane; Han, Guoqiang; Li, Le
2013-01-01
Cancer class discovery using biomolecular data is one of the most important tasks for cancer diagnosis and treatment. Tumor clustering from gene expression data provides a new way to perform cancer class discovery. Most of the existing research adopts single clustering algorithms to perform tumor clustering from biomolecular data, which lack robustness, stability, and accuracy. To further improve the performance of tumor clustering from biomolecular data, we introduce fuzzy theory into the cluster ensemble framework for tumor clustering from biomolecular data, and propose four kinds of hybrid fuzzy cluster ensemble frameworks (HFCEF), named HFCEF-I, HFCEF-II, HFCEF-III, and HFCEF-IV, respectively, to identify samples that belong to different types of cancers. The difference between HFCEF-I and HFCEF-II is that they adopt different ensemble generator approaches to generate a set of fuzzy matrices in the ensemble. Specifically, HFCEF-I applies the affinity propagation algorithm (AP) to perform clustering on the sample dimension and generates a set of fuzzy matrices in the ensemble based on the fuzzy membership function and base samples selected by AP. HFCEF-II adopts AP to perform clustering on the attribute dimension, generates a set of subspaces, and obtains a set of fuzzy matrices in the ensemble by performing fuzzy c-means on the subspaces. Compared with HFCEF-I and HFCEF-II, HFCEF-III and HFCEF-IV combine the characteristics of both: HFCEF-III combines HFCEF-I and HFCEF-II in a serial way, while HFCEF-IV integrates them in a concurrent way. HFCEFs adopt suitable consensus functions, such as the fuzzy c-means algorithm or the normalized cut algorithm (Ncut), to summarize the generated fuzzy matrices and obtain the final results. Experiments on real data sets from the UCI machine learning repository and cancer gene expression profiles illustrate that 1) the proposed hybrid fuzzy cluster ensemble frameworks work well on real data sets, especially biomolecular data, and 2) the proposed approaches are able to provide more robust, stable, and accurate results when compared with state-of-the-art single clustering algorithms and traditional cluster ensemble approaches.
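A hard-clustering analogue of the ensemble idea (the paper's frameworks use fuzzy membership matrices instead of 0/1 labels): accumulate agreement across base clusterings into a co-association matrix and summarize it with a spectral, Ncut-style consensus.

```python
# Co-association consensus clustering: many base k-means runs, one spectral
# summary. A simplified hard-label stand-in for the fuzzy ensembles above.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=5)
n = len(X)

coassoc = np.zeros((n, n))
for seed in range(20):                         # ensemble of base clusterings
    labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= 20.0                                # fraction of co-assignments

consensus = SpectralClustering(n_clusters=3, affinity="precomputed",
                               random_state=0).fit_predict(coassoc)
print("consensus cluster sizes:", np.bincount(consensus))
```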
WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Z; Swanson, T; O’Connor, M
2015-06-15
Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who had just undergone whole body PET/CT exams were imaged prone with the breast pendulant at 5-10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0%, respectively. System sensitivity was 2.3% on axis, and 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm on axis and 2.90 mm at 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder to background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter), each with an activity ratio of 4:1, were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with the 3D MLEM algorithm, >20 iterations, and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.
Learning to predict chemical reactions.
Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre
2011-09-26
Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal ( http://cdb.ics.uci.edu) under the Toolkits section.
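The second-stage ranking idea, learning a scoring function from pairwise preferences between productive and unproductive steps, can be sketched with a linear model trained on feature differences; the features here are synthetic stand-ins for the MO-interaction descriptors.

```python
# Pairwise-preference ranking sketch: fit a classifier on feature differences
# so its weight vector becomes a scoring function over mechanistic steps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
w_true = rng.normal(size=6)
steps = rng.normal(size=(400, 6))
productivity = steps @ w_true                   # latent "true" score

# Build preference pairs (i preferred over j) as feature differences.
i, j = rng.integers(0, 400, size=(2, 2000))
diffs, prefs = steps[i] - steps[j], productivity[i] > productivity[j]

ranker = LogisticRegression(max_iter=1000).fit(diffs, prefs)
score = steps @ ranker.coef_[0]                 # learned scoring function
top = np.argsort(score)[::-1][:5]
print("top-ranked steps:", top, "true best:", int(np.argmax(productivity)))
```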
CAMPARE and Cal-Bridge: Two Institutional Networks Increasing Diversity in Astronomy
NASA Astrophysics Data System (ADS)
Rudolph, Alexander L.; Impey, Chris David; Phillips, Cynthia B.; Povich, Matthew S.; Prather, Edward E.; Smecker-Hane, Tammy A.
2015-01-01
We describe two programs, CAMPARE and Cal-Bridge, with the common mission of increasing participation of groups traditionally underrepresented in astronomy, particularly underrepresented minorities and women, through summer research opportunities, in the case of CAMPARE, scholarships in the case of Cal-Bridge, and significant mentoring in both programs, leading to an increase in their numbers successfully pursuing a PhD in the field. CAMPARE is an innovative REU-like summer research program, currently in its sixth year, comprising a network of comprehensive universities and community colleges in Southern California and Arizona (most of which are minority serving institutions), and ten major research institutions (University of Arizona Steward Observatory, the SETI Institute, JPL, Caltech, and the five Southern California UC campuses, UCLA, UCI, UCSD, UCR, and UCSB). In its first five summers, CAMPARE sent a total of 49 students from 10 different CSU and community college campuses to 5 research sites of the program. Of these 49 participants, 25 are women and 24 are men; 22 are Hispanic, 4 are African American, and 1 is Native American, including 6 female Hispanic and 2 female African-American participants. Twenty-one (21) CAMPARE participants have graduated from college, and more than half (11) have attended or are attending a graduate program, including 8 enrolled in PhD or Master's-to-PhD programs. Over twenty CAMPARE students have presented at the AAS and other national meetings. The Cal-Bridge program is a diverse network of higher education institutions in Southern California, including 5 UC campuses, 8 CSU campuses, and 7 community colleges dedicated to the goal of increasing the number of underrepresented minority and female students attending graduate school in astronomy or related fields. We have recently selected our inaugural group of five 2014 Cal-Bridge Scholars, including four women (two Hispanic and one part Native American), and one Hispanic man. Once selected, the Cal-Bridge Scholars benefit from three years of financial support, intensive, joint mentoring by CSU and UC faculty, professional development workshops, and exposure to research opportunities at the participating UC campuses.
A Biokinetic Model for Cesium-137 in the Fetus
NASA Astrophysics Data System (ADS)
Jones, Karen Lynn
1995-01-01
Previously, there was no method to determine the dose to the embryo, fetus, fetal organs or placenta from radionuclides within the embryo, fetus, or placenta. In the past, the dose to the fetus was assumed to be equivalent to the dose to the uterus. Watson estimated specific absorbed fractions from various maternal organs to the uterine contents, which included the fetus, placenta, and amniotic fluid, and Sikov estimated the absorbed dose to the embryo/fetus after assuming 1 uCi of radioactivity was made available to the maternal blood [1,2]. However, this method did not allow for the calculation of a dose to individual fetal organs or the placenta. The radiation dose to the embryo or fetus from Cs-137 in the fetus and placenta due to chronic ingestion by the mother was determined. The fraction of Cs-137 in the maternal plasma crossing the placenta to the fetal plasma was estimated. The absorbed dose from Cs-137 in each modelled fetal organ was estimated. Since there has been more research regarding potassium in the human body, and particularly in the pregnant woman, a biokinetic model for potassium was developed first and used as a basis and confirmation of the cesium model. Available pertinent information in physiology, embryology, biokinetics, and radiation dosimetry was utilized. Due to the rapid growth of the fetus and placenta, the pregnancy was divided into four gestational periods. The numerous physiological changes that occur during pregnancy were considered, and an appropriate biokinetic model was developed for each of the gestational periods. The amount of cesium in the placenta, embryo, and fetus was estimated for each period. The dose to the fetus from cesium deposited in the embryo or fetus and in the placenta was determined for each period using Medical Internal Radiation Dose (MIRD) methodology. An uncertainty analysis was also performed to account for the variability of the parameters in the biokinetic model based on the experimental data. The uncertainty in the dose estimate was calculated by propagation of errors after determining the uncertainty in the fetal and placental mass estimates and the effective half-life.
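The underlying MIRD arithmetic is compact: the mean dose to a target equals the cumulated activity in the source region times an S value, with cumulated activity computed as 1.443 times the initial activity times the effective half-life for a single-compartment clearance. The parameter values below are placeholders, not the dissertation's fetal parameters.

```python
# Core MIRD dose arithmetic with placeholder parameters.
A0_Bq = 3.7e4                 # initial activity in source region (1 uCi)
t_eff_h = 70.0 * 24           # assumed effective half-life (biological+physical)
S_Gy_per_Bq_s = 1.0e-14       # assumed S value, source -> target

# A_tilde = A0 * T_eff / ln(2) = 1.443 * A0 * T_eff
cumulated_Bq_s = A0_Bq * 1.443 * t_eff_h * 3600
dose_Gy = cumulated_Bq_s * S_Gy_per_Bq_s
print(f"cumulated activity: {cumulated_Bq_s:.3e} Bq*s, dose: {dose_Gy:.2e} Gy")
```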
Learning to Predict Chemical Reactions
Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.
2011-01-01
Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. From machine learning, we pose identifying productive mechanistic steps as a statistical ranking (information retrieval) problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom-level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert system does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal (http://cdb.ics.uci.edu) under the Toolkits section. PMID:21819139
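As a rough illustration of the two-stage scheme (a reactivity filter that prunes candidates, followed by a pairwise ranking model), consider the sketch below. It is not the authors' implementation: the features, prune threshold, and data are hypothetical stand-ins for the MO-interaction descriptors described above.

```python
# Minimal two-stage filter-then-rank sketch on synthetic data (see caveats
# in the text above). Stage 1 prunes candidates predicted unproductive with
# high confidence; stage 2 learns a pairwise ranking over the survivors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))               # candidate steps x descriptors
productive = rng.random(200) < 0.05          # a few steps are productive

# Stage 1: reactivity filter, pruning confident negatives.
filter_clf = LogisticRegression(max_iter=1000).fit(X, productive)
keep = filter_clf.predict_proba(X)[:, 1] > 0.01

# Stage 2: pairwise ranking. Train on feature differences of
# (productive, unproductive) pairs so productive steps score higher.
pos, neg = X[keep & productive], X[keep & ~productive]
pairs = np.array([p - n for p in pos for n in neg[:20]])
ranker = LogisticRegression(max_iter=1000).fit(
    np.vstack([pairs, -pairs]),
    np.hstack([np.ones(len(pairs)), np.zeros(len(pairs))]),
)

# Rank surviving candidates; check whether the top four contain a productive step.
scores = ranker.decision_function(X[keep])
top4 = np.argsort(-scores)[:4]
print("productive step in top four:", bool(productive[keep][top4].any()))
```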
Bed topography of Jakobshavn Isbræ, Greenland from high-resolution gravity data
NASA Astrophysics Data System (ADS)
An, L.; Rignot, E. J.; Morlighem, M.; Paden, J. D.; Holland, D. M.
2015-12-01
Jakobshavn Isbræ (JKS) is one of the largest marine-terminating outlet glaciers in Greenland, feeding a fjord about 800 m deep on the west coast. JKS has sped up more than twofold since 2002 and contributed nearly 1 mm to global sea level rise during the period from 2000 to 2011. Holland et al. (2008) posit that these changes coincided with a change in ocean conditions beneath the former ice tongue, yet little is known about the depth of the glacier at and upstream of its grounding line, and the sea floor depth of the fjord is not well known either. Here, we present a new approach to infer the glacier bed topography, ice thickness, and sea floor bathymetry near the grounding line of JKS using high-resolution airborne gravity data from AirGRAV, collected in August 2012 from a helicopter platform. The gravity data were combined with radio echo sounding data, discrete point soundings in the fjord, and the mass conservation approach on land ice. AirGRAV acquired a 500 m spacing grid of free-air gravity data at 50 knots with sub-milligal accuracy, i.e., much higher than NASA Operation IceBridge (OIB)'s 5.2 km resolution at 290 knots. We use a 3D inversion of the gravity data, combining our observations with forward modeling of the surrounding gravity field, constrained at the boundary by radar echo soundings and point bathymetry. We reconstruct seamless bed topography at the grounding line that matches interior data and the sea floor bathymetry. The results reveal the true depth at the elbow of the terminal valley and the bed reversal in the proximity of the current grounding line. The analysis provides guidelines for future gravity surveys of narrow fjords in terms of spatial resolution and gravity precision. The results also demonstrate the practicality of using high-resolution gravity surveys to resolve bed topography near glacier snouts, in places where radar sounding has been significantly challenged in the past. The inversion results are critical to re-interpret the recent evolution of JKS and reduce uncertainties in projecting its future contribution to sea level. This work was conducted at UCI and at Caltech's Jet Propulsion Laboratory under a contract with the Gordon and Betty Moore Foundation and with NASA's Cryospheric Science Program.
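For intuition about how a gravity anomaly constrains ice thickness, a first-order calculation uses the infinite-slab (Bouguer) relation Δg = 2πGΔρh. The snippet below illustrates only that back-of-the-envelope relation, not the 3D inversion used in the study; the -50 mGal anomaly is an invented example value.

```python
# Infinite-slab (Bouguer) thickness estimate from a free-air gravity low.
# Illustration only; densities are typical values, the anomaly is made up.
from math import pi

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO_ICE = 917.0        # kg m^-3
RHO_ROCK = 2670.0      # kg m^-3

def thickness_from_anomaly(anomaly_mgal: float, delta_rho: float) -> float:
    """Invert dg = 2*pi*G*drho*h for h; 1 mGal = 1e-5 m s^-2."""
    return anomaly_mgal * 1e-5 / (2.0 * pi * G * delta_rho)

# Ice occupying a valley otherwise filled with rock gives a negative
# density contrast, so a gravity low maps to ice (or water) thickness.
d_rho = RHO_ICE - RHO_ROCK                              # about -1753 kg m^-3
print(f"{thickness_from_anomaly(-50.0, d_rho):.0f} m")  # ~680 m for -50 mGal
```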
NASA Astrophysics Data System (ADS)
Randerson, J. T.
2016-12-01
Recent work has established that year-to-year variability in drought and fire within the Amazon responds to a dual forcing from ocean-atmosphere interactions in the tropical Pacific and North Atlantic. Teleconnections between the Pacific and the Amazon are strongest between October and March, when El Niño contributes to below-average precipitation during the wet season. A reduced build-up of soil moisture during the wet season, in turn, may limit water availability and transpiration in tropical forests during the following dry season, lowering surface humidity, drying fuels, and allowing fires to spread more easily through the understory. The delayed influence of soil moisture through this land-atmosphere coupling provides a means to predict fire season severity 3-6 months before the onset of the dry season. With the aim of creating new opportunities for forest conservation, we have developed an experimental seasonal fire forecasting system for the Amazon. The 2016 fire season severity forecast, released in June by UCI and NASA, predicts unusually high risk across eastern Peru, northern Bolivia, and Brazil. Several surface and satellite data streams confirm that El Niño teleconnections had a significant impact on wet season hydrology within the Amazon. Rainfall observations from the Global Precipitation Climatology Centre provided evidence that cumulative precipitation deficits during August-April were 1 to 2 standard deviations below the long-term mean for most of the basin. These observations were corroborated by strong negative terrestrial water storage anomalies measured by the Gravity Recovery and Climate Experiment, and by fluorescence and vegetation index observations from other sensors that indicated elevated canopy stress. By August 3rd, satellite observations showed above average fire activity in most, but not all, forecast regions. Using additional satellite observations that become available later this year, we plan to describe the full spatial and temporal pattern of fires within the Amazon during the 2016 dry season and evaluate the success of our forecast. As a part of this analysis, we will compare fires from 2016 with other years of extreme drought (i.e., 2005 and 2010), and assess how trends in land use, including regional changes in deforestation, modify El Niño-driven fire risk.
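Conceptually, the forecast reduces to regressing dry-season fire activity on wet-season ocean indices observed months in advance. The sketch below illustrates that idea on synthetic data; it is not the UCI/NASA forecast system, and the index names and coefficients are placeholders.

```python
# Toy seasonal forecast: least-squares regression of a fire-severity index
# on preceding wet-season SST anomalies (synthetic data, hypothetical names).
import numpy as np

rng = np.random.default_rng(1)
years = 15
nino34 = rng.normal(size=years)     # Oct-Mar tropical Pacific SST anomaly
natl = rng.normal(size=years)       # North Atlantic SST anomaly
fire = 1.2 * nino34 + 0.8 * natl + rng.normal(scale=0.5, size=years)

# Fit fire ~ b0 + b1*nino34 + b2*natl on the historical record.
A = np.column_stack([np.ones(years), nino34, natl])
coef, *_ = np.linalg.lstsq(A, fire, rcond=None)

# Issue a forecast months ahead of the dry season from observed SSTs.
forecast = coef @ np.array([1.0, 1.5, 0.6])   # strong El Nino-like year
print(f"predicted fire season severity index: {forecast:.2f}")
```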
A new, multi-resolution bedrock elevation map of the Greenland ice sheet
NASA Astrophysics Data System (ADS)
Griggs, J. A.; Bamber, J. L.; Grisbed Consortium
2010-12-01
Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as that from NASA's Operation IceBridge, become available. The University of Kansas has, in recent years, flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation data from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected by WISE (Warm Ice Sounding Experiment) covering the marginal areas along the southwest coast from 2009. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F. Rodriguez-Morales and L. Smith, CReSIS, University of Kansas; E. Rignot, JPL and University of California, Irvine; Y. Gim, JPL; J. Mouginot, University of California, Irvine; D. Kirchner, University of Iowa; I. Howat, Byrd Polar Research Center, Ohio State University; I. Joughin and B. Smith, University of Washington; T. Scambos, NSIDC; S. Martin, University of Washington; T. Wagner, NASA.
NASA Astrophysics Data System (ADS)
Millan, R.; Rignot, E. J.; Mouginot, J.; Menemenlis, D.; Morlighem, M.; Wood, M.
2016-12-01
Southeast Greenland has been one of the largest contributors to ice mass losses in Greenland in the last few decades, mostly as a result of changes in ice dynamics and, to a lesser extent, the steady increase in runoff. In 1996, the region was thinning up to the ice divide (Krabill et al., 1999) and the changes were clearly of an ice-dynamics nature. Ice-ocean interactions played a central role in triggering a faster, systematic retreat around 2002-2005, as water of Atlantic origin started to intrude the fjords in larger amounts due to a change in oceanic circulation in the Irminger Sea. The glacier response varied significantly from one glacier to the next in response to the oceanic change, which we attribute to variations in fjord bathymetry, geometric control on the glaciers, and the calving speed of the glaciers. This region is, however, characterized by a dearth of topography data: the fjords have never been mapped, and bed topography is challenging to obtain with radio echo sounding techniques. Here, we employ a combination of Operation IceBridge (OIB) high-resolution airborne gravity from 2016, Ocean Melting Greenland (OMG) EVS-2 mission low-resolution gravity from 2016, and OMG bathymetry data from 2016 to map the bed elevation of the glaciers and fjords across all of southeast Greenland, combining gravity, thickness, and bathymetry. The data reveal the true depth of the fjords and the glacier thickness at the ice front, in a seamless fashion. We combine these data with a history of ice discharge, merging estimates of ice thickness with a time series of ice velocity going back to the early 1990s. We form a time series of ice discharge, glacier by glacier, which is compared with surface mass balance from the RACMO 1-km downscaled model. We compare the results with simulations of ice melt along the calving faces of the glaciers to draw conclusions about the sensitivity of each glacier to climate forcing and re-interpret their pattern of retreat in the last few decades. The simulations of ice melt employ the MITgcm ocean model constrained by water depth, thermal forcing from the ECCO2 model, and subglacial water fluxes from RACMO. This work was performed at UCI/JPL under a contract with NASA.
NASA Astrophysics Data System (ADS)
Flynn, C. M.; Prather, M. J.; Zhu, X.; Strode, S. A.; Steenrod, S. D.; Strahan, S. E.; Lamarque, J. F.; Fiore, A. M.; Horowitz, L. W.; Mao, J.; Murray, L. T.; Shindell, D. T.
2016-12-01
Experience with climate and chemistry model intercomparison projects (MIPs) has demonstrated a diversity in model projections for the chemical greenhouse gases CH4 and O3, even when forced by the same emissions. In general, the MIPs show that models diverge in the distribution of the many key trace species that control the reactivity of the troposphere (defined here as the loss of CH4 and the production and loss of O3). Two possible sources of model differences are the chemistry-transport coupling that creates the pattern of the essential precursor species, and the calculation of reactivity. Suppose that observations, such as those planned by NASA's Atmospheric Tomography (ATom) mission, provide us with enough of a chemical climatology to constrain the modeled distribution of the essential chemical species for the current epoch. Would the models calculate the same reactivity? ATom uses the DC-8 to make in situ measurements slicing through the middle of the Pacific and Atlantic Ocean basins each season and measuring the essential trace species. Unfortunately, ATom measurements will not be available until mid-2017. Here we take the baseline chemistry from one model version (as pseudo-observations) and use it to initialize 6 other global chemistry models. In this pre-ATom MIP, we take the full chemical composition for meridional slices centered on the Dateline (UC Irvine Chemistry-Transport Model, 0.6 deg resolution, 30 layers in the troposphere). We use grid cells between 0.5 and 12 km from 60 S to 60 N to initialize grid cells in the other six models (GEOS-Chem, GFDL-AM3, GISS ModelE2, GSFC GMI, NCAR, UCI CTM). The models are then integrated for 1 day and the key chemical rates (CH4, O3) are saved. These simulations assume that the initialized parcels remain unmixed over the 24 hours, and, hence, model-to-model variations will be due to differences in photochemistry, including clouds. In addition, we assess the relative importance of the precursor species by running sensitivity tests in which each of the major precursors (e.g., NOx, HOOH, HCHO, CO) is perturbed by 10%. Such sensitivity tests can help determine the causes of model differences. Overall, this new approach allows us to characterize each model's chemistry package for a wide range of designated chemical composition. The real test will be with ATom data next year.
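The 10% perturbation experiments described above amount to a finite-difference sensitivity analysis. The toy box model below illustrates the bookkeeping only; the reactivity function and species weights are invented stand-ins for a real model's 24-hour photochemistry.

```python
# One-sided finite-difference sensitivities to precursor perturbations
# (toy stand-in for a chemistry model; weights are hypothetical).
def reactivity(no_x: float, hooh: float, hcho: float, co: float) -> float:
    """Toy scalar 'reactivity' for one air parcel."""
    return 0.4 * no_x + 0.3 * hooh + 0.2 * hcho + 0.1 * co

base = {"no_x": 1.0, "hooh": 1.0, "hcho": 1.0, "co": 1.0}
r0 = reactivity(**base)

# Perturb each precursor by +10% and record the normalized response,
# mirroring the sensitivity tests described in the abstract.
for species in base:
    perturbed = dict(base, **{species: base[species] * 1.10})
    dr = reactivity(**perturbed) - r0
    print(f"{species}: fractional change in reactivity per +10% = {dr / r0:.3f}")
```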
Panagopoulos, John; Hush, Julia; Steffens, Daniel; Hancock, Mark J
2017-04-01
Study design: systematic review. Objective: to investigate whether magnetic resonance imaging (MRI) findings change over a relatively short period of time (<1 yr) in people with low back pain (LBP) or sciatica, and whether any change in MRI findings is associated with change in clinical outcomes. MRI offers the potential to identify possible pathoanatomic sources of LBP and/or sciatica; however, the clinical importance of MRI findings remains unclear. Little is known about whether lumbar MRI findings change over the short term and, if so, whether these changes are associated with changes in clinical outcomes. Medline, EMBASE, and CINAHL databases were searched. Included were cohort studies that performed repeat MRI scans within 12 months in patients with LBP and/or sciatica. Data on study characteristics and change in MRI findings were extracted from included studies, along with any data describing associations between change in MRI findings and change in clinical outcomes. A total of 12 studies met the inclusion criteria and were included in the review. Pooling was not possible due to heterogeneity of studies and findings. Seven studies reported on changes in disc herniation and reported that 15% to 93% of herniations reduced or disappeared in size. Two studies reported on changes in nerve root compression and reported that 17% to 91% reduced or disappeared. Only one study reported on the association between change in MRI findings and change in clinical outcomes within 1 year, and found no association. This review found moderate evidence that the natural course of herniations and nerve root compression is favorable over a 1-year period in people with sciatica or LBP. There is a lack of evidence on whether other MRI findings change, and whether changes in MRI findings are associated with changes in clinical outcomes. Level of evidence: 1.
Gupta, Surya N; Belay, Brook
2008-01-15
Previous studies have addressed the prevalence of incidental findings largely in healthy adult and pediatric populations. Our study aims to elucidate the prevalence of incidental findings in a pediatric neurology practice. We reviewed the charts of 1618 patients seen at a pediatric neurology practice at a tertiary care center from September 2003 to December 2005 for clinical data and incidental intracranial findings on brain magnetic resonance imaging reports. Incidental findings were divided into two categories: normal variants or abnormal findings. Clinical and demographic data were assessed for associations with incidental findings. Of the 1618 charts reviewed, only 666 patients (41% of all patients) had brain MRIs ordered. One hundred seventy-one (171) patients (25.7% of those imaged; 95% CI: 22.6, 29.0) had incidental findings. Of these, 113 (17.0%; 95% CI: 14.1, 19.8) were classified as normal variants and 58 (8.7%; 95% CI: 6.6, 10.9) were classified as abnormal. The nature of incidental findings was not related to age group, sex, or clinical diagnosis (p=0.29, p=0.31 and p=0.69, respectively). Two patients (0.3%; 95% CI: approximately 0.0, 0.7) required neurosurgical referral. We report a high prevalence of, and a low rate of referrals for, incidental findings in comparison to previous studies. The present study may help guide management decisions and discussions with patients and families. Future studies should attempt to address issues of associations between primary or secondary diagnoses and intracranial incidental findings in a controlled, prospective fashion.
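The intervals quoted above are standard binomial proportion confidence intervals; the snippet below reproduces one of them with the normal approximation as a sanity check (the paper's exact method is not stated, so small discrepancies at the lower bound are expected).

```python
# Normal-approximation 95% CI for a binomial proportion, used here to
# check the reported 171/666 = 25.7% prevalence of incidental findings.
from math import sqrt

def prop_ci(k: int, n: int, z: float = 1.96):
    p = k / n
    se = sqrt(p * (1 - p) / n)           # standard error of the proportion
    return p, p - z * se, p + z * se

p, lo, hi = prop_ci(171, 666)
print(f"{100*p:.1f}% (95% CI: {100*lo:.1f}, {100*hi:.1f})")  # 25.7% (22.4, 29.0)
```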
Schmidt, Carsten Oliver; Hegenscheid, Katrin; Erdmann, Pia; Kohlmann, Thomas; Langanke, Martin; Völzke, Henry; Puls, Ralf; Assel, Heinrich; Biffar, Reiner; Grabe, Hans Jörgen
2013-05-01
Little is known about the psychosocial impact and subjective interpretation of communicated incidental findings from whole-body magnetic resonance imaging (wb-MRI). This was addressed in the present general population study. Data were based on the Study of Health in Pomerania (SHIP), Germany, which comprised a 1.5-T wb-MRI examination. A postal survey was conducted among the first 471 participants, aged 23-84 years, who received a notification about incidental findings (response 86.0%, n = 405). The severity of incidental findings was assessed from the participants' and radiologists' perspectives. In total, 394 participants (97.3%) wanted to learn about their health by undergoing wb-MRI. Strong distress while waiting for a potential notification of an incidental finding was reported by 40 participants (9.9%), whereas 116 (28.6%) reported moderate to severe psychological distress thereafter. Strong disagreement was noted between the subjective and radiological evaluation of the findings' severity (kappa = 0.02). Almost all participants (n = 389, 96.0%) were very satisfied with their examination. Despite the high satisfaction of most participants, there were numerous adverse consequences concerning the communication of incidental findings and false expectations about the potential benefits of whole-body MRI.
• Disclosed incidental findings from MRI may lead to substantial psychosocial distress.
• Subjective and radiological evaluations of incidental findings' severity differ strongly.
• Disclosing incidental findings is strongly endorsed by study volunteers.
• Study volunteers tend to have false expectations about potential benefits from MRI.
• Minimizing stress in study volunteers should be a key aim in MRI research.
Brain Ultrasonography Findings in Neonatal Seizure; a Cross-sectional Study.
Nabavi, Seyed Saeed; Partovi, Parinaz
2017-01-01
Screening newborns with seizures for curable pathologic brain findings might improve their final outcome by accelerating treatment intervention. The present cross-sectional study was designed to evaluate the brain ultrasonography findings of newborns hospitalized with a complaint of seizure. Neonatal seizure was defined as the presence of tonic, clonic, myoclonic, or subtle attacks in 1- to 28-day-old newborns. One hundred newborns with a mean age of 5.82 ± 6.29 days were evaluated (58% male). Most newborns were in the <10 days age range (76%), term (83%), and of normal birth weight (81%). Twenty-two (22%) of the ultrasonography examinations showed a pathologic finding. A correlation was found only between gestational age and the probability of a pathologic brain finding, as the frequency of these problems was significantly higher in pre-term newborns (p = 0.023). Based on the findings of the present study, the frequency of pathologic findings on neonatal brain ultrasonography was 22%. Hemorrhage (12%) and hydrocephaly (7%) were the most common findings. The only factor correlating with an increased probability of positive findings was the newborn being pre-term.
Liu, Zhunzhun; Zhang, Lanfeng; Cao, Yuerong; Xia, Wenkai; Zhang, Liying
2018-06-01
To identify the relationship between medical coping styles and benefit finding in Chinese early-stage cancer patients in a preliminary pilot study. Three hundred fifty-one cancer patients were recruited from the Affiliated Jiangyin Hospital of Southeast University Medical College and the Nantong Tumor Hospital. Measurements were the Chinese Benefit Finding Scale, the Medical Coping Modes Questionnaire (Chinese version), and the Distress Thermometer. Regression analysis and pathway analysis were employed to identify the correlation between medical coping styles and benefit finding, and the mediating role of distress. Hierarchical regression analyses showed that the confrontation coping style explained 24% of the variance in benefit finding, controlling for demographic and medical variables, while the confrontation and resignation coping styles explained 10% and 6% of the variance in distress, respectively. Pathway analyses indicated that distress mediated the effect of the confrontation coping style on benefit finding. Our study suggested an indirect association between medical coping styles and benefit finding, and a negative correlation of distress with both medical coping styles and benefit finding. These results indicate that medical coping styles can influence benefit finding through distress. Copyright © 2018. Published by Elsevier Ltd.
Mei, Yongxia; Wilson, Susan; Lin, Beilei; Li, Yingshuang; Zhang, Zhenxiang
2018-04-01
To identify whether benefit finding is a mediator or moderator in the relationship between caregiver burden and psychological well-being (anxiety and depression) in Chinese family caregivers of community-dwelling stroke survivors. Family caregivers not only bear a heavy burden and high levels of anxiety and depression, but also experience benefit finding (positive effects resulting from stressful events). However, the relationships among benefit finding, caregiver burden and psychological well-being in Chinese family caregivers are not well known. This study used a cross-sectional correlational design. Caregivers (n = 145) of stroke survivors were recruited from two communities in Zhengzhou, China. Data were collected by face-to-face interviews with structured questionnaires examining caregiver burden, benefit finding and psychological well-being of caregivers. A hierarchical regression analysis explored whether caregiver burden and benefit finding were associated with anxiety and depression of caregivers. The moderator role of benefit finding was examined by testing the significance of the interaction between caregiver burden and benefit finding. A mediational model was used to test benefit finding as a mediator between caregiver burden and psychological well-being of caregivers using PROCESS in SPSS 21.0. Caregiver burden and benefit finding were significantly associated with both anxiety and depression of caregivers. Benefit finding did not play a moderating role, but did play a mediating role in the relationship between caregiver burden, anxiety and depression in caregivers. This study provides preliminary evidence to nurses that interventions focused on benefit finding may help improve the psychological well-being of caregivers. This study offers nurses a rationale for assessing caregivers' negative emotions and benefit finding. By targeting benefit finding, the nurse may guide caregivers in benefit identification and implement interventions to reduce anxiety, depression and caregiver burden. © 2017 John Wiley & Sons Ltd.
Social identity threat motivates science-discrediting online comments.
Nauroth, Peter; Gollwitzer, Mario; Bender, Jens; Rothmund, Tobias
2015-01-01
Experiencing social identity threat from scientific findings can lead people to cognitively devalue the respective findings. Three studies examined whether potentially threatening scientific findings motivate group members to take action against the respective findings by publicly discrediting them on the Web. Results show that strongly (vs. weakly) identified group members (i.e., people who identified as "gamers") were particularly likely to publicly discredit findings that threatened their social identity (i.e., studies that found an effect of playing violent video games on aggression). A content analytical evaluation of online comments revealed that social identification specifically predicted critiques of the methodology employed in potentially threatening, but not in non-threatening, research (Study 2). Furthermore, when participants were collectively (vs. self-) affirmed, identification no longer predicted discrediting posting behavior (Study 3). These findings contribute to the understanding of the formation of online collective action and add to the burgeoning literature on the question of why certain scientific findings sometimes face broad public opposition.
Experiences with an adaptive design for a dose-finding study in patients with osteoarthritis.
Miller, Frank; Björnsson, Marcus; Svensson, Ola; Karlsten, Rolf
2014-03-01
Dose-finding studies in non-oncology areas are usually conducted in Phase II of the development process of a new potential medicine, and it is key to choose a good design for such a study, as the results will decide if and how to proceed to Phase III. The present article focuses on the design of a dose-finding study for pain in osteoarthritis patients treated with the TRPV1 antagonist AZD1386. We describe the different design alternatives considered in the planning of this study, the reasoning for choosing the adaptive design, and experiences with its conduct and interim analysis. Three alternatives were proposed: a single dose-finding study with a parallel design, a programme with a smaller Phase IIa study followed by a Phase IIb dose-finding study, and an adaptive dose-finding study. We describe these alternatives in detail and explain why the adaptive design was chosen for the study. We give insights into design aspects of the adaptive study that need to be pre-planned, such as interim decision criteria, the statistical analysis method, and the setup of a Data Monitoring Committee. Based on the interim analysis, it was recommended to stop the study for futility since AZD1386 showed no significant pain decrease based on the primary variable. We discuss results and experiences from the conduct of the study with the novel design approach. Substantial cost savings were achieved compared with the alternative of a single Phase II dose-finding study. However, we point out several challenges with this approach. Copyright © 2014 Elsevier Inc. All rights reserved.
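One common way to formalize an interim futility rule of the kind described is conditional power under the current-trend assumption: stop if, projecting the interim effect forward, the chance of a significant final result is small. The sketch below is a generic illustration, not the AZD1386 protocol; the interim z-value, information fraction, and futility bound are hypothetical.

```python
# Conditional power (current-trend assumption) for a one-sided final test.
# Generic illustration of an interim futility rule; numbers are made up.
from math import sqrt
from statistics import NormalDist

phi = NormalDist().cdf

def conditional_power(z_interim: float, info_frac: float,
                      z_alpha: float = 1.645) -> float:
    """CP assuming the effect trend seen at the interim continues."""
    return 1.0 - phi((z_alpha - z_interim / sqrt(info_frac))
                     / sqrt(1.0 - info_frac))

# Halfway through the study, an interim z of 0.3 (essentially no pain
# reduction) gives conditional power of about 4%, far below a typical
# 20% futility bound, so the recommendation would be to stop.
cp = conditional_power(z_interim=0.3, info_frac=0.5)
print(f"conditional power = {cp:.3f}")
```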
EERE Showcase Event (Solar Decathlon 2015)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolles, Eric
The goal of the Orange County Great Park Corporation (Great Park) is to successfully host the U.S. Department of Energy Solar Decathlon 2015. In furtherance of that goal, tasks to be performed within the current reporting period include the following.
Task 1.0, Arrange Site Team Visits for January 2015: The Great Park arranged appropriate meeting space for the site team visits over a three-day period, January 8, 2015 through January 10, 2015. Instead of a meeting in Hangar 244, the DOE requested a different meeting space. The working team met in the Operations offices on January 8th. The student teams were welcomed at the City of Irvine’s Lakeview Senior Center on January 9th, and came back on January 10th for breakout sessions.
Task 2.0, Outreach Activities: The following outreach activities related to the U.S. Department of Energy Solar Decathlon 2015 occurred during and prior to the event:
• Promoted the return of the Solar Decathlon 2015 on the City’s website (cityofirvine.org).
• Promoted the return of the Solar Decathlon 2015 through the City’s and Great Park’s social media channels, including Facebook and Twitter (facebook.com/cityofirvine, facebook.com/orangecountygreatpark, twitter.com/City_of_Irvine, twitter.com/ocgreatpark).
• Promoted the return of the Solar Decathlon 2015 and student visit through a City Council announcement.
• Worked to set up meetings between the U.S. Department of Energy team and potential donors/key stakeholders in Irvine.
• Began ICTV filming and coverage of the Solar Decathlon 2015 teams, including student team interviews, an interview with Richard King, and b-roll footage.
• Facilitated an interview with Sarah Farrar and the Orange County Register during the student visit in January.
• Provided information in May to Irvine Unified School District and Tustin Unified School District promoting the three Education Days that the DOE will host during the event. More DOE information is due in August, which will be forwarded to the school districts to provide important information for school tours.
• Sent a promotional ICTV video through the Irvine Chamber of Commerce and to dozens of businesses in Irvine promoting the Solar Decathlon 2015 and inviting attendance.
• Cover story of the Fall Inside Irvine magazine detailed the teams competing in the Solar Decathlon; the magazine goes to more than 100,000 Irvine residences.
• Produced a public service announcement with KPCC radio to air 9/28-10/16; outreach also included a web banner on the station’s website.
• Placed full-page advertisements in special sections of the Orange County Register, including the UCI 50th Anniversary magazine that went to over 1 million readers of the Register, the Riverside Press Enterprise and the L.A. Times; Best of Orange County magazine; and the Solar Decathlon special section.
• Placed a full-page ad in the Sept./Oct. issue of Urban Land magazine.
• Produced an ad for the Irvine Global Village Festival brochure (tens of thousands in attendance at the event).
• Displayed ten posters at Irvine Company properties throughout the City, including the Irvine Spectrum Center.
• Displayed rack cards promoting the Solar Decathlon at the Irvine Spectrum Center, Discovery Science Center, Orange County Farm Bureau (at farmers markets) and at City facilities.
• Distributed tote bags promoting the Solar Decathlon, filled with magnets and rack cards on the event, at the Irvine Global Village Festival, Great Park farmers market and UCI Festival of Discovery; some 8,000 bags were handed out.
• E-blast from the City of Irvine Community Services Department included information on the Solar Decathlon (list contains 51,000 recipients).
• E-blast to the Irvine Co. mailing list sent out 9/30; web banner posted at shopirvinespectrumcenter.com.
• E-blast sent to the Orange County Register mailing list on 10/6.
• Web banner posted on the Orange County Register’s homepage.
• E-blast sent by the Irvine Chamber of Commerce on 10/9.
• E-blast using the City’s GovDelivery to 2,100 recipients on 10/12.
• Produced additional ads for the Orange County Register to fulfill the in-kind agreement between the DOE and the Register: Friday, Oct. 9, Local front page strip ad; Saturday, Oct. 10, half-page Home & Garden section ad; Sunday, Oct. 11, full-page Local section ad; Wednesday, Oct. 14, half-page Main or Local ad; Friday, Oct. 16, full Local or Main section ad; Saturday, Oct. 17, half-page Home & Garden section ad; Sunday, Oct. 18, full-page Local or Main ad.
• Produced two additional Register ads promoting the final days of the event: a full-page Main or Local ad for Thursday, Oct. 15, and a full-page ad in the Irvine World News weekly publication.
• Produced separate press releases on the Solar Decathlon, Volunteer Effort, Children’s Activities Area and Final Days.
• Produced and distributed Children’s Activities Days rack cards.
• Continued to promote the event on the City’s webpage, Great Park webpage and social media channels.
• ICTV produced the “Solar Decathlon Minute” videos, which were posted on the City’s YouTube channel and the solardecathlon.gov website.
• A four-minute video promoting the Solar Decathlon was shown on iShuttles in the City in the weeks leading up to the event.
• Promoted a “Business Day” in which local businesses could sign up for a tour led by Solar Decathlon docents.
• Access Irvine Special Event Button running 9/28-10/18/15.
• Access Irvine Push Notification on 10/15/15.
• Facebook ad boost 10/13-10/18.
Establishing the credibility of qualitative research findings: the plot thickens.
Cutcliffe, J R; McKenna, H P
1999-08-01
Qualitative research is increasingly recognized and valued, and its unique place in nursing research is highlighted by many. Despite this, some nurse researchers continue to raise epistemological issues about the problems of objectivity and the validity of qualitative research findings. This paper explores the issues relating to the representativeness or credibility of qualitative research findings. It therefore critiques the existing distinct philosophical and methodological positions concerning the trustworthiness of qualitative research findings, which are described as follows: qualitative studies should be judged using the same criteria and terminology as quantitative studies; it is impossible, in a meaningful way, for any criteria to be used to judge qualitative studies; qualitative studies should be judged using criteria that are developed for and fit the qualitative paradigm; and the credibility of qualitative research findings could be established by testing out the emerging theory by means of conducting a deductive quantitative study. The authors conclude by providing some guidelines for establishing the credibility of qualitative research findings.
A Search for Neutrino-Electron Elastic Scattering at the LAMPF Beam Stop.
NASA Astrophysics Data System (ADS)
Brooks, George Alfred
Neutrino-electron elastic scattering reactions play an important role in tests of weak interaction theory. The four reactions which may be considered are:
νe + e⁻ → νe + e⁻
ν̄e + e⁻ → ν̄e + e⁻
νμ + e⁻ → νμ + e⁻
ν̄μ + e⁻ → ν̄μ + e⁻
The experimental study of these purely leptonic interactions severely tests basic theoretical ideas, and the reaction with νe has not yet been observed. The characteristics of the Los Alamos Meson Physics Facility (LAMPF) are such that ν̄e is rarely produced, whereas νe, νμ, and ν̄μ are present in equal numbers. Thus, data on all three processes will be collected simultaneously, but the νe reaction is expected to dominate. However, such studies are exceedingly difficult. The main problem arises from the nature of the event signature (an undetected particle enters the detector producing a single recoil electron) coupled with the minuscule cross sections expected (and therefore low event rates) amid numerous sources of background events. To learn how to reduce the rates of such backgrounds, the UCI Neutrino Group installed in the Neutrino Facility in 1974 a small-scale detector system consisting of a sandwich of optical spark chambers and plastic scintillator slabs (0.38 metric tons), which was shielded by 2 1/2" of Pb and enclosed by tanks of liquid scintillator used as an anticoincidence. Electronics and instrumentation, including a CAMAC system interfaced with a PDP-11/05 computer, were housed in a nearby trailer. The 1974 study was carried out with the LAMPF Neutrino Facility shielded against cosmic rays by Fe walls 3' thick and a 4' Fe roof. Nevertheless, stopping cosmic-ray muons appeared to give rise to the substantial number of background electron events observed. Several techniques were invoked to reduce the potential background for neutrino-electron elastic scattering to (1.5 ± 0.5) day⁻¹. Improved statistics from 1976 gave (1.48 ± 0.34) day⁻¹. If this number could be further reduced--by additional shielding, for example--then the experiment would be easier. However, data taken in 1975 with varying thicknesses of Pb on top of the sandwich detector and in 1976 with an additional 1' of Fe on the roof showed that there is no significant advantage to having more Pb or Fe in those areas. The accelerator may also be a source of background. When the accelerator is operating, neutrons from the beam stop can penetrate the Fe shielding to produce an excessive trigger rate (energetic neutrons) or excessive dead time (thermal neutrons), especially in the more massive anticoincidence (ANTI) required for the full-scale experiment. However, data taken in 1974 with 10 μA accelerator current and 4 m of Fe as beam stop shielding, and in 1976 with 100 μA and 5 m of Fe, showed that the neutron flux was well under control. The ultimate configuration requires much higher beam currents, but also calls for additional Fe so that neutrons will not be a problem. In both 1974 and 1976 there were no electron events remaining in the accelerator data following subtraction of cosmic-ray background. This fact can be used to set an upper limit on the elastic scattering cross section for νe: σ_exp < 38 σ_V−A with 90% confidence.
The results of these studies determined the amount of shielding required for a full-scale neutrino experiment, established the need for a very efficient active anticoincidence, and aided the design of a 14.4 metric ton sandwich detector of flash chamber modules and plastic scintillator slabs. Developmental work for the full-scale detector system began in 1977, and some of the subsequent construction work is still in progress. However, the Neutrino Facility has been prepared, and portions of the sandwich detector have been installed. The first information on neutrino-electron elastic scattering could be available by the middle of 1982.
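For context on the quoted 90% confidence limit: with zero signal events observed after background subtraction, the classical Poisson upper limit on the expected signal count is ln(10) ≈ 2.30 events, which converts to a cross-section limit once the exposure is known. The exposure value below is a hypothetical placeholder, not a number from the thesis.

```python
# Classical 90% CL Poisson upper limit for zero observed events, and its
# conversion to a cross-section limit (hypothetical exposure).
from math import log

n_90 = -log(0.10)                 # solve exp(-mu) = 0.10 -> mu = ln(10) ~ 2.30
exposure = 6.0e40                 # flux * livetime * target electrons, nu/cm^2 * e-
sigma_limit = n_90 / exposure     # cm^2
print(f"sigma < {sigma_limit:.1e} cm^2 at 90% CL")
```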
Bhadade, Rakesh; Harde, Minal; deSouza, Rosemarie; More, Ashwini; Bharmal, Ramesh
2017-01-01
Nosocomial pneumonia poses a great challenge to an intensivist. Detailed information about hospital-acquired pneumonia (HAP) and ventilator-acquired pneumonia (VAP) is crucial for prevention and optimal management, thus improving the quality of Intensive Care Unit (ICU) care. Hence, we aimed to study the current trend of nosocomial pneumonia in the ICU. It was a prospective observational cohort study, conducted in the ICU of a tertiary care teaching public hospital over a period of 18 months. We studied the clinical profile and outcome of 120 adult patients who developed VAP/HAP during the study period. We also analyzed the causative organisms, antibiotic sensitivity, and resistance pattern in these patients. Of 120 patients, 29 had HAP and 91 had VAP. Mortality was 60% (72), and development of VAP and requirement of mechanical ventilation showed significant association with mortality (P < 0.00001). The most common organism causing HAP was Staphylococcus aureus (43.4%) and that causing VAP was Klebsiella pneumoniae (49%). Maximum antibiotic sensitivity was found to piperacillin + tazobactam (58.8%), followed by imipenem (49.5%) and meropenem (41.8%), whereas maximum antibiotic resistance was found to cefepime (95.1%), followed by ceftazidime and amoxicillin (91.2%). Nosocomial pneumonia showed high incidence (17.44%) and mortality (60%). Common organisms identified were S. aureus and K. pneumoniae. Resistance was high for commonly used antibiotics, while antibiotic sensitivity was high for piperacillin + tazobactam and the carbapenems.
Katz, Ralph V; Green, B Lee; Kressin, Nancy R; James, Sherman A; Wang, Min Qi; Claudio, Cristina; Russell, Stephanie Luise
2009-02-01
The purpose of this follow-up 2003 3-City Tuskegee Legacy Project (TLP) Study was to validate or refute our prior findings from the 1999-2000 4-City TLP Study, which found no evidence to support the widely acknowledged "legacy" of the Tuskegee Syphilis Study (TSS), ie, that blacks are reluctant to participate in biomedical studies due to their knowledge of the TSS. The TLP Questionnaire was administered in this random-digit-dial telephone survey to a stratified random sample of 1162 black, white, and Puerto Rican Hispanic adults in 3 different US cities. The findings from this current 3-City TLP Study fail to support the widely acknowledged "legacy" of the TSS, as awareness of the TSS was not statistically associated with the willingness to participate in biomedical studies. These findings, being in complete agreement with our previous findings from our 1999-2000 4-City TLP, validate those prior findings.
Sustainability of Social Programs: A Comparative Case Study Analysis
ERIC Educational Resources Information Center
Savaya, Riki; Spiro, Shimon; Elran-Barak, Roni
2008-01-01
The article reports on the findings of a comparative case study of six projects that operated in Israel between 1980 and 2000. The study findings identify characteristics of the programs, the host organizations, and the social and political environment, which differentiated programs that are sustained from those that are not. The findings reaffirm…
Pattern Finding Skills of Pre-School Children
ERIC Educational Resources Information Center
Tarim, Kamuran
2017-01-01
This study investigates the pattern finding skills of pre-school children and the in-class pattern activities conducted by teachers. The research was designed as a descriptive survey study carried out with a total of 162 children aged 60-77 months from families with middle socio-economic status. The findings of the study revealed that the…
ERIC Educational Resources Information Center
Dana, Robert Q.
Primary findings of the Maine Student Drug Survey, administered during the academic year 1995-96, are presented. Highlights of the study findings, selected prevalence findings, selected risk and protective factor findings, policy recommendations, and recommendations for future research are presented. The questionnaire and the study are described…
Kindler syndrome: a study of five Egyptian cases with evaluation of severity.
Nofal, Eman; Assaf, Magda; Elmosalamy, Khaled
2008-07-01
Kindler syndrome (KS) is a rare genodermatosis characterized by four major features (acral blisters, photosensitivity, poikiloderma, and cutaneous atrophy) and many associated findings. The diagnosis of KS includes clinical features, ultrastructural findings, and, recently, immunostaining and genetic studies. Varying degrees of severity of the syndrome have been described. Five patients with clinical features consistent with KS were included in this study. All patients were subjected to histopathologic and ultrastructural studies. Cases 1 and 2 presented with severe major features, severe mucosal involvement, and many other associated findings. Case 3 presented with severe major features, but mild and limited mucosal involvement and other associated findings. Cases 4 and 5 showed mild major features and few other findings. Histopathology revealed nonspecific poikiloderma. Marked thickening of the lamina densa and splitting of the lamina lucida were the main ultrastructural findings. KS may be classified into mild, moderate, and severe according to the severity of the major features and mucosal involvement. Because histopathologic and ultrastructural findings are not pathognomonic, clinical features remain the mainstay for the diagnosis of KS, and the need for immunostaining with kindlin antibody and genetic studies may be restricted to early cases with incomplete features.
Returning findings within longitudinal cohort studies: the 1958 birth cohort as an exemplar.
Wallace, Susan E; Walker, Neil M; Elliott, Jane
2014-01-01
Population-based, prospective longitudinal cohort studies are considering the issues surrounding returning findings to individuals as a result of genomic and other medical research studies. While guidance is being developed for clinical settings, the process is less clear for those conducting longitudinal research. This paper discusses work conducted on behalf of The UK Cohort and Longitudinal Study Enhancement Resource programme (CLOSER) to examine consent requirements, process considerations and specific examples of potential findings in the context of the 1958 British Birth cohort. Beyond deciding which findings to return, there are questions of whether re-consent is needed and the possible impact on the study, how the feedback process will be managed, and what resources are needed to support that process. Recommendations are made for actions a cohort study should consider taking when making vital decisions regarding returning findings. Any decisions need to be context-specific, arrived at transparently, communicated clearly, and in the best interests of both the participants and the study.
Bradley, Alys; Mukaratirwa, Sydney; Petersen-Jones, Morven
2012-01-01
The authors performed a retrospective study to determine the incidence and range of spontaneous pathology findings in the lymphoid and haemopoietic systems of control Charles River CD-1 mice (Crl: CD-1(ICR) BR). Data were collected from 2,560 mice from control dose groups (104-week and 80-week carcinogenicity studies; 13-week studies) from regulatory studies evaluated at the authors' laboratory between 2005 and 2010. Lesions of the lymphoid and haemopoietic systems were uncommon in 13-week studies but were of high incidence in the carcinogenicity studies (80- or 104-week duration). The most common finding overall was lymphoid hyperplasia within the spleen, thymus, and lymph nodes. The finding of benign lymphoid hyperplasia of the thymus is unusual in other mouse strains. The most common cause of death in the carcinogenicity studies was lymphoma. It is hoped that the results presented here will provide a useful database of incidental pathology findings in CD-1 mice on carcinogenicity studies.
A new lead from genetic studies in depressed siblings: assessing studies of chromosome 3.
Hamilton, Steven P
2011-08-01
Studies by Breen et al. and Pergadia et al. find evidence for genetic linkage between major depressive disorder and the same region on chromosome 3. The linked region contains the gene GRM7, which encodes a protein for the metabotropic glutamate receptor 7 (mGluR7). Both studies used affected sibling pairs, and neither was able to replicate its finding using association studies in individuals from larger population-based studies. Other family-based studies have also failed to find a signal in this region. Furthermore, there are some differences in how the phenotype was classified, with Breen et al. finding evidence only in the most severely affected patients. Nonetheless, the finding is not without other substantive support. A meta-analysis of 3,957 case subjects with major depressive disorder and 3,428 control subjects from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D), Genetics of Recurrent Early-onset Depression (GenRED), and the Genetic Association Information Network-MDD (GAIN-MDD) data sets demonstrated a region of association for major depressive disorder within GRM7. Thus, the significance of this finding remains uncertain, although it points to a gene that might hold significant promise for further developments in studying the pathophysiology and treatment of major depressive disorder.
Suicide in the media: a quantitative review of studies based on non-fictional stories.
Stack, Steven
2005-04-01
Research on the effect of suicide stories in the media on suicide in the real world has been marked by much debate and inconsistent findings. Recent narrative reviews have suggested that research based on nonfictional models is more apt to uncover imitative effects than research based on fictional models. There is, however, substantial variation in media effects within the research restricted to nonfictional accounts of suicide. The present analysis provides some explanations of the variation in findings in the work on nonfictional media. Logistic regression techniques applied to 419 findings from 55 studies determined that: (1) studies measuring the presence of either an entertainment or political celebrity were 5.27 times more likely to find a copycat effect, (2) studies focusing on stories that stressed negative definitions of suicide were 99% less likely to report a copycat effect, (3) research based on television stories (which receive less coverage than print stories) was 79% less likely to find a copycat effect, and (4) studies focusing on female suicide were 4.89 times more likely to report a copycat effect than other studies. The full logistic regression model correctly classified 77.3% of the findings from the 55 studies. Methodological differences among studies are associated with discrepancies in their results.
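The analysis described is a logistic regression over findings, with study characteristics as binary predictors and exponentiated coefficients read as odds ratios. The sketch below mimics that setup on synthetic data; the variable names and planted coefficients merely echo the reported effect directions and do not reproduce the study.

```python
# Meta-analytic-style logistic regression on synthetic 'findings' data:
# outcome = whether a copycat effect was reported; predictors = study traits.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 419                                   # one row per finding, as in the study
celebrity = rng.integers(0, 2, n)         # story involved a celebrity
negative = rng.integers(0, 2, n)          # story stressed negative definitions
tv = rng.integers(0, 2, n)                # television (vs. print) story
female = rng.integers(0, 2, n)            # focused on female suicide

# Planted effects (log odds ratios) roughly echoing the abstract's directions.
logit = -0.5 + 1.66*celebrity - 4.6*negative - 1.56*tv + 1.59*female
copycat = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([celebrity, negative, tv, female])
model = LogisticRegression(max_iter=1000).fit(X, copycat)
for name, b in zip(["celebrity", "negative", "tv", "female"], model.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(b):.2f}")
```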
Incidental findings in children with blunt head trauma evaluated with cranial CT scans.
Rogers, Alexander J; Maher, Cormac O; Schunk, Jeff E; Quayle, Kimberly; Jacobs, Elizabeth; Lichenstein, Richard; Powell, Elizabeth; Miskin, Michelle; Dayan, Peter; Holmes, James F; Kuppermann, Nathan
2013-08-01
Cranial computed tomography (CT) scans are frequently obtained in the evaluation of blunt head trauma in children. These scans may detect unexpected incidental findings. The objectives of this study were to determine the prevalence and significance of incidental findings on cranial CT scans in children evaluated for blunt head trauma. This was a secondary analysis of a multicenter study of pediatric blunt head trauma. Patients <18 years of age with blunt head trauma were eligible, with those undergoing cranial CT scan included in this substudy. Patients with coagulopathies, ventricular shunts, known previous brain surgery or abnormalities were excluded. We abstracted radiology reports for nontraumatic findings. We reviewed and categorized findings by their clinical urgency. Of the 43,904 head-injured children enrolled in the parent study, 15,831 underwent CT scans, and these latter patients serve as the study cohort. On 670 of these scans, nontraumatic findings were identified, with 16 excluded due to previously known abnormalities or surgeries. The remaining 654 represent a 4% prevalence of incidental findings. Of these, 195 (30%), representing 1% of the overall sample, warranted immediate intervention or outpatient follow-up. A small but important number of children evaluated with CT scans after blunt head trauma had incidental findings. Physicians who order cranial CTs must be prepared to interpret incidental findings, communicate with families, and ensure appropriate follow-up. There are ethical implications and potential health impacts of informing patients about incidental findings.
Extraneural findings during peripheral nerve ultrasound: Prevalence and further assessment.
Bignotti, Bianca; Zaottini, Federico; Airaldi, Sonia; Martinoli, Carlo; Tagliafico, Alberto
2018-01-01
In this study we evaluated the frequency and further assessment of extraneural findings encountered during peripheral nerve ultrasound (US). Our retrospective review identified 278 peripheral nerve US examinations of 229 patients performed between December 2014 and December 2015. Reports were reviewed to assess the number of studies without peripheral nerve abnormalities and the frequency and further assessment of extraneural findings. A total of 107 peripheral nerve US examinations of 90 patients (49 men and 41 women, mean age 55 ± 16 years) did not report peripheral nerve abnormalities. Extraneural findings were observed in 24 of 107 (22.4%) studies. Fifteen of the 278 [5.4% (95% confidence interval 2.7%-8.1%)] studies led to a recommendation for additional imaging or clinical evaluation of an extraneural finding. At least 5.4% (15 of 278) of peripheral nerve US studies led to additional clinical or imaging assessment. Muscle Nerve 57: 65-69, 2018. © 2017 Wiley Periodicals, Inc.
Jellema, Sandra; van Hees, Suzanne; Zajec, Jana; van der Sande, Rob; Nijhuis-van der Sanden, Maria Wg; Steultjens, Esther Mj
2017-07-01
Identify the environmental factors that influence stroke-survivors' reengagement in personally valued activities and determine what specific environmental factors are related to specific valued activity types. PubMed, CINAHL and PsycINFO were searched until June 2016 using multiple search-terms for stroke, activities, disability, and home and community environments. An integrated mixed-method systematic review of qualitative, quantitative and mixed-design studies was conducted. Two researchers independently identified relevant studies, assessed their methodological quality and extracted relevant findings. To validly compare and combine the various findings, all findings were classified and grouped by environmental category and level of evidence. The search yielded 4024 records; 69 studies were included. Most findings came from low-evidence-level studies such as single qualitative studies. All findings were consistent in that the following factors facilitated reengagement post-stroke: personal adapted equipment; accessible environments; transport; services; education and information. Barriers were: others' negative attitudes and behaviour; long distances and inconvenient environmental conditions (such as bad weather). Each type of valued activity, such as mobility or work, had its own pattern of environmental influences; social support was a facilitator for all types of activities. Although in many qualitative studies others' attitudes, behaviour and stroke-related knowledge were seen as important for reengagement, these factors were hardly studied quantitatively. A diversity of environmental factors was related to stroke-survivors' reengagement. Most findings came from low-evidence-level studies, so evidence on causal relationships was scarce. In the future, more high-level-evidence studies, for example on the attitudes of significant others, should be conducted.
ERIC Educational Resources Information Center
Killion, Joellen
2016-01-01
Key findings from a new study highlight how Learning Forward's long-standing position on professional learning correlates with practices in high-performing systems in Singapore, Shanghai, Hong Kong, and British Columbia. The purpose of this article is to share key findings from the study so that educators might apply them to strengthening…
Patel, Sapna; Rajalakshmi, B R; Manjunath, G V
2016-11-01
Autopsy adds to the knowledge of pathology by unveiling rare lesions which are a source of learning from a pathologist's perspective. Some of them are only diagnosed at autopsy as they do not cause any functional derangement. This study emphasizes the various incidental lesions which otherwise would have gone unnoticed during a person's life. The aim of this study was to determine the spectrum of histopathological findings including neoplastic lesions related or unrelated to the cause of death. It was also aimed to highlight various incidental and interesting lesions in autopsies. A retrospective study of medicolegal autopsies for six years was undertaken in a tertiary care centre to determine the spectrum of histopathological findings including neoplastic lesions related or unrelated to the cause of death and to highlight various incidental and interesting lesions in autopsies. Statistical Analysis: Individual lesions were described in numbers and incidence in percentages. The study consisted of a series of 269 autopsy cases; histopathological findings were studied in only 202 cases. The commonest cause of death was pulmonary oedema. The most common incidental histopathological finding noted was atherosclerosis in 55 (27.2%) cases, followed by fatty liver in 40 (19.8%) cases. Neoplastic lesions accounted for 2.47% of cases. This study has contributed a handful of findings to the pool of rare lesions in pathology. Some of the lesions encountered, which served as a feast to a pathologist, are tumour to tumour metastasis, a case with coexistent triple lesions, Dubin Johnson syndrome, von Meyenburg complex, Multilocular Cystic Renal Cell Carcinoma (MCRCC), Autosomal Dominant Polycystic Kidney Disease (ADPKD), liver carcinoid and an undiagnosed vaso-occlusive sickle cell crisis. Autopsy studies help in the detection of unexpected findings significant enough to have changed patient management had they been recognized before death.
Herber, Oliver Rudolf; Bücker, Bettina; Metzendorf, Maria-Inti; Barroso, Julie
2017-12-01
Individual qualitative studies provide varied reasons for why heart failure patients do not engage in self-care, yet articles that have aggregated primary studies on the subject have methodological weaknesses that justify the execution of a qualitative meta-summary. The aim of this study was to integrate the findings of qualitative studies pertaining to barriers and facilitators to self-care using meta-summary techniques. Qualitative meta-summary techniques by Sandelowski and Barroso were used to combine the findings of qualitative studies. Meta-summary techniques include: (1) extraction of relevant statements of findings from each report; (2) reduction of these statements into abstracted findings; and (3) calculation of effect sizes. Databases were searched systematically for qualitative studies published between January 2010 and July 2015. Out of 2264 papers identified, 31 reports based on the accounts of 814 patients were included in the meta-summary. A total of 37 statements of findings provided a comprehensive inventory of findings across all reports. Of these statements of findings, 21 were classified as barriers, 13 as facilitators, and three as both barriers and facilitators. The main themes relating to barriers and facilitators to self-care were: beliefs, benefits of self-care, comorbidities, financial constraints, symptom recognition, ethnic background, inconsistent self-care, insufficient information, positive and negative emotions, organizational context, past experiences, physical environment, self-initiative, self-care adverse effects, social context and personal preferences. Based on the meta-findings identified in this study, future intervention development could address these barriers and facilitators in order to further enhance self-care abilities in heart failure patients.
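The effect-size step in a qualitative meta-summary is, at its core, a frequency calculation: the effect size of an abstracted finding is the share of included reports in which it appears. A minimal Python sketch of that calculation follows, using hypothetical findings and report identifiers rather than data from this review.

```python
# Frequency effect sizes in a qualitative meta-summary (Sandelowski & Barroso):
# effect size of a finding = number of reports expressing it / total reports.
n_reports = 31  # reports included in the meta-summary above

# Hypothetical mapping: abstracted finding -> set of reports expressing it.
finding_reports = {
    "comorbidities hinder self-care": {"r01", "r03", "r07", "r12", "r20"},
    "family support facilitates self-care": {"r02", "r03", "r05"},
    "financial constraints hinder self-care": {"r01", "r04"},
}

for finding, reports in finding_reports.items():
    effect_size = len(reports) / n_reports
    print(f"{finding}: {effect_size:.1%}")
```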
CLINICAL APPROACH TO THE DIAGNOSTIC EVALUATION OF HEREDITARY AND ACQUIRED NEUROMUSCULAR DISEASES
McDonald, Craig M.
2012-01-01
SYNOPSIS In the context of a neuromuscular disease diagnostic evaluation, the clinician still must be able to obtain a relevant patient and family history and perform focused general, musculoskeletal, neurologic and functional physical examinations to direct further diagnostic evaluations. Laboratory studies for hereditary neuromuscular diseases include relevant molecular genetic studies. EMG and nerve conduction studies remain an extension of the physical examination and help to guide further diagnostic studies such as molecular genetic testing and muscle and nerve biopsies. All diagnostic information needs to be interpreted not in isolation but within the context of relevant historical information, family history, physical examination findings, laboratory data, electrophysiologic findings, pathologic findings, and molecular genetic findings, if obtained. PMID:22938875
ERIC Educational Resources Information Center
Conole, Grainne; de Laat, Maarten; Dillon, Teresa; Darby, Jonathan
2008-01-01
The paper describes the findings from a study of students' use and experience of technologies. A series of in-depth case studies were carried out across four subject disciplines, with data collected via survey, audio logs and interviews. The findings suggest that students are immersed in a rich, technology-enhanced learning environment and that…
Kaewlai, Rathachai; Greene, Reginald E; Asrani, Ashwin V; Abujudeh, Hani H
2010-09-01
The aim of this study was to assess the potential impact of staggered radiologist work shifts on the timeliness of communicating urgent imaging findings that are detected on portable overnight chest radiography of hospitalized patients. The authors conducted a retrospective study that compared the interval between the acquisition and communication of urgent findings on portable overnight critical care chest radiography detected by an early-morning radiologist shift (3 am to 11 am) with the historical experience of a standard daytime shift (8 am to 5 pm) in detecting and communicating urgent findings in a similar patient population a year earlier. During a 4-month period, 6,448 portable chest radiographic studies were interpreted on the early-morning radiologist shift. Urgent findings requiring immediate communication were detected in 308 (4.8%) studies. The early-morning shift of radiologists, on average, communicated these findings 2 hours earlier compared with the historical control group (P < .001). Staggered radiologist work shifts that include an early-morning shift can improve the timeliness of reporting urgent findings on overnight portable chest radiography of hospitalized patients. Published by Elsevier Inc.
Children and Grief: When a Parent Dies.
ERIC Educational Resources Information Center
Worden, J. William
The research findings on childhood grief are often inconsistent and differ among studies. This book presents major findings from the Child Bereavement Study and looks at the implications of these findings for intervention with bereaved children and their families. Following an introduction describing the methodology of the Child…
This presentation provides an overview and initial insights into the findings based on results from EPA's PM Supersites Program and related studies. Many key atmospheric sciences findings have been identified through the research conducted during the last five years as part of t...
Vedula, S Swaroop; Goldman, Palko S; Rona, Ilyas J; Greene, Thomas M; Dickersin, Kay
2012-08-13
Previous studies have documented strategies to promote off-label use of drugs using journal publications and other means. Few studies have presented internal company communications that discussed financial reasons for manipulating the scholarly record related to off-label indications. The objective of this study was to build on previous studies to illustrate implementation of a publication strategy by the drug manufacturer for four off-label uses of gabapentin (Neurontin, Pfizer, Inc.): migraine prophylaxis, treatment of bipolar disorders, neuropathic pain, and nociceptive pain. We included in this study internal company documents, email correspondence, memoranda, study protocols and reports that were made publicly available in 2008 as part of litigation brought by consumers and health insurers against Pfizer for fraudulent sales practices in its marketing of gabapentin (see http://pacer.mad.uscourts.gov/dc/cgi-bin/recentops.pl?filename=saris/pdf/ucl%20opinion.pdf for the Court's findings). We reviewed documents pertaining to 20 clinical trials, 12 of which were published. We categorized our observations related to reporting biases and linked them with topics covered in internal documents, that is, deciding what should and should not be published and how to spin the study findings (re-framing study results to explain away unfavorable findings or to emphasize favorable findings); and where and when findings should be published and by whom. We present extracts from internal company marketing assessments recommending that Pfizer and Parke-Davis (Pfizer acquired Parke-Davis in 2000) adopt a publication strategy to conduct trials and disseminate trial findings for unapproved uses rather than an indication strategy to obtain regulatory approval. We show internal company email correspondence and documents revealing how publication content was influenced and spin was applied; how the company selected where trial findings would be presented or published; how publication of study results was delayed; and the role of ghost authorship. Taken together, the extracts we present from internal company documents illustrate implementation of a strategy at odds with unbiased study conduct and dissemination. Our findings suggest that Pfizer and Parke-Davis's publication strategy had the potential to distort the scientific literature, and thus misinform healthcare decision-makers.
Finding Susceptibility Genes for Developmental Disorders of Speech: The Long and Winding Road.
ERIC Educational Resources Information Center
Felsenfeld, Susan
2002-01-01
This article explores the gene-finding process for developmental speech disorders (DSDs), specifically disorders of articulation/phonology and stuttering. It reviews existing behavioral genetic studies of these phenotypes, discusses roadblocks that may impede the molecular study of DSDs, and reviews the findings of the small number of molecular…
Managerial Competencies for Middle Managers: Some Empirical Findings from China
ERIC Educational Resources Information Center
Qiao, June Xuejun; Wang, Wei
2009-01-01
Purpose: This study aims to identify managerial competencies required for successful middle managers in China. Design/methodology/approach: First a questionnaire survey was distributed among MBA and EMBA students at a major university in China, and then two case studies were conducted to collect more in-depth data. Findings: The findings of this…
New Research Findings on Emotionally Focused Therapy: Introduction to Special Section
ERIC Educational Resources Information Center
Johnson, Susan M.; Wittenborn, Andrea K.
2012-01-01
This article introduces the special section "New Research Findings on Emotionally Focused Therapy." Emotionally focused couple therapy researchers have a strong tradition of outcome and process research and this special section presents new findings from three recent studies. The first study furthers the goal of determining the kinds of clients…
Halum, Stacey L; Shemirani, Nima L; Merati, Albert L; Jaradeh, Safwan; Toohill, Robert J
2006-04-01
We reviewed a large series of cricopharyngeal (CP) muscle electromyography (EMG) results and compared them with the EMG results from the inferior constrictor (IC), thyroarytenoid (TA), cricothyroid (CT), and posterior cricoarytenoid (PCA) muscles. We performed a retrospective review of all CP muscle EMG reports from studies performed between January 1996 and June 2003. All of the tested elements from the CP muscle EMG reports were recorded. The EMG results were recorded for the ipsilateral IC, TA, CT, and PCA muscles if they were simultaneously tested. Each muscle result was classified as normal, neurogenic inactive axonal injury (IAI), or neurogenic active axonal injury (AAI), and the muscle findings were compared. A patient chart review was performed to determine a clinical correlation. Fifty-nine patients underwent CP muscle EMG. Eighteen patients had bilateral EMG studies, making a total of 77 CP muscle studies. Nineteen sets of CP muscle results were normal, 43 demonstrated neurogenic IAI, and 15 demonstrated neurogenic AAI. The ipsilateral IC and CP muscles had the same innervation status in 27 of 28 studies (p < .0001). When the ipsilateral TA muscle was studied simultaneously with the CP muscle, 31 of 50 studies had the same innervation status (p = .005). The ipsilateral CT and CP muscles demonstrated the same innervation status in 40 of 50 studies (p < .0001). The correlations between the CP and IC muscle findings and between the CP and CT muscle findings were both stronger than the correlation between the CP and TA muscle findings (p < .0001 and p = .024, respectively). The chart review demonstrated the clinical findings to be consistent with the EMG results. The EMG studies demonstrated that CP muscle findings have the strongest correlation with IC muscle findings, followed by the CT and TA muscles. This outcome does not support theories indicating that the recurrent laryngeal nerve innervates the CP muscle in all cases.
Voils, Corrine I.; Barroso, Julie; Hasselblad, Victor; Sandelowski, Margarete
2008-01-01
Aim This paper is a discussion detailing the decisions concerning whether to include or exclude findings from a meta-analysis of reports of quantitative studies of antiretroviral adherence in HIV-positive women. Background Publication constraints and the absence of reflexivity as a criterion for validity in, and reporting of, quantitative research preclude detailing the many judgements made in the course of a meta-analysis. Yet, such an accounting would better assist researchers in addressing the unique challenges to meta-analysis presented by the bodies of research they have targeted for review, and in showing the subjectivity, albeit disciplined, that characterizes the meta-analytic process. Data sources Data were 29 published and unpublished studies on antiretroviral adherence in HIV-positive women of any race/ethnicity, class, or nationality living in the United States of America. The studies were retrieved between June 2005 and January 2006 using 40 databases. Review methods Findings were included if they met the statistical assumptions of meta-analysis, including: (1) normal distribution of observations; (2) homogeneity of variances; and (3) independence of observations. Results Relevant studies and findings were excluded because of issues related to differences in study design, different operationalizations of dependent and independent variables, multiple cuts from common longitudinal data sets, and presentation of unadjusted and adjusted findings. These reasons led to the exclusion of 73% of unadjusted relationships and 87% of adjusted relationships from our data set, leaving few findings to synthesize. Conclusion Decisions made during research synthesis studies may result in more information losses than gains, thereby obliging researchers to find ways to preserve findings that are potentially valuable for practice. PMID:17543011
Lee, Hye-Jeong; Uhm, Jae-Sun; Joung, Boyoung; Hong, Yoo Jin; Hur, Jin; Choi, Byoung Wook; Kim, Young Jin
2016-04-01
Myocardial dyskinesia caused by the accessory pathway and related reversible heart failure have been well documented in echocardiographic studies of pediatric patients with Wolff-Parkinson-White (WPW) syndrome. However, the long-term effects of dyskinesia on the myocardium of adult patients have not been studied in depth. The goal of the present study was to evaluate regional myocardial abnormalities on cardiac CT examinations of adult patients with WPW syndrome. Of 74 patients with WPW syndrome who underwent cardiac CT from January 2006 through December 2013, 58 patients (mean [± SD] age, 52.2 ± 12.7 years), 36 (62.1%) of whom were men, were included in the study after the presence of combined cardiac disease was excluded. Two observers blindly evaluated myocardial thickness and attenuation on cardiac CT scans. On the basis of CT findings, patients were classified as having either normal or abnormal findings. We compared the two groups for other clinical findings, including observations from ECG, echocardiography, and electrophysiologic study. Of the 58 patients studied, 16 patients (27.6%) were found to have myocardial abnormalities (i.e., abnormal wall thinning with or without low attenuation). All abnormal findings corresponded with the location of the accessory pathway. Patients with abnormal findings had statistically significantly decreased left ventricular function, compared with patients with normal findings (p < 0.001). The frequency of regional wall motion abnormality was statistically significantly higher in patients with abnormal findings (p = 0.043). However, echocardiography documented structurally normal hearts in all patients. A relatively high frequency (27.6%) of regional myocardial abnormalities was observed on the cardiac CT examinations of adult patients with WPW syndrome. These abnormal findings might reflect the long-term effects of dyskinesia, suggesting irreversible myocardial injury that ultimately causes left ventricular dysfunction.
Systemic mastocytosis: CT and US features of abdominal manifestations.
Avila, N A; Ling, A; Worobec, A S; Mican, J M; Metcalfe, D D
1997-02-01
To study the imaging findings in patients with systemic mastocytosis and to correlate the findings with the severity of disease on the basis of an established classification system. Computed tomographic (CT) and ultrasound (US) scans and, when available, corresponding pathologic findings were retrospectively reviewed in 27 patients with systemic mastocytosis. Only five (19%) of the patients in our series had normal abdominal CT and/or US examination results. Common abdominal imaging findings associated with systemic mastocytosis were hepatosplenomegaly, retroperitoneal adenopathy, periportal adenopathy, mesenteric adenopathy, thickening of the omentum and the mesentery, and ascites. Less common findings included hepatofugal portal venous flow, Budd-Chiari syndrome, cavernous transformation of the portal vein, ovarian mass, and complications such as chloroma. The findings were more common in patients with category II and those with category III disease. Abdominal findings at CT and US are common in patients with systemic mastocytosis. Although the findings in patients with systemic mastocytosis are not specific to the disease, they are useful in directing further studies for diagnostic confirmation and in estimating the extent of systemic involvement.
Radiologist Agreement for Mammographic Recall by Case Difficulty and Finding Type.
Onega, Tracy; Smith, Megan; Miglioretti, Diana L; Carney, Patricia A; Geller, Berta A; Kerlikowske, Karla; Buist, Diana S M; Rosenberg, Robert D; Smith, Robert A; Sickles, Edward A; Haneuse, Sebastien; Anderson, Melissa L; Yankaskas, Bonnie
2016-11-01
The aim of this study was to assess agreement of mammographic interpretations by community radiologists with consensus interpretations of an expert radiology panel to inform approaches that improve mammographic performance. From 6 mammographic registries, 119 community-based radiologists were recruited to assess 1 of 4 randomly assigned test sets of 109 screening mammograms with comparison studies for no recall or recall, giving the most significant finding type (mass, calcifications, asymmetric density, or architectural distortion) and location. The mean proportion of agreement with an expert radiology panel was calculated by cancer status, finding type, and difficulty level of identifying the finding at the patient, breast, and lesion level. Concordance in finding type between study radiologists and the expert panel was also examined. For each finding type, the proportion of unnecessary recalls, defined as study radiologist recalls that were not expert panel recalls, was determined. Recall agreement was 100% for masses and for examinations with obvious findings in both cancer and noncancer cases. Among cancer cases, recall agreement was lower for lesions that were subtle (50%) or asymmetric (60%). Subtle noncancer findings and benign calcifications showed 33% agreement for recall. Agreement for finding responsible for recall was low, especially for architectural distortions (43%) and asymmetric densities (40%). Most unnecessary recalls (51%) were asymmetric densities. Agreement in mammographic interpretation was low for asymmetric densities and architectural distortions. Training focused on these interpretations could improve the accuracy of mammography and reduce unnecessary recalls. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Impact of problem finding on the quality of authentic open inquiry science research projects
NASA Astrophysics Data System (ADS)
Labanca, Frank
2008-11-01
Problem finding is a creative process whereby individuals develop original ideas for study. Secondary science students who successfully participate in authentic, novel, open inquiry studies must engage in problem finding to determine viable and suitable topics. This study examined problem finding strategies employed by students who successfully completed and presented the results of their open inquiry research at the 2007 Connecticut Science Fair and the 2007 International Science and Engineering Fair. A multicase qualitative study was framed through the lenses of creativity, inquiry strategies, and situated cognition learning theory. Data were triangulated by methods (interviews, document analysis, surveys) and sources (students, teachers, mentors, fair directors, documents). The data demonstrated that the quality of student projects was directly impacted by the quality of their problem finding. Effective problem finding was a result of students using resources from previous, specialized experiences. They had a positive self-concept and a temperament for both the creative and logical perspectives of science research. Successful problem finding was derived from an idiosyncratic, nonlinear, and flexible use and understanding of inquiry. Finally, problem finding was influenced and assisted by the community of practicing scientists, with whom the students had an exceptional ability to communicate effectively. As a result, there appears to be a juxtaposition of creative and logical/analytical thought for open inquiry that may not be present in other forms of inquiry. Instructional strategies are suggested for teachers of science research students to improve the quality of problem finding for their students and their subsequent research projects.
Varghese, Shainy B; Phillips, Carolyn A
2009-12-01
The overall goal of this study was to explore and describe the perceptions of advanced practice nurses (APNs) about caring while providing primary care using telehealth technology. This study used naturalistic inquiry methodology to elicit the subjective perceptions and reflections of a sample of APNs about how they convey caring in the context of telehealth. Thirteen APNs, selected by purposive and snowball sampling, participated in the study. The data for the study consisted of interviews conducted by e-mail using a semistructured interview guide. Data analysis used the constant comparison method; rigor and trustworthiness of the study procedures were established using the criteria of credibility, confirmability, dependability, and transferability. The findings revealed that the APNs conveyed caring to their telehealth patients by (1) being with them, (2) personifying the images, and (3) possessing certain attributes. The major constructs that emerged from the data together formed a model of how APNs conveyed caring in telehealth. These findings deepen the understanding of how APNs perceive caring in telehealth, can inform the profession's preparation of future telehealth APNs, and could support the development of an instrument to measure caring in telehealth.
Study Finds Charter Networks Give No Clear Edge on Results
ERIC Educational Resources Information Center
Shah, Nirvi
2011-01-01
The author reports on a national study of middle school students in 40 charter networks which finds that, when it comes to having an impact on student achievement, results vary and, overall, charter students do not learn dramatically more than their counterparts in regular public schools. The findings from the research group Mathematica and the…
Women's Employment Status, Coercive Control, and Intimate Partner Violence in Mexico
ERIC Educational Resources Information Center
Villarreal, Andres
2007-01-01
Findings from previous studies examining the relation between women's employment and the risk of intimate partner violence have been mixed. Some studies find greater violence toward women who are employed, whereas others find the opposite relation or no relation at all. I propose a new framework in which a woman's employment status and her risk of…
Zhu, Lei; Ranchor, Adelita V; Helgeson, Vicki S; van der Lee, Marije; Garssen, Bert; Stewart, Roy E; Sanderman, Robbert; Schroevers, Maya J
2018-05-01
This study aimed to (1) identify benefit finding trajectories in cancer patients receiving psychological care; (2) examine associations of benefit finding trajectories with levels of and changes in psychological symptoms; and (3) examine whether socio-demographic and medical characteristics distinguished trajectories. Naturalistic longitudinal study design. Participants were 241 cancer patients receiving psychological care at specialized psycho-oncological institutions in the Netherlands. Data were collected before starting psychological care, and three and 9 months thereafter. Latent class growth analysis was performed to identify benefit finding trajectories. Five benefit finding trajectories were identified: 'high level-stable' (8%), 'very low level-small increase' (16%), 'low level-small increase' (39%), 'low level-large increase' (9%), and 'moderate level-stable' (28%). People in distinct benefit finding trajectories reported significant differential courses of depression but not of anxiety symptoms. Compared with the other four trajectories, people in the 'low level-large increase' trajectory reported the largest decreases in depression over time. Perceptions of cancer prognosis distinguished these trajectories, such that people with a favourable prognosis were more likely to belong to the 'high level-stable' trajectory, while people perceiving an uncertain prognosis were more likely to belong to the 'low level-large increase' trajectory of benefit finding. Cancer patients showed distinct benefit finding trajectories during psychological care. A small proportion reporting a large increase in benefit finding were also most likely to show decreases in depressive symptoms over time. These findings suggest a relation between perceiving benefits from cancer experience and improved psychological functioning in cancer patients receiving psychological care. Statement of contribution What is already known on this subject? People vary in course of benefit finding (BF) after trauma, with some experiencing enhanced BF and others decreased BF. Empirical studies have identified subgroups of cancer patients with distinct BF trajectories. What does this study add? This is the first study showing that cancer patients followed different BF trajectories during psychological care. Only a small proportion experienced clinically meaningful increases in BF over time. More attention is needed for cancer patients with decreased BF, as they are at a higher risk of remaining depressed. © 2017 The British Psychological Society.
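Latent class growth analysis of the kind used above is typically run in specialized software (e.g., Mplus or the R package lcmm). As a rough illustration of the underlying idea only, the Python sketch below fits a per-patient linear growth curve to three measurement waves and then clusters the growth parameters; this two-step approximation is a simplified stand-in for true LCGA, and all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
times = np.array([0.0, 3.0, 9.0])  # months since start of psychological care
n_patients = 241

# Synthetic benefit-finding trajectories: a mix of stable and increasing patterns.
intercepts = rng.choice([1.5, 3.0, 4.5], size=n_patients)
slopes = rng.choice([0.0, 0.15], size=n_patients)
scores = intercepts[:, None] + slopes[:, None] * times + rng.normal(0, 0.3, (n_patients, 3))

# Step 1: summarize each patient by a fitted (intercept, slope) pair.
growth = np.array([np.polyfit(times, y, deg=1)[::-1] for y in scores])

# Step 2: cluster growth parameters into candidate trajectory classes.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(growth)
for k in range(5):
    members = growth[labels == k]
    print(f"class {k}: n={len(members)}, mean intercept={members[:, 0].mean():.2f}, "
          f"mean slope={members[:, 1].mean():.2f}")
```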
Reliability of Examination Findings in Suspected Community-Acquired Pneumonia.
Florin, Todd A; Ambroggio, Lilliam; Brokamp, Cole; Rattan, Mantosh S; Crotty, Eric J; Kachelmeyer, Andrea; Ruddy, Richard M; Shah, Samir S
2017-09-01
The authors of national guidelines emphasize the use of history and examination findings to diagnose community-acquired pneumonia (CAP) in outpatient children. Little is known about the interrater reliability of the physical examination in children with suspected CAP. This was a prospective cohort study of children with suspected CAP presenting to a pediatric emergency department from July 2013 to May 2016. Children aged 3 months to 18 years with lower respiratory signs or symptoms who received a chest radiograph were included. We excluded children hospitalized ≤14 days before the study visit and those with a chronic medical condition or aspiration. Two clinicians performed independent examinations and completed identical forms reporting examination findings. Interrater reliability for each finding was reported by using Fleiss' kappa (κ) for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. No examination finding had substantial agreement (κ/ICC > 0.8). Two findings (retractions, wheezing) had moderate to substantial agreement (κ/ICC = 0.6-0.8). Nine findings (abdominal pain, pleuritic pain, nasal flaring, skin color, overall impression, cool extremities, tachypnea, respiratory rate, and crackles/rales) had fair to moderate agreement (κ/ICC = 0.4-0.6). Eight findings (capillary refill time, cough, rhonchi, head bobbing, behavior, grunting, general appearance, and decreased breath sounds) had poor to fair reliability (κ/ICC = 0-0.4). Only 3 examination findings had acceptable agreement, with the lower 95% confidence limit >0.4: wheezing, retractions, and respiratory rate. In this study, we found fair to moderate reliability of many findings used to diagnose CAP. Only 3 findings had acceptable levels of reliability. These findings must be considered in the clinical management and research of pediatric CAP. Copyright © 2017 by the American Academy of Pediatrics.
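Fleiss' kappa, used above for the categorical examination findings, compares observed pairwise rater agreement with the agreement expected by chance. A self-contained Python sketch follows; the two-rater, two-category ratings matrix is illustrative, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning subject i to category j."""
    n_raters = counts.sum(axis=1)[0]         # assumes the same number of raters per subject
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    # Per-subject agreement: fraction of concordant rater pairs.
    P_i = np.sum(counts * (counts - 1), axis=1) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Example: 8 children, 2 clinicians, categories = (finding absent, finding present).
ratings = np.array([
    [2, 0], [0, 2], [2, 0], [1, 1],
    [0, 2], [2, 0], [1, 1], [0, 2],
])
print(f"kappa = {fleiss_kappa(ratings):.2f}")  # 0.50 here: moderate agreement
```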
Ronaldson, Sarah; Adamson, Joy; Dyson, Lisa; Torgerson, David
2014-10-01
Randomized controlled trials (RCTs) are widely used in health care research to provide high-quality evidence of effectiveness of an intervention. However, sometimes a study does not require an RCT in order to answer its primary objective; a case-finding design may be more appropriate. The aim of this paper was to introduce a new study design that nests a waiting list RCT within a case-finding study. An example of the new study design is the DOC Study, which primarily aims to determine the diagnostic accuracy of lung function tests for chronic obstructive pulmonary disease. It also investigates the impact of lung function tests on smoking behaviour through use of a waiting list design. The first step of the study design is to obtain participants' consent. Individuals are then randomized to one of two groups; either the 'intervention now' group or the 'intervention later' group, that is, participants are placed on a waiting list. All participants receive the same intervention; the only difference between the groups is the timing of the intervention. The design addresses patient preference issues and recruitment issues that can arise in other trial designs. Potential limitations include differential attrition between study groups and potential demoralization for the 'intervention later' group. The 'waiting list case-finding trial' design is a valuable method that could be applied to case-finding studies; the design enables the case-finding component of a study to be maintained while simultaneously exploring additional hypotheses through conducting a trial. © 2014 John Wiley & Sons, Ltd.
Writing usable qualitative health research findings.
Sandelowski, Margarete; Leeman, Jennifer
2012-10-01
Scholars in diverse health-related disciplines and specialty fields of practice routinely promote qualitative research as an essential component of intervention and implementation programs of research and of a comprehensive evidence base for practice. Remarkably little attention, however, has been paid to the most important element of qualitative studies--the findings in reports of those studies--and specifically to enhancing the accessibility and utilization value of these findings for diverse audiences of users. The findings in reports of qualitative health research are too often difficult to understand, and even to find, owing to the way they are presented. A basic strategy for enhancing the presentation of these findings is to translate them into thematic statements, which can in turn be translated into the language of intervention and implementation. Writers of qualitative health research reports might consider these strategies to better showcase the significance and actionability of findings to a wider audience.
Effect of emergency department CT on neuroimaging case volume and positive scan rates.
Oguz, Kader Karli; Yousem, David M; Deluca, Tom; Herskovits, Edward H; Beauchamp, Norman J
2002-09-01
The authors performed this study to determine the effect a computed tomographic (CT) scanner in the emergency department (ED) has on neuroimaging case volume and positive scan rates. The total numbers of ED visits and neuroradiology CT scans requested from the ED were recorded for 1998 and 2000, the years before and after the installation of a CT unit in the ED. For each examination type (brain, face, cervical spine), studies were graded for major findings (those that affected patient care), minor findings, and normal findings. The CT utilization rates and positive study rates were compared for each type of study performed for both years. There was a statistically significant increase in the utilization rate after installation of the CT unit (P < .001). The fractions of studies with major findings, minor findings, and normal findings changed significantly after installation of the CT unit for facial examinations (P = .002) but not for brain (P = .12) or cervical spine (P = .24) examinations. In all types of studies, the percentage of normal examinations increased. In toto, there was a significant decrease in the positive scan rate after installation of the CT scanner (P = .004). After installation of a CT scanner in the ED, there was increased utilization and a decreased rate of positive neuroradiologic examinations, the latter primarily due to lower positive rates for facial CT scans.
Hasson-Ohayon, Ilanit; Roe, David; Yanos, Philip T; Lysaker, Paul H
2016-12-01
Recent developments in mental health have emphasized recovery as an outcome for people with serious mental illness (SMI). Accordingly, several studies have attempted to evaluate the process and outcome of recovery-oriented psychosocial interventions. To review and discuss quantitative and qualitative findings from previous efforts to study the impact of five recovery-oriented interventions: Illness Management and Recovery (IMR), Narrative Enhancement and Cognitive Therapy (NECT), Supported Employment (SE), Supported Socialization (SS), and Family Psychoeducation. We reviewed the literature on studies that examined the effectiveness of these interventions using both quantitative and qualitative approaches. Qualitative findings in these studies augment quantitative findings and at times draw attention to unexpected findings and uniquely illuminate the effects of these interventions on self-reflective processes. There is a need for further exploration of how mixed methods can be implemented to explore recovery-oriented outcomes. Critical questions regarding the implications of qualitative findings are posed.
Do Research Findings Apply to My Students? Examining Study Samples and Sampling
ERIC Educational Resources Information Center
Cook, Bryan G.; Cook, Lysandra
2017-01-01
Special educators are urged to use research findings to inform their instruction in order to improve student outcomes. However, it can be difficult to tell whether and how research findings apply to one's own students. In this article, we discuss how special educators can consider the samples and the sampling methods in studies to examine the…
ERIC Educational Resources Information Center
Chen, Haichun
2015-01-01
It was widely reported in China that female college students were fitter than male college students in a recent fitness study in Jiangsu Province, China. After carefully examining the finding and its related context, I believe that this specific finding was a "side effect" of the changes made in the "2014 Chinese Students Fitness…
The Educational System in Japan: Case Study Findings.
ERIC Educational Resources Information Center
Stevenson, Harold; Lee, Shin-Ying; Nerison-Low, Roberta
This document summarizes the findings of a year-long study that used case studies of specific schools in Japan to collect qualitative data on the Japanese educational experience. From 1994-95 the Case Study Project (a component of the Third International Mathematics and Science Study) collected information from interviews with students, parents,…
The Influence of Judgment Calls on Meta-Analytic Findings.
Tarrahi, Farid; Eisend, Martin
2016-01-01
Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings and question their robustness. However, prior research applies case study comparison or reanalysis of a few meta-analyses with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non-judgment call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non-judgment call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non-judgment call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.
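The multilevel structure described above (effect sizes nested within meta-analyses, with judgment calls as meta-analysis-level predictors and non-judgment-call differences such as publication year as controls) can be sketched with a mixed-effects model. The Python example below is a minimal illustration on synthetic data with hypothetical column names, not a reproduction of the authors' meta-meta-analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_meta, per_meta = 176, 10
meta_id = np.repeat(np.arange(n_meta), per_meta)

# Meta-analysis-level variables, expanded to one row per effect size.
judgment_call = rng.integers(0, 2, n_meta)[meta_id]  # e.g., "included unpublished studies"
year = rng.integers(1990, 2015, n_meta)[meta_id]     # a non-judgment-call difference

effect = (0.30 + 0.02 * judgment_call                # small judgment-call influence
          + 0.01 * (year - 2000)                     # larger between-meta trend
          + rng.normal(0, 0.05, n_meta)[meta_id]     # random intercept per meta-analysis
          + rng.normal(0, 0.05, n_meta * per_meta))  # effect-size-level noise

df = pd.DataFrame({"effect": effect, "judgment_call": judgment_call,
                   "year": year, "meta_id": meta_id})

# Random-intercept model: judgment calls and year as fixed effects,
# meta-analysis as the grouping level.
fit = smf.mixedlm("effect ~ judgment_call + year", df, groups=df["meta_id"]).fit()
print(fit.summary())
```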
Incidental histopathological findings in hearts of control beagle dogs in toxicity studies.
Bodié, Karen; Decker, Joshua H
2014-08-01
In preclinical studies of pharmaceutical agents, the beagle dog is a commonly used model for the detection of cardiotoxicity. Incidental findings, postmortem changes, and artifacts must be distinguished histopathologically from test item-related findings in the heart. In this retrospective analysis, cardiac sections from 88 control beagles (41 male, 47 female; ages 5-18 months) in preclinical studies were examined histopathologically. The most common finding was thickening of the tunica media of intramural coronary arteries, most likely a postmortem change. The second most common finding was the presence of vacuoles within Purkinje fibers. Dilated lymphatic and blood vessels at the insertion of chordae tendineae were noted more commonly in males than in females and were considered a normal anatomic feature. Mesothelial-lined papillary fronds along the epicardial surface of the atria were present in several dogs, as were small infiltrates of inflammatory cells usually within the myocardium. In summary, control beagles' hearts frequently have incidental findings that must be differentiated from test item-related pathologic changes. Historical control data can be useful for the interpretation of incidental and test item-related findings in the beagle heart. © 2013 by The Author(s).
Sundby, Anna; Boolsen, Merete W; Burgdorf, Kristoffer S; Ullum, Henrik; Hansen, Thomas F; Middleton, Anna; Mors, Ole
2017-10-01
Increasingly, psychiatric research studies use whole genome sequencing or whole exome sequencing. Consequently, researchers face difficult questions, such as which genomic findings to return to research participants and how. This study aims to gain more knowledge of the attitudes among potential research participants and health professionals toward receiving pertinent and incidental findings. A cross-sectional online survey was developed to investigate the attitudes among research participants toward receiving genomic findings. A total of 2,637 stakeholders responded: 241 persons with mental disorders, 671 relatives, 1,623 blood donors, 74 psychiatrists, and 28 clinical geneticists. Stakeholders wanted both pertinent findings (95%) and incidental findings (91%) to be made available for research participants. The majority (77%) stated that researchers should not actively search for incidental findings. Persons with mental disorders and relatives were generally more positive about receiving any kind of findings than clinical geneticists and psychiatrists. Compared with blood donors, persons with mental disorders reported being more positive about receiving raw genomic data and information that is not of serious health importance. Psychiatrists and clinical geneticists were less positive about receiving genomic findings compared with blood donors. The attitudes toward receiving findings were very positive. Stakeholders were willing to refrain from receiving incidental information if it could compromise the research. Our results suggest that research participants consider themselves altruistic participants. This study offers valuable insight, which may inform future programs aiming to develop new strategies to target issues relating to the return of findings in genomic research. © 2017 The Authors. American Journal of Medical Genetics Part A Published by Wiley Periodicals, Inc.
Longitudinal trajectories of benefit finding in adolescents with Type 1 diabetes.
Rassart, Jessica; Luyckx, Koen; Berg, Cynthia A; Oris, Leen; Wiebe, Deborah J
2017-10-01
Benefit finding, which refers to perceiving positive life changes resulting from adversity, has been associated with better psychosocial well-being in different chronic illnesses. However, little research to date has examined how benefit finding develops in the context of Type 1 diabetes (T1D). The present study aimed to identify trajectories of benefit finding across adolescence and to investigate prospective associations with depressive symptoms, self-care, and metabolic control. Adolescents with T1D aged 10 to 14 (Mage = 12.49 years, 54% girls) participated in a 4-wave longitudinal study spanning 1.5 years (N = 252 at Time 1). Adolescents filled out questionnaires on benefit finding, self-care, depressive symptoms, and illness perceptions. HbA1c values were obtained through point of care assays. We used latent growth curve modeling (LGCM) and latent class growth analysis (LCGA) to examine the development of benefit finding. Cross-lagged path analysis and multi-group LGCM were used to examine prospective associations among the study variables. Adolescents reported moderate levels of benefit finding which decreased over time. Three benefit finding trajectory classes were identified: low and decreasing, moderate and decreasing, and high and stable. These trajectory classes differed in terms of self-care, perceived personal and treatment control, and perceptions of illness cyclicality. Higher levels of benefit finding predicted relative increases in self-care 6 months later. Benefit finding was not prospectively related to depressive symptoms and metabolic control. Benefit finding may serve as a protective factor for adolescents with Type 1 diabetes and may motivate these adolescents to more closely follow their treatment regimen. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
If it is published in the peer-reviewed literature, it must be true?
Wagner, Louis K
2014-10-01
Epidemiological research correlating cancer rates in a population of patients with radiation doses from medical X-rays is fraught with confounding factors that obscure whether any positive relationship is causal. This is a review of four studies involving some of those confounding factors. Comparing findings with those of other studies not encumbered by similar confounding factors can strengthen assertions of causation between medical X-rays and cancer rates. Even so, such assertions rest significantly on the opinions of researchers regarding the degree of consistency between findings among various studies. The question as to what degree any findings truly represent cause and effect will likely still meet with controversy. The importance of these findings to medicine should therefore lie not in any controversy regarding causation, but in what the findings potentially mean with regard to benefit and risk for patients and the professional practice of medicine.
Overview of Recent Marine and Freshwater Recreational Epidemiology Studies and Their Findings
Timothy J. Wade, Elizabeth A. Sams, Rich Haugland, Alfred P. Dufour
The National Epidemiologic and Environmental Assessment of Recreational Water Study was conducted to address aspects...
Muehlhausen, Willie; Byrom, Bill; Skerritt, Barbara; McCarthy, Marie; McDowell, Bryan; Sohn, Jeremy
2018-01-01
To synthesize the findings of cognitive interview and usability studies performed to assess the measurement equivalence of patient-reported outcome (PRO) instruments migrated from paper to electronic formats (ePRO), and make recommendations regarding future migration validation requirements and ePRO design best practice. We synthesized findings from all cognitive interview and usability studies performed by a contract research organization between 2012 and 2015: 53 studies comprising 68 unique instruments and 101 instrument evaluations. We summarized study findings to make recommendations for best practice and future validation requirements. Five studies (9%) identified minor findings during cognitive interview that may possibly affect instrument measurement properties. All findings could be addressed by application of ePRO best practice, such as eliminating scrolling, ensuring appropriate font size, ensuring suitable thickness of visual analogue scale lines, and providing suitable instructions. Similarly, regarding solution usability, 49 of the 53 studies (92%) recommended no changes in display clarity, navigation, operation, and completion without help. Reported usability findings could be eliminated by following good product design such as the size, location, and responsiveness of navigation buttons. With the benefit of accumulating evidence, it is possible to relax the need to routinely conduct cognitive interview and usability studies when implementing minor changes during instrument migration. Application of design best practice and selecting vendor solutions with good user interface and user experience properties that have been assessed in a representative group may enable many instrument migrations to be accepted without formal validation studies by instead conducting a structured expert screen review. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario; Fubelli, Giandomenico; Giardino, Marco
2016-04-01
Encoding of geologic knowledge in formal languages is an ambitious task, aiming at the interoperability and organic representation of geological data, and semantic characterization of geologic maps. Initiatives such as GeoScience Markup Language (last version is GeoSciML 4, 2015[1]) and INSPIRE "Data Specification on Geology" (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013[2]), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG[3]), have been promoting information exchange of the geologic knowledge. There have also been limited attempts to encode the knowledge in a machine-readable format, especially in the lithology domain (see e.g. the CGI_Lithology ontology[4]), but a comprehensive ontological model that connects the several knowledge sources is still lacking. This presentation concerns the "OntoGeonous" initiative, which aims at encoding the geologic knowledge, as expressed through the standard vocabularies, schemas and data models mentioned above, through a number of interlinked computational ontologies, based on the languages of the Semantic Web and the paradigm of Linked Open Data. The initiative proceeds in parallel with a concrete case study, concerning the setting up of a synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap" (developed by the CNR Institute of Geosciences and Earth Resources, CNR IGG, Torino), where the description and classification of GeologicUnits has been supported by the modeling and implementation of the ontologies. We have devised a tripartite ontological model called OntoGeonous that consists of: 1) an ontology of the geologic features (in particular, GeologicUnit, GeomorphologicFeature, and GeologicStructure[5], modeled from the definitions and UML schemata of CGI vocabularies[6], GeoScienceML and INSPIRE, and aligned with the Planetary realm of the NASA SWEET ontology[7]), 2) an ontology of the Earth materials (as defined by the SimpleLithology CGI vocabulary and aligned as a subclass of the Substance class in the NASA SWEET ontology), and 3) an ontology of the MappedFeatures (as defined in the Representation sub-taxonomy of the NASA SWEET ontology). The latter correspond to the concrete elements of the map, with their geometry (polygons, lines) and geographical coordinates. The ontology model has been developed by taking into account applications primarily concerning the needs of geological mapping; nevertheless, the model is general enough to be applied to other contexts. In particular, we show how the automatic reasoning capabilities of the ontology system can be employed in tasks of unit definition and input filling of the map database and for supporting geologists in thematic re-classification of the map instances (e.g. for coloring tasks). ---------------------------------------- [1] http://www.geosciml.org [2] http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf [3] http://www.cgi-iugs.org/tech_collaboration/geoscience_terminology_working_group.html [4] https://www.seegrid.csiro.au/subversion/CGI_CDTGVocabulary/trunk/OwlWork/CGI_Lithology.owl [5] We are currently neglecting the encoding of the geologic events, left as a future work. [6] http://resource.geosciml.org/vocabulary/cgi/201211/ [7] Web site: https://sweet.jpl.nasa.gov; Di Giuseppe et al., 2013, SWEET ontology coverage for earth system sciences, http://www.ics.uci.edu/~ndigiuse/Nicholas_DiGiuseppe/Research_files/digiuseppe14.pdf; S. Barahmand et al. 2009, A Survey on SWEET Ontologies and their Applications, http://www-scf.usc.edu/~taheriya/reports/csci586-report.pdf
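The tripartite model above lends itself to a compact Semantic Web encoding. As a minimal sketch only, not the actual OntoGeonous ontology, the following Python fragment uses rdflib to declare the three sub-ontologies and their alignments with SWEET and CGI SimpleLithology; the OntoGeonous and SWEET namespace URIs are assumptions for illustration, while the CGI lithology URI follows the resource.geosciml.org vocabulary cited above.

    # Sketch of the tripartite alignment; namespaces marked below are assumptions.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    GEO = Namespace("http://example.org/ontogeonous#")            # hypothetical
    SWEET_MATR = Namespace("http://sweetontology.net/matr/")      # assumed SWEET realm
    CGI_LITH = Namespace("http://resource.geosciml.org/classifier/cgi/lithology/")

    g = Graph()
    g.bind("geo", GEO)

    # 1) Geologic features (GeologicUnit, GeomorphologicFeature, GeologicStructure)
    for cls in ("GeologicUnit", "GeomorphologicFeature", "GeologicStructure"):
        g.add((GEO[cls], RDF.type, OWL.Class))

    # 2) Earth materials: SimpleLithology terms under the SWEET Substance class
    g.add((GEO.EarthMaterial, RDF.type, OWL.Class))
    g.add((GEO.EarthMaterial, RDFS.subClassOf, SWEET_MATR.Substance))
    g.add((CGI_LITH.granite, RDF.type, GEO.EarthMaterial))

    # 3) Mapped features: the concrete map elements carrying geometry
    g.add((GEO.MappedFeature, RDF.type, OWL.Class))

    print(g.serialize(format="turtle"))

A reasoner loaded with such axioms can then classify a map polygon typed by its lithology term, which is the kind of automatic re-classification (e.g., for coloring tasks) the abstract describes.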
NASA Astrophysics Data System (ADS)
Macario, K.; Coe, H. H.; Gomes, J.; Oliveira, F.; Gomes, P.; Carvalho, C.; Linares, R.; Alves, E.; Santos, G. M.
2012-12-01
The Brazilian Southeast was formerly occupied by Atlantic forest before the arrival of Europeans in the 16th century, when deforestation slowly started to take place. To understand the variations in the vegetation of Cabo Frio during the Quaternary, and possibly identify roughly when they took place, we make use of soil phytolith identification (as a proxy), stable isotope analyses, and 14C dating of soil profiles. Nowadays, these are helpful tools to reveal the palaeoenvironmental secrets hidden below ground. The soil profile studied, which was divided into 4 horizons ranging from 10 to 115 cm in depth, was collected in the surroundings of Cabo Frio, Rio de Janeiro (RJ), on the southeastern coast of Brazil. Its total organic carbon (TOC) varied from 0.42 to 1.11% (across the different horizons), while its δ13C values ranged from -18.81 (topsoil) to -23.72‰ (~80 cm deep). The phytolith D/P index varied from 0.1 to 0.21. Due to the low carbon content within soil horizons, soil organic matter (SOM) fractions were chosen for isotopic analyses. Most of the 14C-SOM analyses were performed at a newer 14C facility, the Radiocarbon Laboratory of the Fluminense Federal University (LAC-UFF) in Niteroi, RJ, which runs a NEC 250 kV Single Stage Accelerator Mass Spectrometry system. In brief, before measurements could be performed, the soil samples were treated with 1.0 M HCl to remove carbonates, then combusted in sealed evacuated pre-baked tubes, cryogenically cleaned, and converted to graphite (as described in Xu et al. 2007). In order to verify the distribution of 14C ages of different chemical soil fractions (Pessenda et al. 2001), a refractory C fraction (humin) was extracted from the topsoil horizon and also converted to graphite following established protocols (Santos et al. 2007a,b). Due to its very low carbon mass (<<50 µg C), this graphite target was processed and measured at the Keck-CCAMS Facility at the University of California, Irvine (UCI), which runs a modified NEC AMS system (NEC 0.5MV 1.5SDH-2 AMS system). Other SOM samples from the same profile were also measured by 14C-AMS. Control and background samples, subjected to the same procedures as unknown samples, were also processed and measured by both facilities. All 14C age results were calibrated using the OxCal4 software (Bronk Ramsey 2009) coupled with the Southern Hemisphere (SHCal04) dataset (McCormac et al. 2004). Preliminary phytolith and isotope analyses indicate open vegetation with few trees, and predominantly C3 grasses, for the last 10 ka. The deepest horizon is the one with the greatest phytolith stock and D/P index, indicating a more humid environment at the beginning of the Holocene. A detailed discussion of the results will be presented. References: Bronk Ramsey, C. (2009) Radiocarbon, 51(1): 337-360; McCormac et al. (2004) Radiocarbon 46(3): 1087-1092; Pessenda et al. (2001) Radiocarbon 43(2B): 595-601; Santos et al. (2007a) Nuclear Instruments and Methods in Physics Research B 259: 293-302; Santos et al. (2007b) Radiocarbon 49(2): 255-269; Xu et al. (2007) Nuclear Instruments and Methods in Physics Research B 259: 320-329
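For readers unfamiliar with the quantities involved: an AMS measurement yields a fraction modern (F14C), which is converted to a conventional radiocarbon age before calibration against SHCal. A worked sketch of that standard conversion (Stuiver & Polach 1977), independent of the software used in the study:

    # Conventional 14C age from fraction modern, using the Libby mean life (8033 yr).
    # Calendar ages still require calibration (e.g., OxCal with SHCal04), as above.
    import math

    def conventional_age(f14c: float) -> float:
        """14C age in years BP for a given fraction modern."""
        return -8033.0 * math.log(f14c)

    print(round(conventional_age(0.5)))  # 5568 yr BP, i.e., one Libby half-life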
Noninvasive Imaging of Administered Progenitor Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Steven R., M.D., Ph.D.
The objective of this research grant was to develop an approach for labeling progenitor cells, specifically those that we had identified as being able to replace ischemic heart cells, so that their distribution could be followed non-invasively. In addition, the research was aimed at determining whether administration of progenitor cells resulted in improved myocardial perfusion and function. The efficiency and toxicity of radiolabeling of progenitor cells were to be evaluated. For the proposed clinical protocol, subjects with end-stage ischemic coronary artery disease were to undergo a screening cardiac positron emission tomography (PET) scan using N-13 ammonia to delineate myocardial perfusion and function. If they qualified based on their PET scan, they would undergo an in-hospital protocol whereby CD34+ cells were stimulated by the administration of granulocyte colony-stimulating factor (G-CSF). CD34+ cells would then be isolated by apheresis and labeled with indium-111 oxine. Cells were to be re-infused, and subjects were to undergo single photon emission computed tomography (SPECT) scanning to evaluate uptake and distribution of labeled progenitor cells. Three months after administration of progenitor cells, a cardiac PET scan was to be repeated to evaluate changes in myocardial perfusion and/or function. Indium oxine is a radiopharmaceutical for labeling of autologous lymphocytes. Indium-111 (In-111) decays by electron capture with a t½ of 67.2 hours (2.8 days). Indium forms a saturated complex that is neutral, lipid soluble, and permeates the cell membrane. Within the cell, the indium-oxyquinoline complex labels via indium intracellular chelation. Following leukocyte labeling, ~77% of the In-111 is incorporated in the cell pellet. The presence of red cells and/or plasma reduces the labeling efficiency. Therefore, the product needed to be washed to eliminate plasma proteins. This repeated washing can damage cells. The CD34-selected product was a 90-99% pure population of leukocytes. Viability was assessed using Trypan blue histological analysis. We successfully isolated and labeled ~25-30 × 10⁷ CD34+ lymphocytes in cytokine-mobilized progenitor cell apheresis harvests. Cells were also subjected to a stat Gram stain to look for bacterial contamination, stat endotoxin LAL to look for endotoxin contamination, flow cytometry for evaluation of the purity of the cells, and 14-day sterility culture. Colony-forming assays confirmed the capacity of these cells to proliferate and function ex vivo, with CFU-GM values of 26 colonies per 1 × 10⁴ cells plated and 97% viability in cytokine-augmented methylcellulose at 10-14 days in CO₂ incubation. We developed a closed processing system for product labeling prior to infusion to maintain autologous cell integrity and sterility. Release criteria for the labeled product were documented for viability, cell count and differential, and measured radiolabel. We were successful in labeling the cells with up to 500 µCi/10⁸ cells, with viability of >98%. However, due to delays in getting the protocol approved by the FDA, the cells were not infused in humans at this location (although we did successfully use CD34+ cells in humans in a study in Australia). The approach developed should permit labeling of progenitor cells that can be administered to human subjects for tracking. The labeling approach should be useful for all progenitor cell types, although this would need to be verified since different cell lines may have differential radiosensitivity.
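The 67.2-hour half-life quoted above bounds how long labeled cells can be tracked. As a worked illustration of the decay law (the 24 h delay below is a hypothetical imaging interval, not a figure from the report):

    # Remaining In-111 activity: A(t) = A0 * 2**(-t / t_half), with t_half = 67.2 h.
    T_HALF_H = 67.2

    def activity(a0: float, hours: float) -> float:
        """Activity remaining after `hours`, in the same units as a0."""
        return a0 * 2 ** (-hours / T_HALF_H)

    # e.g., the 500 uCi/10^8-cell label after a 24 h delay before SPECT:
    print(f"{activity(500.0, 24.0):.0f} uCi")  # ~390 uCi, ~78% of the initial dose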
NASA Astrophysics Data System (ADS)
Czimczik, Claudia; Mouteva, Gergana; Fahrni, Simon; Santos, Guaciara; Randerson, James
2014-05-01
Increased fossil fuel consumption and biomass burning are contributing to significantly larger emissions of black carbon (BC) aerosols to the atmosphere. Together with organic carbon (OC), BC is a major constituent of fine particulate matter in urban air, contributes to haze, and has been linked to a broad array of adverse health effects. Black carbon's high light absorption capacity and role in key direct and indirect climate feedbacks also lead to a range of impacts in the Earth system (e.g., warming, accelerated snow melt, changes in cloud formation). Recent work suggests that regulating BC emissions can play an important role in improving regional air quality and reducing future climate warming. However, BC's atmospheric transport pathways, lifetime, and magnitudes of emissions by sector and region, particularly emissions from large urban centers, remain poorly constrained by measurements. Contributions of fossil and modern sources to the carbonaceous aerosol pool (corresponding mainly to traffic/industrial and biomass-burning/biogenic sources, respectively) can be quantified unambiguously by measuring the aerosol radiocarbon (14C) content. However, accurate 14C-based source apportionment requires the physical isolation of BC and OC, and minimal sample contamination with extraneous carbon or from OC charring. Compound class-specific 14C analysis of BC remains challenging due to very small sample sizes (5-15 µg C). Therefore, most studies to date have only analyzed the 14C content of the total organic carbonaceous aerosol fraction. Here, we present time-series 14C data of BC and OC from the Los Angeles (LA) metropolitan area in California - one of two megacities in the United States - and from Salt Lake City (SLC), UT. In the LA area, we analyzed 48-h PM10 samples near the LA port throughout 2007 and 2008 (with the exception of summer). We also collected monthly PM2.5 samples at the University of California - Irvine, with shorter sampling periods during regional wildfire activity and Santa Ana winds from March to August 2013. In SLC, we seasonally collected 48-h PM2.5 samples from October 2012 to February 2014. We isolated and quantified BC and OC using a thermo-optical analyzer (RT 3080, Sunset Laboratory, Tigard, OR, USA) with the Swiss_4S protocol, and measured the 14C content of BC and OC with accelerator mass spectrometry at UCI's KCCAMS facility. We also measured the concentration and stable isotope composition of total (organic) carbon and nitrogen on the aerosol filters with EA-IRMS (Carlo Erba coupled to a Finnigan DeltaPlus). Preliminary results suggest that in LA, PM10 BC concentrations are on the order of 2-8 µg C/m3. Black carbon is 14C-depleted (FM 0.04-0.21), indicating that fossil sources dominate emissions. In comparison, OC concentrations were higher (12-17 µg C/m3) and more enriched in 14C (FM 0.54-0.83). In SLC, PM2.5 BC concentrations range from <1 to 3 µg C/m3, with the highest concentrations observed during wintertime inversions. The BC fraction is strongly 14C-depleted (FM 0.06 to 0.12), indicating a dominance of fossil BC emissions throughout the year. Together, our measurements contribute to a comprehensive quantification of temporal and spatial variations in urban BC, a key uncertainty in constraining BC sources and transport in western North America.
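The link from fraction modern (FM) to source contributions is a simple isotope mass balance. As a hedged sketch (the non-fossil reference value below is a typical assumption in aerosol 14C studies, not a number from this abstract):

    # Fossil fraction by isotope mass balance: f_fossil = 1 - FM / FM_nf,
    # where FM_nf is the fraction modern of purely non-fossil carbon
    # (~1.0-1.1 due to residual bomb 14C; 1.05 here is an assumed value).
    def fossil_fraction(fm: float, fm_nonfossil: float = 1.05) -> float:
        return 1.0 - fm / fm_nonfossil

    for fm in (0.04, 0.21):  # the LA BC range reported above
        print(f"FM={fm:.2f} -> ~{fossil_fraction(fm):.0%} fossil")
    # ~96% and ~80% fossil, consistent with traffic/industrial dominance.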
Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies
Kuo, Chia-Ling; Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.
2015-01-01
Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller P-value is required to be deemed significant. However, a small P-value is not equivalent to small chances of a spurious finding and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have been met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of POFIG method via analysis of GWAS associations with Crohn's disease. PMID:25955023
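The abstract's POFIG estimator is not spelled out here, but the FPRP it builds on has a closed form (Wacholder et al. 2004) that makes the central point concrete: a small P-value can still carry a high probability of being spurious. The inputs below are illustrative, not taken from the paper:

    # FPRP = a*(1-pi) / (a*(1-pi) + power*pi), where a is the attained significance
    # level, pi the prior probability of a true association, and `power` the power
    # at the hypothesized effect size.
    def fprp(alpha: float, power: float, prior: float) -> float:
        false_pos = alpha * (1.0 - prior)
        true_pos = power * prior
        return false_pos / (false_pos + true_pos)

    # With a GWAS-like prior of 1 in 10,000 and 80% power, P = 1e-4 is still
    # more likely spurious than genuine:
    print(f"{fprp(alpha=1e-4, power=0.8, prior=1e-4):.2f}")  # ~0.56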
Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.
Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L
2013-06-01
Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.
Gorter, Ramon R; van Amstel, Paul; van der Lee, Johanna H; van der Voorn, Patrick; Bakx, Roel; Heij, Hugo A
2017-08-01
To determine whether non-operative treatment is safe in children with acute appendicitis, we evaluated the incidence of unexpected findings after an appendectomy in children and the influence they have on subsequent treatment. A historical cohort study (January 2004-December 2014) was performed including children, aged 0-17 years, who underwent an appendectomy for suspicion of acute appendicitis. Patients were divided based upon histopathological examination. Unexpected findings were reviewed, as was the subsequent treatment plan. In total, 484 patients were included in this study. In the overall group, unexpected findings were noted in 10 (2.1%) patients: 2 identified intra-operatively with a non-inflamed appendix (ileitis terminalis, N=1; ovarian torsion, N=1) and 8 on histopathological examination. The latter group consisted of 4 patients with concomitant simple appendicitis (parasitic infection, N=3; Walthard cell rest, N=1), 2 with concomitant complex appendicitis (carcinoid, N=1; parasitic infection, N=1), and 2 patients with a non-inflamed appendix (endometriosis, N=1; parasitic infection, N=1). Treatment was changed in 4 patients (<1%). Results from this study corroborate the safety of a non-operative strategy for acute simple appendicitis, as the occurrence of unexpected findings was low and changes to the treatment plan because of serious findings were rarely necessary. Prognosis study. Level 2 (retrospective cohort study). Copyright © 2017 Elsevier Inc. All rights reserved.
State Politics and Education: An Examination of Selected Multiple-State Case Studies.
ERIC Educational Resources Information Center
Burlingame, Martin; Geske, Terry G.
1979-01-01
Reviews the multiple-state case study literature, highlights some findings, discusses several methodological issues, and concludes with suggestions for possible research agendas. Urges students and researchers to be more actively critical of the assumptions and findings of these studies. (Author/IRT)
Granqvist, Pehr; Mikulincer, Mario; Gewirtz, Vered; Shaver, Phillip R
2012-11-01
Four studies examined implications of attachment theory for psychological aspects of religion among Israeli Jews. Study 1 replicated previous correlational findings indicating correspondence among interpersonal attachment orientations, attachment to God, and image of God. Studies 2-4 were subliminal priming experiments, which documented both normative and individual-difference effects. Regarding normative effects, findings indicated that threat priming heightened cognitive access to God-related concepts in a lexical decision task (Study 2); priming with "God" heightened cognitive access to positive, secure base-related concepts in the same task (Study 3); and priming with a religious symbol caused neutral material to be better liked (Study 4). Regarding individual differences, interpersonal attachment-related avoidance reduced the normative effects (i.e., avoidant participants had lower implicit access to God as a safe haven and secure base). Findings were mostly independent of level of religiousness. The present experiments considerably extend the psychological literature on connections between attachment constructs and aspects of religion. (c) 2012 APA, all rights reserved.
Adolescents' Declining Motivation to Learn Science: A Follow-Up Study
ERIC Educational Resources Information Center
Vedder-Weiss, Dana; Fortus, David
2012-01-01
This is a mixed-methods follow-up study in which we reconfirm the findings from an earlier study [Vedder-Weiss & Fortus [2011] "Journal of Research in Science Teaching, 48(2)", 199-216]. The findings indicate that adolescents' declining motivation to learn science, which was found in many previous studies [Galton [2009] "Moving to…
Urban children and nature: a summary of research on camping and outdoor education
Burch, William R., Jr.
1977-01-01
This paper reports the preliminary findings of an extensive bibliographic search that identified studies of urban children in camp and outdoor education programs. These studies were systematically abstracted and classified as qualitative or quantitative. Twenty-five percent of the abstracted studies were quantitative. The major findings, techniques of study, and policy...
Study Abroad: A Competitive Edge for Women?
ERIC Educational Resources Information Center
Opper, Susan
1991-01-01
Studies effects of study abroad among 172 female and 217 male graduates in the United Kingdom, France, and the Federal Republic of Germany between 1980 and 1984. Finds study abroad expedites obtaining job interviews but was of little advantage in securing employment for either sex. Finds degree credential provides a competitive edge. (NL)
ERIC Educational Resources Information Center
Im, Soo-hyun; Varma, Keisha; Varma, Sashank
2017-01-01
Background: The "seductive allure of neuroscience explanations" (SANE) is the finding that people overweight psychological arguments when framed in terms of neuroscience findings. Aim: This study extended this finding to arguments concerning the application of psychological findings to educational topics. Sample Participants (n = 320)…
Thapa, S S; Lakhey, R B; Sharma, P; Pokhrel, R K
2016-05-01
Magnetic resonance imaging is routinely done for the diagnosis of lumbar disc prolapse. Many abnormalities of the disc are observed even in asymptomatic patients. This study was conducted to correlate these abnormalities observed on magnetic resonance imaging with the clinical features of lumbar disc prolapse. This prospective analytical study included 57 cases of lumbar disc prolapse presenting to the Department of Orthopedics, Tribhuvan University Teaching Hospital, from March 2011 to August 2012. All patients had magnetic resonance imaging of the lumbar spine, and the findings regarding type, level and position of lumbar disc prolapse, and any neural canal or foraminal compromise were recorded. These imaging findings were then correlated with clinical signs and symptoms. A chi-square test was used to find the p-value for the correlation between clinical features and magnetic resonance imaging findings, using SPSS 17.0. This study included 57 patients, with a mean age of 36.8 years. Of them, 41 (71.9%) patients had radicular leg pain along a specific dermatome. Magnetic resonance imaging showed 104 lumbar disc prolapse levels. Disc prolapse at the L4-L5 and L5-S1 levels constituted 85.5%. Magnetic resonance imaging findings of neural foramina compromise and nerve root compression were fairly correlated with clinical findings of radicular pain and neurological deficit. Clinical features and magnetic resonance imaging findings of lumbar disc prolapse had fair correlation, but not all imaging abnormalities have clinical significance.
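The reported association tests pair each clinical sign with the corresponding imaging finding in a contingency table. An equivalent chi-square test in Python (the study used SPSS 17.0; the cell counts below are hypothetical, chosen only to match the 57-patient, 41-with-radicular-pain totals):

    from scipy.stats import chi2_contingency

    # rows: radicular pain present / absent; cols: MRI nerve-root compression yes / no
    table = [[34, 7],
             [5, 11]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")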
Gupta, Surya N; Gupta, Vikash S; White, Andrew C
2016-01-01
Intracranial incidental findings on magnetic resonance imaging (MRI) of the brain continue to generate interest in healthy control, research, and clinical subjects. However, in clinical practice, the discovery of incidental findings acts as a "distractor". This review is based on existing heterogeneous reports, their clinical implications, and how the results of incidental findings influence clinical management. It draws attention to the following: (1) the prevalence of clinically significant incidental findings is low; (2) there is a lack of a systematic approach to classification; and (3) it discusses how to deal with detected incidental findings based on a proposed common clinical profile. Individualized neurological care requires an active discussion regarding the need for neuroimaging. The clinical significance of incidental findings should be decided based on the lesion's neuroradiologic characteristics in the given clinical context. Available evidence suggests that the outcome of an incidentally found "serious lesion in children" is excellent. Future studies of intracranial incidental findings on pediatric brain MRI should focus on a homogeneous population and should extend this clinical knowledge-based review with adequately powered statistical analyses. PMID:27610341
NASA Astrophysics Data System (ADS)
Tapilouw, M. C.; Firman, H.; Redjeki, S.; Chandra, D. T.
2018-05-01
To refresh natural environmental concepts in science, science teachers have to attend teacher training. In teacher training, all participants can share and discuss with other science teachers. This study is the first step of a science teacher training program held by an education foundation in Bandung and attended by 20 science teachers from 18 junior high schools. The major aim of this study is to gather science teachers' ideas of environmental concepts. The core questions used in this study concern the basic competencies linked with environmental concepts, the environmental concepts that are difficult to explain, the actions taken to overcome difficulties, and the references used in teaching environmental concepts. There are four major findings in this study. First, most environmental concepts are taught in 7th grade. Second, the most difficult environmental concepts are found in 7th grade. Third, there are five actions to overcome difficulties. Fourth, science teachers use at least four references in mastering environmental concepts. Overall, teacher training can be a solution to reduce difficulties in teaching environmental concepts.
Critical role of seasonal tributaries for native fish and aquatic biota in the Sacramento River
NASA Astrophysics Data System (ADS)
Marchetti, M.
2016-12-01
We examined the ecology of seasonal tributaries in California in terms of native fishes and aquatic macroinvertebrates. This talk summarizes data from five individual studies. Studying juvenile Chinook growth using otolith microstructure, we find that fish grow faster and larger in seasonal tributaries. In a four-year study of the abundance of native fish larvae in tributaries of the Sacramento River, we find that certain tributaries produce an order of magnitude more native fish larvae than nearby permanent streams. In a study comparing the distribution and abundance of aquatic macroinvertebrates in a seasonal tributary with a permanent stream, we find that the seasonal tributary contains unique taxa, higher drift densities, and ecologically distinct communities. In a cross-watershed comparison of larval fish drift, we find that a seasonal tributary produces more larvae than all other streams/rivers we examined. In a comparison of juvenile Chinook growth morphology between seasonal and permanent streams using geometric morphometrics, we find that salmon show phenotypic plasticity and that their growth is characteristically different in seasonal tributaries. Taken together, this body of work highlights the critical ecological importance of this habitat.
ERIC Educational Resources Information Center
Hodgson, Ann, Ed.; Spours, Ken, Ed.
This document presents and discusses case studies that examined the relationship between part-time employment and advanced level study at 15 schools in Essex, England. "Foreword" (David Jones) provides a brief overview of the project. "Finding a Balance--Fifteen Institutional Case Studies on the Relationship between Part-time Work…
National space transportation studies
NASA Technical Reports Server (NTRS)
Durocher, Cort L.; Irby, Thomas M.; Jenkins, James C.; Gorski, Raymond J.
1986-01-01
This paper describes the government and industry activities and findings in response to a Presidential directive to study second-generation space transportation systems. Topics discussed include study purpose, mission needs, architecture development, system concepts, and technology recommendations. Interim study findings will also be presented. The study is being jointly managed by DOD and NASA and equally funded by DOD, NASA, and the Strategic Defense Initiative Organization.
Kwan, Patrick; Palmini, André
2017-08-01
There is ongoing concern about whether switching between different antiepileptic drug (AED) products may compromise patient care. We systematically reviewed changes in healthcare utilization following AED switches. We searched the MEDLINE and EMBASE databases (1980-October 2016) for studies that assessed the effect of AED switching in patients with epilepsy on outpatient visits, emergency room visits, hospitalization, and hospital stay duration. A total of 14 articles met the inclusion criteria. All were retrospective studies. Four provided findings for specific AEDs only (lamotrigine, topiramate, phenytoin, and divalproex), 9 presented pooled findings from multiple AEDs, and 1 study provided both specific (lamotrigine, topiramate, oxcarbazepine, and levetiracetam) and pooled findings. Three studies found an association between a switch of topiramate and an increase in healthcare utilization. Another three studies found that a brand-to-generic lamotrigine switch was not associated with an increased risk of emergently treated events (ambulance use, ER visits, or hospitalization). The outcomes of the pooled AED switch studies were inconsistent; 5 studies reported an increased healthcare utilization while 5 studies did not. Studies that have examined the association between an AED switch and a change in healthcare utilization report conflicting findings. Factors that may explain these inconsistent outcomes include inter-study differences in the type of analysis undertaken (pooled vs individual AED data), the covariates used for data adjustment, and the type of switch examined. Future medical claim database studies employing a prospective design are encouraged to address these and other factors in order to enhance inter-study comparability and extrapolation of findings. Copyright © 2017 Elsevier Inc. All rights reserved.
Reeves, B C; Langham, J; Lindsay, K W; Molyneux, A J; Browne, J P; Copley, L; Shaw, D; Gholkar, A; Kirkpatrick, P J
2007-08-01
Concern has been expressed about the applicability of the findings of the International Subarachnoid Aneurysm Trial (ISAT) with respect to the relative effects on outcome of coiling and clipping. It has been suggested that the findings of the National Study of Subarachnoid Haemorrhage may have greater relevance for neurosurgical practice. The objective of this paper was to interpret the findings of these two studies in the context of differences in their study populations, design, execution and analysis. Because of differences in design and analysis, the findings of the two studies are not directly comparable. The ISAT analysed all randomized patients by intention-to-treat, including some who did not undergo a repair, and obtained the primary outcome for 99% of participants. The National Study only analysed participants who underwent clipping or coiling, according to the method of repair, and obtained the primary outcome for 91% of participants. Time to repair was also considered differently in the two studies. The comparison between coiling and clipping was susceptible to confounding in the National Study, but not in the ISAT. The two study populations differed to some extent, but inspection of these differences does not support the view that coiling was applied inappropriately in the National Study. Therefore, there are many reasons why the two studies estimated different sizes of effect. The possibility that there were real, systematic differences in practice between the ISAT and the National Study cannot be ruled out, but such explanations must be seen in the context of other explanations relating to chance, differences in design or analysis, or confounding.
Hall, Brian J.; Hobfoll, Stevan E.; Canetti, Daphna; Johnson, Robert J.; Galea, Sandro
2011-01-01
A study examining the effects of terrorism on a national sample of 1,136 Jewish adults was conducted in Israel via telephone surveys, during the Second Intifada. The relationship between reports of positive changes occurring subsequent to terrorism exposure (i.e., Benefit finding), posttraumatic stress disorder (PTSD) symptom severity, and negative outgroup attitudes toward Palestinian citizens of Israel (PCI) was examined. Benefit finding was related to greater PTSD symptom severity. Further, Benefit finding was related to greater threat perception of PCI and ethnic exclusionism of PCI. Findings were consistent with hypotheses derived from theories of outgroup bias and support the anxiety buffering role of social affiliation posited by terror management theory. This study suggests that benefit finding may be a defensive coping strategy when expressed under the conditions of ongoing terrorism and external threat. PMID:22058603
Wagner, Karla D.; Davidson, Peter J.; Pollini, Robin A.; Strathdee, Steffanie A.; Washburn, Rachel; Palinkas, Lawrence A.
2011-01-01
Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, while conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors’ research on HIV risk among injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a needle/syringe exchange program in Los Angeles, California, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts. PMID:21680168
Problem Finding in Professional Learning Communities: A Learning Study Approach
ERIC Educational Resources Information Center
Tan, Yuen Sze Michelle; Caleon, Imelda Santos
2016-01-01
This study marries collaborative problem solving and learning study in understanding the onset of a cycle of teacher professional development process within school-based professional learning communities (PLCs). It aimed to explore how a PLC carried out collaborative problem finding--a key process involved in collaborative problem solving--that…
Cognitive Synergy in Multimedia Learning
ERIC Educational Resources Information Center
Kim, Daesang; Kim, Dong-Joong; Whang, Woo-Hyung
2013-01-01
The main focus of our study was to investigate multimedia effects that had different results from the findings of existing multimedia learning studies. First, we describe and summarize three experimental studies we conducted from 2006 to 2010. Then we analyze our findings to explore learner characteristics that may impact the cognitive processes…
Variation of Greenness across China's Universities: Motivations and Resources
ERIC Educational Resources Information Center
Zhao, Wanxia; Zou, Yonghua
2018-01-01
Purpose: This study aims to examine the cross-institutional variation in university greenness and analyze its underlying dynamics. Design/methodology/approach: This study constructs a University Greenness Index (UGI) and conducts multivariate regression. Findings: This study finds variation within two dimensions; in the vertical dimension,…
A Long-Range Study of Schizophrenia.
ERIC Educational Resources Information Center
Yahraes, Herbert
The booklet reviews the findings of a Danish longitudinal study involving 200 children (10-20 years old) at risk for schizophrenia and 100 controls. The views of the study's investigator, S. Mednick, regarding the schizophrenic Ss' learned avoidance and heightened physiological response to stress are explained. Other findings discussed include…
Reexamining the Writing Apprehension Measure
ERIC Educational Resources Information Center
Autman, Hamlet; Kelly, Stephanie
2017-01-01
This article contains two measurement development studies on writing apprehension. Study 1 reexamines the validity of the writing apprehension measure based on the finding from prior research that a second false factor was embedded. The findings from Study 1 support the validity of a reduced measure with 6 items versus the original 20-item…
Final Report of Work Done on Contract NONR-4010(03).
ERIC Educational Resources Information Center
Chapanis, Alphonse
The 24 papers listed report the findings of a study funded by the Office of Naval Research. The study concentrated on the sensory and cognitive factors in man-machine interfaces. The papers are categorized into three groups: perception studies, human engineering studies, and methodological papers. A brief summary of the most noteworthy findings in…
ERIC Educational Resources Information Center
Whitla, Dean K.; Pinck, Dan C.
Presented is a summary of findings and recommendations provided by the Harvard Study Committee under the auspices of the Massachusetts Advisory Council on Education. The study is mainly concerned with the four National Science Foundation (NSF) programs: Elementary Science Study, Science Curriculum Improvement Study, Science - A Process Approach,…
ERIC Educational Resources Information Center
Tannebaum, Rory P.
2015-01-01
This meta-ethnography explores the conceptions preservice social studies teachers have toward broad theories of democratic education. The author synthesizes and analyzes empirical research to find a consensus on preservice teachers' conceptions of the social studies. Findings suggest that social studies teacher candidates enter teacher education…
DOT National Transportation Integrated Search
2016-12-01
This research project is a continuation of a previous NITC-funded study. The first study compared the MacArthur Park TOD in Los Angeles to the Fruitvale Village TOD in Oakland. The findings from this new study further validate the key findings from...
Coming to Journalism: A Comparative Case Study of Postgraduate Students in Dublin and Amman
ERIC Educational Resources Information Center
O'Boyle, Neil; Knowlton, Steven
2015-01-01
This article presents findings from a pilot study of postgraduate journalism students in Dublin and Amman. The study compared professional outlooks and social characteristics of students in both contexts and examined institutional settings. The study finds that journalism students in Dublin and Amman have very similar views on the profession,…
ERIC Educational Resources Information Center
Arnold, Ivo J. M.; Rowaan, Wietske
2014-01-01
In this study, the authors investigate the relationships among gender, math skills, motivation, and study success in economics and econometrics. They find that female students have stronger intrinsic motivation, yet lower study confidence than their male counterparts. They also find weak evidence for a gender gap over the entire first-year…
What Are the Symptoms of Primary Ovarian Insufficiency (POI)?
... NICHD Research Information Find a Study More Information Cerebral Palsy Condition Information NICHD Research Information Find a Study ... Hot flashes Night sweats Irritability Poor concentration Decreased sex drive Pain during sex Vaginal dryness 2 , 3 ...
Treatments for Diseases That Cause Infertility
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Symptoms of Uterine Fibroids?
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Symptoms of Bacterial Vaginosis?
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Symptoms of Pelvic Pain?
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Treatments for Autism?
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Symptoms of Vulvodynia?
... Browse AZTopics Browse A-Z Adrenal Gland Disorders Autism Spectrum Disorder (ASD) Down Syndrome Endometriosis Learning Disabilities Menstruation and ... NICHD Research Information Find a Study More Information Autism Spectrum Disorder (ASD) About NICHD Research Information Find a Study ...
What Are the Treatments for Rett Syndrome?
... NICHD Research Information Find a Study More Information Cerebral Palsy Condition Information NICHD Research Information Find a Study ... www.ncbi.nlm.nih.gov/pubmedhealth/PMH0002503 United Cerebral Palsy. (2009). Can Rett syndrome be treated? Retrieved June ...
How Do Health Care Providers Diagnose Precocious Puberty and Delayed Puberty?
... NICHD Research Information Find a Study More Information Pharmacology Condition Information NICHD Research Information Find a Study ... organs and blood flow in real time An MRI (magnetic resonance imaging) scan of the brain and ...
How Do Health Care Providers Diagnose Neural Tube Defects?
... NICHD Research Information Find a Study More Information Pharmacology Condition Information NICHD Research Information Find a Study ... and complications. These tests might include X-ray, magnetic resonance imaging, computed tomography scan to look for spinal defects ...
Loeber, R; Farrington, D P; Stouthamer-Loeber, M; Moffitt, T E; Caspi, A; Lynam, D
2001-12-01
This paper reviews key findings on juvenile mental health problems in boys, psychopathy, and personality traits, obtained in the first 14 years of studies using data from the Pittsburgh Youth Study. This is a study of 3 samples, each of about 500 boys initially randomly drawn from boys in the 1st, 4th, and 7th grades of public schools in Pittsburgh. The boys have been followed regularly, initially each half year, and later at yearly intervals. Currently, the oldest boys are about 25 years old, whereas the youngest boys are about 19. Findings are presented on the prevalence and interrelation of disruptive behaviors, ADHD, and depressed mood. Results concerning risk factors for these outcomes are reviewed. Psychological factors such as psychopathy, impulsivity, and personality are described. The paper closes with findings on service delivery of boys with mental health problems.
How category learning affects object representations: Not all morphspaces stretch alike
Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.
2012-01-01
How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950
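The distinction between the two spaces is easiest to see in how stimuli are generated. A plausible sketch, assuming objects are feature vectors and morphing is linear blending (the original stimuli were images, so this is illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = rng.normal(size=(4, 16))  # four morphparent feature vectors

    def factorial_stimulus(x: float, y: float) -> np.ndarray:
        """Axes are separate morphlines: dimension 1 blends A->B, dimension 2 blends C->D."""
        return 0.5 * (((1 - x) * A + x * B) + ((1 - y) * C + y * D))

    def blended_stimulus(x: float, y: float) -> np.ndarray:
        """Parents sit at the corners; every stimulus blends all four parents."""
        return ((1 - x) * (1 - y) * A + x * (1 - y) * B
                + (1 - x) * y * C + x * y * D)

In the factorial construction, moving along x leaves the C/D contribution untouched, so a category boundary on x picks out a genuine dimension that learning can selectively stretch; in the blended construction every direction mixes all four parents, which may explain the null results the abstract attributes to blended morphspaces.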
Materialism and life satisfaction: the role of religion.
Rakrachakarn, Varapa; Moschis, George P; Ong, Fon Sim; Shannon, Randall
2015-04-01
This study examines the role of religion and religiosity in the relationship between materialism and life satisfaction. The findings suggest that religion may be a key factor in understanding differences among previous studies, the vast majority of which found an inverse relationship. Based on a large-scale study in Malaysia, a country comprising several religious subcultures (mainly Muslims, Buddhists, and Hindus), the findings suggest that the influence of religiosity on materialism and life satisfaction is stronger among Malays than among Chinese and Indians, and that life satisfaction partially mediates the relationship between religiosity and materialism. The paper discusses implications for theory development and further research.